Mirror of https://github.com/github/vitess-gh.git
Merge branch 'master' into aaijazi_single_gitignore
This commit is contained in:
Commit 1fbe779d56

Makefile: 9 changes
@@ -48,21 +48,22 @@ clean:
clean_pkg:
	rm -rf ../../../../pkg Godeps/_workspace/pkg

unit_test:
unit_test: build
	echo $$(date): Running unit tests
	godep go test $(VT_GO_PARALLEL) ./go/...

# Run the code coverage tools, compute aggregate.
# If you want to improve in a directory, run:
# go test -coverprofile=coverage.out && go tool cover -html=coverage.out
unit_test_cover:
unit_test_cover: build
	godep go test $(VT_GO_PARALLEL) -cover ./go/... | misc/parse_cover.py

unit_test_race:
unit_test_race: build
	godep go test $(VT_GO_PARALLEL) -race ./go/...

# Run coverage and upload to coveralls.io.
# Requires the secret COVERALLS_TOKEN env variable to be set.
unit_test_goveralls:
unit_test_goveralls: build
	travis/goveralls.sh

queryservice_test:
@@ -4,10 +4,6 @@ Vitess. Vitess uses backups for two purposes:
* Provide a point-in-time backup of the data on a tablet
* Bootstrap new tablets in an existing shard

**Contents:**

<div id="toc"></div>

## Prerequisites

Vitess stores data backups on a Backup Storage service. Currently,
@@ -1,9 +1,3 @@
**Contents:**

<div id="toc"></div>

## Overview

You can access your Vitess cluster using a variety of clients and
programming languages. Vitess client libraries help your client
application to more easily talk to your storage system to query data.
@@ -1,6 +1,4 @@
Vitess uses the following concepts and terms:

<div id="toc"></div>
This document defines common Vitess concepts and terminology.

## Keyspace

@@ -40,9 +38,9 @@ A keyspace ID can be an unsigned number or a binary character column
(<code>unsigned bigint</code> or <code>varbinary</code> in MySQL tables).
Other data types are not allowed due to ambiguous equality or inequality rules.

<!--
<div style="display:none">
TODO: keyspace ID rules must be solidified once VTGate features are finalized.
-->
</div>

## Shard
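The keyspace ID description above is easiest to see with a concrete value. Below is a minimal Go sketch, not part of this commit, of how a 64-bit unsigned sharding key can be laid out as the binary value stored in a `varbinary` column; it assumes a big-endian encoding so that byte-wise comparison matches numeric order (the `keyspaceIDFromUint64` helper is hypothetical).

``` go
package main

import (
	"encoding/binary"
	"fmt"
)

// keyspaceIDFromUint64 is a hypothetical helper: it encodes an unsigned
// 64-bit value as the 8 bytes that would be stored in a varbinary column.
func keyspaceIDFromUint64(v uint64) []byte {
	kid := make([]byte, 8)
	binary.BigEndian.PutUint64(kid, v)
	return kid
}

func main() {
	// 1<<63 is the midpoint of the uint64 space, i.e. the boundary between
	// the "-80" and "80-" shards used elsewhere in this commit.
	fmt.Printf("%x\n", keyspaceIDFromUint64(1<<63)) // prints 8000000000000000
}
```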
@@ -79,9 +77,9 @@ There are several other tablet types that each serve a specific purpose, includi

Only <code>master</code>, <code>replica</code>, and <code>rdonly</code> tablets are included in the [serving graph](#serving-graph).

<!--
<div style="display:none">
TODO: Add pointer to complete list of types and explain how to update type?
-->
</div>

## Shard graph

@@ -132,11 +130,11 @@ A Vitess implementation has one global instance of the topology service and one
Each local instance contains information specific to the cell where it is located. Specifically, it contains data about tablets in the cell, the serving graph for that cell, and the master-slave map for MySQL instances in that cell.<br><br>
The local topology server must be available for Vitess to serve data.

<!--
<div style="display:none">
To ensure reliability, the topology service has multiple server processes running on different servers. Those servers elect a master and perform quorum writes. In ZooKeeper, for a write to succeed, more than half of the servers must acknowledge it. Thus, a typical ZooKeeper configuration consists of either three or five servers, where two (out of three) or three (out of five) servers must agree for a write operation to succeed.
The instance is the set of servers providing topology services. So, in a Vitess implementation using ZooKeeper, the global and local instances likely consist of three or five servers apiece.
To be reliable, the global instance needs to have server processes spread across all regions and cells. Read-only replicas of the global instance can be maintained in each data center (cell).
-->
</div>

## Cell (Data Center)
@@ -4,8 +4,6 @@ You can build Vitess using either [Docker](#docker-build) or a
If you run into issues or have questions, please post on our
[forum](https://groups.google.com/forum/#!forum/vitess).

<div id="toc"></div>

## Docker Build

To run Vitess in Docker, use an

@@ -247,7 +245,7 @@ lock service. ZooKeeper is included in the Vitess distribution.
step since the environment variables will already be set.

Navigate to the directory where you built Vitess
(**$WORKSPACE/src/github.com/youtube/vitess**) and run the
(**$WORKSPACE/src/<wbr>github.com/<wbr>youtube/vitess**) and run the
following command:

``` sh
@@ -7,10 +7,6 @@ If you already have Kubernetes v0.19+ running in one of the other
you can skip the <code>gcloud</code> steps.
The <code>kubectl</code> steps will apply to any Kubernetes cluster.

**Contents:**

<div id="toc"></div>

## Prerequisites

To complete the exercise in this guide, you must locally install Go 1.3+,

@@ -701,3 +697,17 @@ $ kubectl logs vttablet-100 mysql
Post the logs somewhere and send a link to the [Vitess
mailing list](https://groups.google.com/forum/#!forum/vitess)
to get more help.

### Root Certificates

If you see in the logs a message like this:

```
x509: failed to load system roots and no roots provided
```

It usually means that your Kubernetes nodes are running a host OS
that puts root certificates in a different place than our configuration
expects by default (for example, Fedora). See the comments in the
[etcd controller template](https://github.com/youtube/vitess/blob/master/examples/kubernetes/etcd-controller-template.yaml)
for examples of how to set the right location for your host OS.
@@ -2,10 +2,6 @@ This step-by-step guide explains how to split an unsharded keyspace into two sha

You can use the same general instructions to reshard a sharded keyspace.

**Contents:**

<div id="toc"></div>

## Prerequisites

To complete these steps, you must have:
@@ -1,9 +1,3 @@
**Contents:**

<div id="toc"></div>

## Overview

**Reparenting** is the process of changing a shard's master tablet
from one host to another or changing a slave tablet to have a
different master. Reparenting can be initiated manually
@@ -7,10 +7,6 @@ This document describes the <code>[vtctl](/reference/vtctl.html)</code>
commands that you can use to [review](#reviewing-your-schema) or
[update](#changing-your-schema) your schema in Vitess.

**Contents:**

<div id="toc"></div>

## Reviewing your schema

This section describes the following <code>vtctl</code> commands, which let you look at the schema and validate its consistency across tablets or shards:
@@ -2,10 +2,6 @@ Sharding is a method of horizontally partitioning a database to store
data across two or more database servers. This document explains how
sharding works in Vitess and the types of sharding that Vitess supports.

**Contents:**

<div id="toc"></div>

## Overview

In Vitess, a shard is a partition of a keyspace. In turn, the keyspace
@@ -1,9 +1,6 @@
This guide walks you through the process of sharding an existing unsharded
Vitess [keyspace](http://vitess.io/overview/concepts.html#keyspace) in Kubernetes.

**Contents:**
<div id="toc"></div>

## Prerequisites

We begin by assuming you've completed the
@@ -1,7 +1,3 @@
**Contents:**

<div id="toc"></div>

## Platform support

Vitess runs on either Ubuntu 14.04 (Trusty) or Debian 7.0 (Wheezy).
@@ -1,25 +1,15 @@
**Contents:**

<div id="toc"></div>

## Overview

Vitess is a database solution for scaling MySQL. It's architected to run as
effectively in a public or private cloud architecture as it does on dedicated
hardware. It combines and extends many important MySQL features with the
scalability of a NoSQL database. Vitess has been serving all YouTube database
traffic since 2011.

### Vitess on Kubernetes
## Vitess on Kubernetes

Kubernetes is an open-source orchestration system for Docker containers, and Vitess is the logical storage engine choice for Kubernetes users.

Kubernetes handles scheduling onto nodes in a compute cluster, actively manages workloads on those nodes, and groups containers comprising an application for easy management and discovery. Using Kubernetes, you can easily create and manage a Vitess cluster, out of the box.

<!--
### Vitess on Local Hardware
-->

## Comparisons to other storage options

The following sections compare Vitess to two common alternatives, a vanilla MySQL implementation and a NoSQL implementation.

@@ -131,7 +121,9 @@ Vitess tools and servers are designed to help you whether you start with a compl

The diagram below illustrates Vitess' components:

![Diagram showing Vitess implementation](https://raw.githubusercontent.com/youtube/vitess/master/doc/VitessOverview.png)
<div style="overflow-x: scroll">
<img src="https://raw.githubusercontent.com/youtube/vitess/master/doc/VitessOverview.png" alt="Diagram showing Vitess implementation" width="509" height="322"/>
</div>

### Topology

@@ -151,10 +143,6 @@ To route queries, vtgate considers the sharding scheme, required latency, and th

vttablet performs tasks that attempt to maximize throughput as well as protect MySQL from harmful queries. Its features include connection pooling, query rewriting, and query de-duping. In addition, vttablet executes management tasks that vtctl initiates, and it provides streaming services that are used for [filtered replication](/user-guide/sharding.html#filtered-replication) and data exports.

<!--
It is a newer version of and provides all of the same benefits as vtocc, including connection pooling, query rewriting, and query de-duping. In addition, vttablet executes management tasks that vtctl initiates. It also provides streaming services that are used for [filtered replication](http://vitess.io/user-guide/sharding.html#resharding) and data export.
-->

A lightweight Vitess implementation uses vttablet as a smart connection proxy that serves queries for a single MySQL database. By running vttablet in front of your MySQL database and changing your app to use the Vitess client instead of your MySQL driver, your app benefits from vttablet's connection pooling, query rewriting, and query de-duping features.

### vtctl
@@ -11,7 +11,7 @@ VTCTLD_PORT=${VTCTLD_PORT:-30000}

# Get the ExternalIP of any node.
get_node_ip() {
  $KUBECTL get -o template -t '{{range (index .items 0).status.addresses}}{{if eq .type "ExternalIP"}}{{.address}}{{end}}{{end}}' nodes
  $KUBECTL get -o template -t '{{range (index .items 0).status.addresses}}{{if eq .type "ExternalIP" "LegacyHostIP"}}{{.address}}{{end}}{{end}}' nodes
}

# Try to find vtctld address if not provided.
@@ -18,14 +18,22 @@ spec:
    spec:
      volumes:
        - name: certs
          hostPath: {path: /etc/ssl/certs}
          # Uncomment one of the following lines to configure the location
          # of the root certificates file on your host OS. We need this so
          # we can import it into the container OS.
          # If your host OS is Fedora/RHEL:
          #hostPath: {path: /etc/pki/tls/certs/ca-bundle.crt}
          # If your host OS is Debian/Ubuntu/Gentoo:
          hostPath: {path: /etc/ssl/certs/ca-certificates.crt}
      containers:
        - name: etcd
          image: vitess/etcd:v2.0.13-lite
          volumeMounts:
            - name: certs
              readOnly: true
              mountPath: /etc/ssl/certs
              # Mount root certs from the host OS into the location
              # expected for our container OS (Debian):
              mountPath: /etc/ssl/certs/ca-certificates.crt
          command:
            - bash
            - "-c"
@@ -12,6 +12,7 @@ import (
	"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
	"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
	"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
	"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
	"github.com/youtube/vitess/go/vt/wrangler"
	"github.com/youtube/vitess/go/vt/wrangler/testlib"
	"github.com/youtube/vitess/go/vt/zktopo"

@@ -77,10 +78,11 @@ func (s *streamHealthTabletServer) BroadcastHealth(terTimestamp int64, stats *pb
}

func TestTabletData(t *testing.T) {
	db := fakesqldb.Register()
	ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
	wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)

	tablet1 := testlib.NewFakeTablet(t, wr, "cell1", 0, pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "-80"))
	tablet1 := testlib.NewFakeTablet(t, wr, "cell1", 0, pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
	tablet1.StartActionLoop(t, wr)
	defer tablet1.StopActionLoop(t)
	shsq := newStreamHealthTabletServer(t)
@@ -10,6 +10,7 @@ import (

	log "github.com/golang/glog"
	"github.com/youtube/vitess/go/exit"
	"github.com/youtube/vitess/go/vt/discovery"
	"github.com/youtube/vitess/go/vt/servenv"
	"github.com/youtube/vitess/go/vt/topo"
	"github.com/youtube/vitess/go/vt/vtgate"

@@ -26,10 +27,12 @@ var (
	connTimeoutPerConn = flag.Duration("conn-timeout-per-conn", 1500*time.Millisecond, "vttablet connection timeout (per connection)")
	connLife = flag.Duration("conn-life", 365*24*time.Hour, "average life of vttablet connections")
	maxInFlight = flag.Int("max-in-flight", 0, "maximum number of calls to allow simultaneously")
	testGateway = flag.String("test_gateway", "", "additional gateway to test health check module")
)

var resilientSrvTopoServer *vtgate.ResilientSrvTopoServer
var topoReader *TopoReader
var healthCheck discovery.HealthCheck

var initFakeZK func()

@@ -81,6 +84,8 @@ startServer:
	topoReader = NewTopoReader(resilientSrvTopoServer)
	servenv.Register("toporeader", topoReader)

	vtgate.Init(resilientSrvTopoServer, schema, *cell, *retryDelay, *retryCount, *connTimeoutTotal, *connTimeoutPerConn, *connLife, *maxInFlight)
	healthCheck = discovery.NewHealthCheck(*connTimeoutTotal, *retryDelay)

	vtgate.Init(healthCheck, ts, resilientSrvTopoServer, schema, *cell, *retryDelay, *retryCount, *connTimeoutTotal, *connTimeoutPerConn, *connLife, *maxInFlight, *testGateway)
	servenv.RunDefault()
}
@@ -16,8 +16,7 @@ import (
	"github.com/youtube/vitess/go/vt/vtgate/bsonp3vtgateservice"
)

// TestGoRPCGoClient tests the go client using goRPC
func TestGoRPCGoClient(t *testing.T) {
func TestBSONRPCP3GoClient(t *testing.T) {
	service := services.CreateServices()

	// listen on a random port
@@ -0,0 +1,43 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package goclienttest

import (
	"encoding/json"
	"strings"
	"testing"

	"github.com/youtube/vitess/go/cmd/vtgateclienttest/services"
	"github.com/youtube/vitess/go/vt/callerid"
	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"
	"golang.org/x/net/context"

	pb "github.com/youtube/vitess/go/vt/proto/topodata"
)

// testCallerID adds a caller ID to a context, and makes sure the server
// gets it.
func testCallerID(t *testing.T, conn *vtgateconn.VTGateConn) {
	t.Log("testCallerID")
	ctx := context.Background()
	callerID := callerid.NewEffectiveCallerID("test_principal", "test_component", "test_subcomponent")
	ctx = callerid.NewContext(ctx, callerID, nil)

	data, err := json.Marshal(callerID)
	if err != nil {
		t.Errorf("failed to marshal callerid: %v", err)
		return
	}
	query := services.CallerIDPrefix + string(data)

	// test Execute forwards the callerID
	if _, err := conn.Execute(ctx, query, nil, pb.TabletType_MASTER); err != nil {
		if !strings.Contains(err.Error(), "SUCCESS: ") {
			t.Errorf("failed to pass callerid: %v", err)
		}
	}

	// FIXME(alainjobart) add all function calls
}
@@ -0,0 +1,384 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package goclienttest

import (
	"testing"

	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/callerid"
	"github.com/youtube/vitess/go/vt/key"
	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"

	mproto "github.com/youtube/vitess/go/mysql/proto"
	tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
	gproto "github.com/youtube/vitess/go/vt/vtgate/proto"

	pbt "github.com/youtube/vitess/go/vt/proto/topodata"
	pbg "github.com/youtube/vitess/go/vt/proto/vtgate"
)

var (
	echoPrefix = "echo://"

	query = "test query"
	keyspace = "test_keyspace"

	shards = []string{"-80", "80-"}
	shardsEcho = "[-80 80-]"

	keyspaceIDs = [][]byte{
		[]byte{1, 2, 3, 4},
		[]byte{5, 6, 7, 8},
	}
	keyspaceIDsEcho = "[[1 2 3 4] [5 6 7 8]]"
	keyspaceIDsEchoOld = "[01020304 05060708]"

	keyRanges = []*pbt.KeyRange{
		&pbt.KeyRange{Start: []byte{1, 2, 3, 4}, End: []byte{5, 6, 7, 8}},
	}
	keyRangesEcho = "[start:\"\\001\\002\\003\\004\" end:\"\\005\\006\\007\\010\" ]"

	entityKeyspaceIDs = []*pbg.ExecuteEntityIdsRequest_EntityId{
		&pbg.ExecuteEntityIdsRequest_EntityId{
			KeyspaceId: []byte{1, 2, 3},
			XidType: pbg.ExecuteEntityIdsRequest_EntityId_TYPE_INT,
			XidInt: 123,
		},
		&pbg.ExecuteEntityIdsRequest_EntityId{
			KeyspaceId: []byte{4, 5, 6},
			XidType: pbg.ExecuteEntityIdsRequest_EntityId_TYPE_FLOAT,
			XidFloat: 2.0,
		},
		&pbg.ExecuteEntityIdsRequest_EntityId{
			KeyspaceId: []byte{7, 8, 9},
			XidType: pbg.ExecuteEntityIdsRequest_EntityId_TYPE_BYTES,
			XidBytes: []byte{1, 2, 3},
		},
	}
	entityKeyspaceIDsEcho = "[xid_type:TYPE_INT xid_int:123 keyspace_id:\"\\001\\002\\003\" xid_type:TYPE_FLOAT xid_float:2 keyspace_id:\"\\004\\005\\006\" xid_type:TYPE_BYTES xid_bytes:\"\\001\\002\\003\" keyspace_id:\"\\007\\010\\t\" ]"

	tabletType = pbt.TabletType_REPLICA
	tabletTypeEcho = pbt.TabletType_name[int32(tabletType)]

	bindVars = map[string]interface{}{
		"int": 123,
		"float": 2.0,
		"bytes": []byte{1, 2, 3},
	}
	bindVarsEcho = "map[bytes:[1 2 3] float:2 int:123]"

	sessionEcho = "InTransaction: true, ShardSession: []"

	callerID = callerid.NewEffectiveCallerID("test_principal", "test_component", "test_subcomponent")
	callerIDEcho = "principal:\"test_principal\" component:\"test_component\" subcomponent:\"test_subcomponent\" "
)

// testEcho exercises the test cases provided by the "echo" service.
func testEcho(t *testing.T, conn *vtgateconn.VTGateConn) {
	testEchoExecute(t, conn)
	testEchoStreamExecute(t, conn)
	testEchoTransactionExecute(t, conn)
	testEchoSplitQuery(t, conn)
}

func testEchoExecute(t *testing.T, conn *vtgateconn.VTGateConn) {
	var qr *mproto.QueryResult
	var err error

	ctx := callerid.NewContext(context.Background(), callerID, nil)

	qr, err = conn.Execute(ctx, echoPrefix+query, bindVars, tabletType)
	checkEcho(t, "Execute", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qr, err = conn.ExecuteShards(ctx, echoPrefix+query, keyspace, shards, bindVars, tabletType)
	checkEcho(t, "ExecuteShards", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"shards": shardsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qr, err = conn.ExecuteKeyspaceIds(ctx, echoPrefix+query, keyspace, keyspaceIDs, bindVars, tabletType)
	checkEcho(t, "ExecuteKeyspaceIds", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyspaceIds": keyspaceIDsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qr, err = conn.ExecuteKeyRanges(ctx, echoPrefix+query, keyspace, keyRanges, bindVars, tabletType)
	checkEcho(t, "ExecuteKeyRanges", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyRanges": keyRangesEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qr, err = conn.ExecuteEntityIds(ctx, echoPrefix+query, keyspace, "column1", entityKeyspaceIDs, bindVars, tabletType)
	checkEcho(t, "ExecuteEntityIds", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"entityColumnName": "column1",
		"entityIds": entityKeyspaceIDsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	var qrs []mproto.QueryResult

	qrs, err = conn.ExecuteBatchShards(ctx, []gproto.BoundShardQuery{
		gproto.BoundShardQuery{
			Sql: echoPrefix + query,
			Keyspace: keyspace,
			Shards: shards,
			BindVariables: bindVars,
		},
	}, tabletType, true)
	checkEcho(t, "ExecuteBatchShards", &qrs[0], err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"shards": shardsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"asTransaction": "true",
	})

	qrs, err = conn.ExecuteBatchKeyspaceIds(ctx, []gproto.BoundKeyspaceIdQuery{
		gproto.BoundKeyspaceIdQuery{
			Sql: echoPrefix + query,
			Keyspace: keyspace,
			KeyspaceIds: key.ProtoToKeyspaceIds(keyspaceIDs),
			BindVariables: bindVars,
		},
	}, tabletType, true)
	checkEcho(t, "ExecuteBatchKeyspaceIds", &qrs[0], err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyspaceIds": keyspaceIDsEchoOld,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"asTransaction": "true",
	})
}

func testEchoStreamExecute(t *testing.T, conn *vtgateconn.VTGateConn) {
	var qrc <-chan *mproto.QueryResult
	var err error

	ctx := callerid.NewContext(context.Background(), callerID, nil)

	qrc, _, err = conn.StreamExecute(ctx, echoPrefix+query, bindVars, tabletType)
	checkEcho(t, "StreamExecute", <-qrc, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qrc, _, err = conn.StreamExecuteShards(ctx, echoPrefix+query, keyspace, shards, bindVars, tabletType)
	checkEcho(t, "StreamExecuteShards", <-qrc, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"shards": shardsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qrc, _, err = conn.StreamExecuteKeyspaceIds(ctx, echoPrefix+query, keyspace, keyspaceIDs, bindVars, tabletType)
	checkEcho(t, "StreamExecuteKeyspaceIds", <-qrc, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyspaceIds": keyspaceIDsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})

	qrc, _, err = conn.StreamExecuteKeyRanges(ctx, echoPrefix+query, keyspace, keyRanges, bindVars, tabletType)
	checkEcho(t, "StreamExecuteKeyRanges", <-qrc, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyRanges": keyRangesEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
	})
}

func testEchoTransactionExecute(t *testing.T, conn *vtgateconn.VTGateConn) {
	var qr *mproto.QueryResult
	var err error

	ctx := callerid.NewContext(context.Background(), callerID, nil)

	tx, err := conn.Begin(ctx)
	if err != nil {
		t.Fatalf("Begin error: %v", err)
	}

	qr, err = tx.Execute(ctx, echoPrefix+query, bindVars, tabletType, true)
	checkEcho(t, "Execute", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"notInTransaction": "true",
	})

	qr, err = tx.ExecuteShards(ctx, echoPrefix+query, keyspace, shards, bindVars, tabletType, true)
	checkEcho(t, "ExecuteShards", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"shards": shardsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"notInTransaction": "true",
	})

	qr, err = tx.ExecuteKeyspaceIds(ctx, echoPrefix+query, keyspace, keyspaceIDs, bindVars, tabletType, true)
	checkEcho(t, "ExecuteKeyspaceIds", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyspaceIds": keyspaceIDsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"notInTransaction": "true",
	})

	qr, err = tx.ExecuteKeyRanges(ctx, echoPrefix+query, keyspace, keyRanges, bindVars, tabletType, true)
	checkEcho(t, "ExecuteKeyRanges", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyRanges": keyRangesEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"notInTransaction": "true",
	})

	qr, err = tx.ExecuteEntityIds(ctx, echoPrefix+query, keyspace, "column1", entityKeyspaceIDs, bindVars, tabletType, true)
	checkEcho(t, "ExecuteEntityIds", qr, err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"entityColumnName": "column1",
		"entityIds": entityKeyspaceIDsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"notInTransaction": "true",
	})

	if err := tx.Rollback(ctx); err != nil {
		t.Fatalf("Rollback error: %v", err)
	}
	tx, err = conn.Begin(ctx)
	if err != nil {
		t.Fatalf("Begin (again) error: %v", err)
	}

	var qrs []mproto.QueryResult

	qrs, err = tx.ExecuteBatchShards(ctx, []gproto.BoundShardQuery{
		gproto.BoundShardQuery{
			Sql: echoPrefix + query,
			Keyspace: keyspace,
			Shards: shards,
			BindVariables: bindVars,
		},
	}, tabletType, true)
	checkEcho(t, "ExecuteBatchShards", &qrs[0], err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"shards": shardsEcho,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"asTransaction": "true",
	})

	qrs, err = tx.ExecuteBatchKeyspaceIds(ctx, []gproto.BoundKeyspaceIdQuery{
		gproto.BoundKeyspaceIdQuery{
			Sql: echoPrefix + query,
			Keyspace: keyspace,
			KeyspaceIds: key.ProtoToKeyspaceIds(keyspaceIDs),
			BindVariables: bindVars,
		},
	}, tabletType, true)
	checkEcho(t, "ExecuteBatchKeyspaceIds", &qrs[0], err, map[string]string{
		"callerId": callerIDEcho,
		"query": echoPrefix + query,
		"keyspace": keyspace,
		"keyspaceIds": keyspaceIDsEchoOld,
		"bindVars": bindVarsEcho,
		"tabletType": tabletTypeEcho,
		"session": sessionEcho,
		"asTransaction": "true",
	})
}

func testEchoSplitQuery(t *testing.T, conn *vtgateconn.VTGateConn) {
	want := &pbg.SplitQueryResponse_Part{
		Query: tproto.BoundQueryToProto3(echoPrefix+query+":split_column:123", bindVars),
		KeyRangePart: &pbg.SplitQueryResponse_KeyRangePart{Keyspace: keyspace},
	}
	got, err := conn.SplitQuery(context.Background(), keyspace, echoPrefix+query, bindVars, "split_column", 123)
	if err != nil {
		t.Fatalf("SplitQuery error: %v", err)
	}
	// For some reason, proto.Equal() is calling them unequal even though no diffs
	// are found.
	gotstr, wantstr := got[0].String(), want.String()
	if gotstr != wantstr {
		t.Errorf("SplitQuery() = %v, want %v", gotstr, wantstr)
	}
}

// getEcho extracts the echoed field values from a query result.
func getEcho(qr *mproto.QueryResult) map[string]string {
	values := map[string]string{}
	for i, field := range qr.Fields {
		values[field.Name] = qr.Rows[0][i].String()
	}
	return values
}

// checkEcho verifies that the values present in 'want' are equal to those in
// 'got'. Note that extra values in 'got' are fine.
func checkEcho(t *testing.T, name string, qr *mproto.QueryResult, err error, want map[string]string) {
	if err != nil {
		t.Fatalf("%v error: %v", name, err)
	}
	got := getEcho(qr)
	for k, v := range want {
		if got[k] != v {
			t.Errorf("%v: %v = %q, want %q", name, k, got[k], v)
		}
	}
}
@@ -0,0 +1,16 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package goclienttest

import (
	"testing"

	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"
)

// testErrors exercises the test cases provided by the "errors" service.
func testErrors(t *testing.T, conn *vtgateconn.VTGateConn) {

}
@@ -5,16 +5,11 @@
package goclienttest

import (
	"encoding/json"
	"testing"
	"time"

	"github.com/youtube/vitess/go/cmd/vtgateclienttest/services"
	"github.com/youtube/vitess/go/vt/callerid"
	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"
	"golang.org/x/net/context"

	pb "github.com/youtube/vitess/go/vt/proto/topodata"
)

// This file contains the reference test for clients. It tests

@@ -25,29 +20,6 @@ import (
//
// TODO(team) add more unit test cases.

// testCallerID adds a caller ID to a context, and makes sure the server
// gets it.
func testCallerID(t *testing.T, conn *vtgateconn.VTGateConn) {
	t.Log("testCallerID")
	ctx := context.Background()
	callerID := callerid.NewEffectiveCallerID("test_principal", "test_component", "test_subcomponent")
	ctx = callerid.NewContext(ctx, callerID, nil)

	data, err := json.Marshal(callerID)
	if err != nil {
		t.Errorf("failed to marshal callerid: %v", err)
		return
	}
	query := services.CallerIDPrefix + string(data)

	// test Execute forwards the callerID
	if _, err := conn.Execute(ctx, query, nil, pb.TabletType_MASTER); err != nil {
		t.Errorf("failed to pass callerid: %v", err)
	}

	// FIXME(alainjobart) add all function calls
}

// TestGoClient runs the test suite for the provided client
func TestGoClient(t *testing.T, protocol, addr string) {
	// Create a client connecting to the server

@@ -58,6 +30,9 @@ func TestGoClient(t *testing.T, protocol, addr string) {
	}

	testCallerID(t, conn)
	testEcho(t, conn)
	testErrors(t, conn)
	testSuccess(t, conn)

	// and clean up
	conn.Close()
@@ -0,0 +1,56 @@
// Copyright 2015 Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package goclienttest

import (
	"testing"

	"golang.org/x/net/context"

	"github.com/golang/protobuf/proto"
	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"

	pbt "github.com/youtube/vitess/go/vt/proto/topodata"
)

// testSuccess exercises the test cases provided by the "success" service.
func testSuccess(t *testing.T, conn *vtgateconn.VTGateConn) {
	testGetSrvKeyspace(t, conn)
}

func testGetSrvKeyspace(t *testing.T, conn *vtgateconn.VTGateConn) {
	want := &pbt.SrvKeyspace{
		Partitions: []*pbt.SrvKeyspace_KeyspacePartition{
			&pbt.SrvKeyspace_KeyspacePartition{
				ServedType: pbt.TabletType_REPLICA,
				ShardReferences: []*pbt.ShardReference{
					&pbt.ShardReference{
						Name: "shard0",
						KeyRange: &pbt.KeyRange{
							Start: []byte{0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
							End: []byte{0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
						},
					},
				},
			},
		},
		ShardingColumnName: "sharding_column_name",
		ShardingColumnType: pbt.KeyspaceIdType_UINT64,
		ServedFrom: []*pbt.SrvKeyspace_ServedFrom{
			&pbt.SrvKeyspace_ServedFrom{
				TabletType: pbt.TabletType_MASTER,
				Keyspace: "other_keyspace",
			},
		},
		SplitShardCount: 128,
	}
	got, err := conn.GetSrvKeyspace(context.Background(), "big")
	if err != nil {
		t.Fatalf("GetSrvKeyspace error: %v", err)
	}
	if !proto.Equal(got, want) {
		t.Errorf("GetSrvKeyspace() = %v, want %v", proto.MarshalTextString(got), proto.MarshalTextString(want))
	}
}
@@ -37,11 +37,10 @@ func newCallerIDClient(fallback vtgateservice.VTGateService) *callerIDClient {
	}
}

// checkCallerID will see if this module is handling the request,
// and if it is, check the callerID from the context.
// Returns false if the query is not for this module.
// Returns true and the the error to return with the call
// if this module is handling the request.
// checkCallerID will see if this module is handling the request, and
// if it is, check the callerID from the context. Returns false if
// the query is not for this module. Returns true and the error to
// return with the call if this module is handling the request.
func (c *callerIDClient) checkCallerID(ctx context.Context, received string) (bool, error) {
	if !strings.HasPrefix(received, CallerIDPrefix) {
		return false, nil

@@ -62,7 +61,7 @@ func (c *callerIDClient) checkCallerID(ctx context.Context, received string) (bo
		return true, fmt.Errorf("callerid mismatch, got %v expected %v", receivedCallerID, expectedCallerID)
	}

	return true, nil
	return true, fmt.Errorf("SUCCESS: callerid matches")
}

func (c *callerIDClient) Execute(ctx context.Context, sql string, bindVariables map[string]interface{}, tabletType pb.TabletType, session *proto.Session, notInTransaction bool, reply *proto.QueryResult) error {

@@ -146,9 +145,9 @@ func (c *callerIDClient) StreamExecuteKeyRanges(ctx context.Context, sql string,
	return c.fallbackClient.StreamExecuteKeyRanges(ctx, sql, bindVariables, keyspace, keyRanges, tabletType, sendReply)
}

func (c *callerIDClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
func (c *callerIDClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	if ok, err := c.checkCallerID(ctx, sql); ok {
		return err
		return nil, err
	}
	return c.fallbackClient.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount, reply)
	return c.fallbackClient.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount)
}
@@ -15,10 +15,12 @@ import (

	"github.com/youtube/vitess/go/sqltypes"
	"github.com/youtube/vitess/go/vt/callerid"
	tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
	"github.com/youtube/vitess/go/vt/vtgate/proto"
	"github.com/youtube/vitess/go/vt/vtgate/vtgateservice"

	mproto "github.com/youtube/vitess/go/mysql/proto"
	pbq "github.com/youtube/vitess/go/vt/proto/query"
	pb "github.com/youtube/vitess/go/vt/proto/topodata"
	pbg "github.com/youtube/vitess/go/vt/proto/vtgate"
)

@@ -268,16 +270,19 @@ func (c *echoClient) StreamExecuteKeyRanges(ctx context.Context, sql string, bin
	return c.fallbackClient.StreamExecuteKeyRanges(ctx, sql, bindVariables, keyspace, keyRanges, tabletType, sendReply)
}

func (c *echoClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
func (c *echoClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	if strings.HasPrefix(sql, EchoPrefix) {
		reply.Splits = append(reply.Splits, proto.SplitQueryPart{
			Query: &proto.KeyRangeQuery{
				Sql: fmt.Sprintf("%v:%v:%v", sql, splitColumn, splitCount),
				BindVariables: bindVariables,
				Keyspace: keyspace,
		return []*pbg.SplitQueryResponse_Part{
			&pbg.SplitQueryResponse_Part{
				Query: &pbq.BoundQuery{
					Sql: fmt.Sprintf("%v:%v:%v", sql, splitColumn, splitCount),
					BindVariables: tproto.BindVariablesToProto3(bindVariables),
				},
				KeyRangePart: &pbg.SplitQueryResponse_KeyRangePart{
					Keyspace: keyspace,
				},
			},
		})
		return nil
		}, nil
	}
	return c.fallback.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount, reply)
	return c.fallback.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount)
}
@@ -82,8 +82,8 @@ func (c fallbackClient) Rollback(ctx context.Context, inSession *proto.Session)
	return c.fallback.Rollback(ctx, inSession)
}

func (c fallbackClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
	return c.fallback.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount, reply)
func (c fallbackClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	return c.fallback.SplitQuery(ctx, sql, keyspace, bindVariables, splitColumn, splitCount)
}

func (c fallbackClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*pb.SrvKeyspace, error) {
@@ -87,8 +87,8 @@ func (c *terminalClient) Rollback(ctx context.Context, inSession *proto.Session)
	return errTerminal
}

func (c *terminalClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
	return errTerminal
func (c *terminalClient) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	return nil, errTerminal
}

func (c *terminalClient) GetSrvKeyspace(ctx context.Context, keyspace string) (*pb.SrvKeyspace, error) {
@@ -34,7 +34,7 @@ const (
func init() {
	// This needs to be called before threads begin to spawn.
	C.vt_library_init()
	sqldb.Register("mysql", Connect)
	sqldb.RegisterDefault(Connect)
}

const (
@@ -17,13 +17,13 @@ import (
// using given ConnParams.
type NewConnFunc func(params ConnParams) (Conn, error)

// conns stores all supported db connection.
var conns = make(map[string]NewConnFunc)
var (
	defaultConn NewConnFunc

var mu sync.Mutex

// DefaultDB decides the default db connection.
var DefaultDB string
	// mu protects conns.
	mu sync.Mutex
	conns = make(map[string]NewConnFunc)
)

// Conn defines the behavior for the low level db connection
type Conn interface {

@@ -61,7 +61,16 @@ type Conn interface {
	SetCharset(cs proto.Charset) error
}

// Register a db connection.
// RegisterDefault registers the default connection function.
// Only one default can be registered.
func RegisterDefault(fn NewConnFunc) {
	if defaultConn != nil {
		panic("default connection initialized more than once")
	}
	defaultConn = fn
}

// Register registers a db connection.
func Register(name string, fn NewConnFunc) {
	mu.Lock()
	defer mu.Unlock()

@@ -73,20 +82,15 @@ func Register(name string, fn NewConnFunc) {

// Connect returns a sqldb.Conn using the default connection creation function.
func Connect(params ConnParams) (Conn, error) {
	// Use a lock-free fast path for default.
	if params.Engine == "" {
		return defaultConn(params)
	}
	mu.Lock()
	defer mu.Unlock()
	if DefaultDB == "" {
		if len(conns) == 1 {
			for _, fn := range conns {
				return fn(params)
			}
		}
		panic("there are more than one conn func " +
			"registered but no default db has been given.")
	}
	fn, ok := conns[DefaultDB]
	fn, ok := conns[params.Engine]
	if !ok {
		panic(fmt.Sprintf("connection function for given default db: %s is not found.", DefaultDB))
		panic(fmt.Sprintf("connection function not found for engine: %s", params.Engine))
	}
	return fn(params)
}
@@ -7,6 +7,7 @@ package sqldb

// ConnParams contains all the parameters to use to connect to mysql
type ConnParams struct {
	Engine string `json:"engine"`
	Host string `json:"host"`
	Port int `json:"port"`
	Uname string `json:"uname"`
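The sqldb changes above replace the `DefaultDB` global with an explicit default connection function plus per-engine dispatch on the new `ConnParams.Engine` field. Below is a minimal sketch of how a driver might plug into the new API; the `newFakeConn` factory is hypothetical and for illustration only (the real driver in this commit is the mysql package, whose `init` registers itself the same way).

``` go
package main

import (
	"fmt"

	"github.com/youtube/vitess/go/sqldb"
)

// newFakeConn is a stand-in NewConnFunc; a real driver would return a
// working sqldb.Conn instead of an error.
func newFakeConn(params sqldb.ConnParams) (sqldb.Conn, error) {
	return nil, fmt.Errorf("fake driver %q: not implemented", params.Engine)
}

func main() {
	// A driver registers itself under a name and, if it is the primary
	// implementation, also as the default (mirroring mysql.go's init).
	sqldb.Register("fake", newFakeConn)
	sqldb.RegisterDefault(newFakeConn)

	// Connect dispatches on ConnParams.Engine: an empty Engine uses the
	// registered default, a non-empty Engine selects the named driver.
	if _, err := sqldb.Connect(sqldb.ConnParams{Engine: "fake"}); err != nil {
		fmt.Println(err)
	}
}
```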
@@ -142,8 +142,8 @@ func (f *fakeVTGateService) Rollback(ctx context.Context, inSession *proto.Sessi
}

// SplitQuery is part of the VTGateService interface
func (f *fakeVTGateService) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
	return nil
func (f *fakeVTGateService) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	return nil, nil
}

// GetSrvKeyspace is part of the VTGateService interface
@@ -10,6 +10,10 @@ type fakeHealthCheck struct {
	endPoints map[string]*pbt.EndPoint
}

// SetListener sets the listener for healthcheck updates.
func (fhc *fakeHealthCheck) SetListener(listener HealthCheckStatsListener) {
}

// AddEndPoint adds the endpoint, and starts health check.
func (fhc *fakeHealthCheck) AddEndPoint(cell string, endPoint *pbt.EndPoint) {
	key := endPointToMapKey(endPoint)

@@ -31,3 +35,8 @@ func (fhc *fakeHealthCheck) GetEndPointStatsFromKeyspaceShard(keyspace, shard st
func (fhc *fakeHealthCheck) GetEndPointStatsFromTarget(keyspace, shard string, tabletType pbt.TabletType) []*EndPointStats {
	return nil
}

// CacheStatus returns a displayable version of the cache.
func (fhc *fakeHealthCheck) CacheStatus() EndPointsCacheStatusList {
	return nil
}
@@ -2,12 +2,14 @@ package discovery

import (
	"fmt"
	"html/template"
	"sort"
	"strings"
	"sync"
	"time"

	log "github.com/golang/glog"
	"github.com/youtube/vitess/go/stats"
	pbq "github.com/youtube/vitess/go/vt/proto/query"
	pbt "github.com/youtube/vitess/go/vt/proto/topodata"
	"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"

@@ -15,6 +17,16 @@ import (
	"golang.org/x/net/context"
)

var (
	hcConnCounters *stats.MultiCounters
	hcErrorCounters *stats.MultiCounters
)

func init() {
	hcConnCounters = stats.NewMultiCounters("HealthcheckConnections", []string{"keyspace", "shardname", "tablettype"})
	hcErrorCounters = stats.NewMultiCounters("HealthcheckErrors", []string{"keyspace", "shardname", "tablettype"})
}

// HealthCheckStatsListener is the listener to receive health check stats update.
type HealthCheckStatsListener interface {
	StatsUpdate(endPoint *pbt.EndPoint, cell string, target *pbq.Target, tabletExternallyReparentedTimestamp int64, stats *pbq.RealtimeStats)

@@ -31,6 +43,8 @@ type EndPointStats struct {

// HealthCheck defines the interface of health checking module.
type HealthCheck interface {
	// SetListener sets the listener for healthcheck updates. It should not block.
	SetListener(listener HealthCheckStatsListener)
	// AddEndPoint adds the endpoint, and starts health check.
	AddEndPoint(cell string, endPoint *pbt.EndPoint)
	// RemoveEndPoint removes the endpoint, and stops the health check.

@@ -39,14 +53,15 @@ type HealthCheck interface {
	GetEndPointStatsFromKeyspaceShard(keyspace, shard string) []*EndPointStats
	// GetEndPointStatsFromTarget returns all EndPointStats for the given target.
	GetEndPointStatsFromTarget(keyspace, shard string, tabletType pbt.TabletType) []*EndPointStats
	// CacheStatus returns a displayable version of the cache.
	CacheStatus() EndPointsCacheStatusList
}

// NewHealthCheck creates a new HealthCheck object.
func NewHealthCheck(listener HealthCheckStatsListener, connTimeout time.Duration, retryDelay time.Duration) HealthCheck {
func NewHealthCheck(connTimeout time.Duration, retryDelay time.Duration) HealthCheck {
	return &HealthCheckImpl{
		addrToConns: make(map[string]*healthCheckConn),
		targetToEPs: make(map[string]map[string]map[pbt.TabletType][]*pbt.EndPoint),
		listener: listener,
		connTimeout: connTimeout,
		retryDelay: retryDelay,
	}

@@ -98,6 +113,9 @@ func (hc *HealthCheckImpl) checkConn(cell string, endPoint *pbt.EndPoint) {
				return
			default:
			}
			if hcc.target != nil {
				hcErrorCounters.Add([]string{hcc.target.Keyspace, hcc.target.Shard, strings.ToLower(hcc.target.TabletType.String())}, 1)
			}
			log.Errorf("cannot connect to %+v: %v", endPoint, err)
			time.Sleep(hc.retryDelay)
			continue

@@ -114,6 +132,9 @@ func (hc *HealthCheckImpl) checkConn(cell string, endPoint *pbt.EndPoint) {
				return
			default:
			}
			if hcc.target != nil {
				hcErrorCounters.Add([]string{hcc.target.Keyspace, hcc.target.Shard, strings.ToLower(hcc.target.TabletType.String())}, 1)
			}
			log.Errorf("error when streaming tablet health from %+v: %v", endPoint, err)
			time.Sleep(hc.retryDelay)
			break

@@ -182,7 +203,9 @@ func (hcc *healthCheckConn) processResponse(ctx context.Context, hc *HealthCheck
			hcc.mu.Unlock()
		}
		// notify downstream for tablettype and realtimestats change
		hc.listener.StatsUpdate(endPoint, hcc.cell, hcc.target, hcc.tabletExternallyReparentedTimestamp, hcc.stats)
		if hc.listener != nil {
			hc.listener.StatsUpdate(endPoint, hcc.cell, hcc.target, hcc.tabletExternallyReparentedTimestamp, hcc.stats)
		}
		return nil
	}
}

@@ -203,6 +226,11 @@ func (hc *HealthCheckImpl) deleteConn(endPoint *pbt.EndPoint) {
	}
}

// SetListener sets the listener for healthcheck updates. It should not block.
func (hc *HealthCheckImpl) SetListener(listener HealthCheckStatsListener) {
	hc.listener = listener
}

// AddEndPoint adds the endpoint, and starts health check.
// It does not block.
func (hc *HealthCheckImpl) AddEndPoint(cell string, endPoint *pbt.EndPoint) {

@@ -313,6 +341,7 @@ func (hc *HealthCheckImpl) addEndPointToTargetProtected(target *pbq.Target, endP
		}
	}
	ttMap[target.TabletType] = append(epList, endPoint)
	hcConnCounters.Add([]string{target.Keyspace, target.Shard, strings.ToLower(target.TabletType.String())}, 1)
}

// deleteEndPointFromTargetProtected deletes the endpoint for the given target.

@@ -334,11 +363,90 @@ func (hc *HealthCheckImpl) deleteEndPointFromTargetProtected(target *pbq.Target,
		if topo.EndPointEquality(ep, endPoint) {
			epList = append(epList[:i], epList[i+1:]...)
			ttMap[target.TabletType] = epList
			hcConnCounters.Add([]string{target.Keyspace, target.Shard, strings.ToLower(target.TabletType.String())}, -1)
			return
		}
	}
}

// EndPointsCacheStatus is the current endpoints for a cell/target.
// TODO: change this to reflect the e2e information about the endpoints.
type EndPointsCacheStatus struct {
	Cell string
	Target *pbq.Target
	EndPointsStats []*EndPointStats
}

// StatusAsHTML returns an HTML version of the status.
func (epcs *EndPointsCacheStatus) StatusAsHTML() template.HTML {
	epLinks := make([]string, 0, 1)
	for _, eps := range epcs.EndPointsStats {
		vtPort := eps.EndPoint.PortMap["vt"]
		epLinks = append(epLinks, fmt.Sprintf(`<a href="http://%v:%d">%v:%d</a>`, eps.EndPoint.Host, vtPort, eps.EndPoint.Host, vtPort))
	}
	return template.HTML(strings.Join(epLinks, " "))
}

// EndPointsCacheStatusList is used for sorting.
type EndPointsCacheStatusList []*EndPointsCacheStatus

// Len is part of sort.Interface.
func (epcsl EndPointsCacheStatusList) Len() int {
	return len(epcsl)
}

// Less is part of sort.Interface
func (epcsl EndPointsCacheStatusList) Less(i, j int) bool {
	return epcsl[i].Cell+"."+epcsl[i].Target.Keyspace+"."+epcsl[i].Target.Shard+"."+string(epcsl[i].Target.TabletType) <
		epcsl[j].Cell+"."+epcsl[j].Target.Keyspace+"."+epcsl[j].Target.Shard+"."+string(epcsl[j].Target.TabletType)
}

// Swap is part of sort.Interface
func (epcsl EndPointsCacheStatusList) Swap(i, j int) {
	epcsl[i], epcsl[j] = epcsl[j], epcsl[i]
}

// CacheStatus returns a displayable version of the cache.
func (hc *HealthCheckImpl) CacheStatus() EndPointsCacheStatusList {
	epcsl := make(EndPointsCacheStatusList, 0, 1)
	hc.mu.RLock()
	for _, shardMap := range hc.targetToEPs {
		for _, ttMap := range shardMap {
			for _, epList := range ttMap {
				var epcs *EndPointsCacheStatus
				for _, ep := range epList {
					key := endPointToMapKey(ep)
					hcc, ok := hc.addrToConns[key]
					if !ok {
						continue
					}
					hcc.mu.RLock()
					if epcs == nil {
						epcs = &EndPointsCacheStatus{
							Cell: hcc.cell,
							Target: hcc.target,
							EndPointsStats: make([]*EndPointStats, 0, 1),
						}
						epcsl = append(epcsl, epcs)
					}
					stats := &EndPointStats{
						Cell: hcc.cell,
						Target: hcc.target,
						EndPoint: ep,
						Stats: hcc.stats,
						TabletExternallyReparentedTimestamp: hcc.tabletExternallyReparentedTimestamp,
					}
					hcc.mu.RUnlock()
					epcs.EndPointsStats = append(epcs.EndPointsStats, stats)
				}
			}
		}
	}
	hc.mu.RUnlock()
	sort.Sort(epcsl)
	return epcsl
}

// endPointToMapKey creates a key to the map from endpoint's host and ports.
func endPointToMapKey(endPoint *pbt.EndPoint) string {
	parts := make([]string, 0, 1)
@@ -31,7 +31,8 @@ func TestHealthCheck(t *testing.T) {
	createFakeConn(ep, input)
	t.Logf(`createFakeConn({Host: "a", PortMap: {"vt": 1}}, c)`)
	l := newListener()
	hc := NewHealthCheck(l, 1*time.Millisecond, 1*time.Millisecond).(*HealthCheckImpl)
	hc := NewHealthCheck(1*time.Millisecond, 1*time.Millisecond).(*HealthCheckImpl)
	hc.SetListener(l)
	hc.AddEndPoint("cell", ep)
	t.Logf(`hc = HealthCheck(); hc.AddEndPoint("cell", {Host: "a", PortMap: {"vt": 1}})`)
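With this commit the listener is injected after construction via `SetListener` rather than passed to `NewHealthCheck`, and `StatsUpdate` is only delivered while a listener is set. Below is a minimal sketch of wiring a consumer under the new API; the `logListener` type and `watchCell` helper are hypothetical, for illustration only (the real consumer in this commit is vtgate, which creates the health check in its main and hands it to `vtgate.Init`).

``` go
package example

import (
	"log"
	"time"

	"github.com/youtube/vitess/go/vt/discovery"

	pbq "github.com/youtube/vitess/go/vt/proto/query"
	pbt "github.com/youtube/vitess/go/vt/proto/topodata"
)

// logListener is a hypothetical HealthCheckStatsListener that just logs
// every update it receives.
type logListener struct{}

func (logListener) StatsUpdate(endPoint *pbt.EndPoint, cell string, target *pbq.Target, tabletExternallyReparentedTimestamp int64, stats *pbq.RealtimeStats) {
	log.Printf("health update from %v in cell %v: target=%v stats=%v", endPoint, cell, target, stats)
}

// watchCell builds a health check for a set of endpoints in one cell.
func watchCell(cell string, endPoints []*pbt.EndPoint) discovery.HealthCheck {
	// New signature: the listener is no longer a constructor argument.
	hc := discovery.NewHealthCheck(30*time.Second, 5*time.Second)
	hc.SetListener(logListener{})
	for _, ep := range endPoints {
		hc.AddEndPoint(cell, ep)
	}
	return hc
}
```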
@@ -15,6 +15,7 @@ import (
	"github.com/youtube/vitess/go/stats"
	"github.com/youtube/vitess/go/vt/dbconnpool"
	"github.com/youtube/vitess/go/vt/mysqlctl/proto"
	"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
	"golang.org/x/net/context"
)

@@ -87,6 +88,8 @@ type MysqlDaemon interface {
// FakeMysqlDaemon implements MysqlDaemon and allows the user to fake
// everything.
type FakeMysqlDaemon struct {
	db *fakesqldb.DB

	// Mycnf will be returned by Cnf()
	Mycnf *Mycnf

@@ -185,8 +188,9 @@ type FakeMysqlDaemon struct {

// NewFakeMysqlDaemon returns a FakeMysqlDaemon where mysqld appears
// to be running
func NewFakeMysqlDaemon() *FakeMysqlDaemon {
func NewFakeMysqlDaemon(db *fakesqldb.DB) *FakeMysqlDaemon {
	return &FakeMysqlDaemon{
		db: db,
		Running: true,
	}
}

@@ -417,5 +421,5 @@ func (fmd *FakeMysqlDaemon) GetAppConnection() (dbconnpool.PoolConnection, error

// GetDbaConnection is part of the MysqlDaemon interface.
func (fmd *FakeMysqlDaemon) GetDbaConnection() (*dbconnpool.DBConnection, error) {
	return dbconnpool.NewDBConnection(&sqldb.ConnParams{}, stats.NewTimings(""))
	return dbconnpool.NewDBConnection(&sqldb.ConnParams{Engine: fmd.db.Name}, stats.NewTimings(""))
}
@ -13,6 +13,7 @@ import (
|
|||
"github.com/youtube/vitess/go/history"
|
||||
"github.com/youtube/vitess/go/stats"
|
||||
"github.com/youtube/vitess/go/vt/mysqlctl"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
"golang.org/x/net/context"
|
||||
|
||||
|
@ -24,6 +25,7 @@ import (
|
|||
// so this has to be in one test.
|
||||
func TestInitTablet(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
tabletAlias := &pb.TabletAlias{
|
||||
Cell: "cell1",
|
||||
|
@ -33,7 +35,7 @@ func TestInitTablet(t *testing.T) {
|
|||
// start with idle, and a tablet record that doesn't exist
|
||||
port := int32(1234)
|
||||
gRPCPort := int32(3456)
|
||||
mysqlDaemon := mysqlctl.NewFakeMysqlDaemon()
|
||||
mysqlDaemon := mysqlctl.NewFakeMysqlDaemon(db)
|
||||
agent := &ActionAgent{
|
||||
TopoServer: ts,
|
||||
TabletAlias: tabletAlias,
|
||||
|
|
|
@ -14,10 +14,10 @@ import (
|
|||
)
|
||||
|
||||
func TestConnPoolGet(t *testing.T) {
|
||||
fakesqldb.Register()
|
||||
db := fakesqldb.Register()
|
||||
testUtils := newTestUtils()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool := testUtils.newConnPool()
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
|
@ -44,10 +44,10 @@ func TestConnPoolPutWhilePoolIsClosed(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestConnPoolSetCapacity(t *testing.T) {
|
||||
fakesqldb.Register()
|
||||
db := fakesqldb.Register()
|
||||
testUtils := newTestUtils()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool := testUtils.newConnPool()
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
|
@ -65,14 +65,14 @@ func TestConnPoolSetCapacity(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestConnPoolStatJSON(t *testing.T) {
|
||||
fakesqldb.Register()
|
||||
db := fakesqldb.Register()
|
||||
testUtils := newTestUtils()
|
||||
connPool := testUtils.newConnPool()
|
||||
if connPool.StatsJSON() != "{}" {
|
||||
t.Fatalf("pool is closed, stats json should be empty: {}")
|
||||
}
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
statsJSON := connPool.StatsJSON()
|
||||
|
@ -106,10 +106,10 @@ func TestConnPoolStateWhilePoolIsClosed(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestConnPoolStateWhilePoolIsOpen(t *testing.T) {
|
||||
fakesqldb.Register()
|
||||
db := fakesqldb.Register()
|
||||
testUtils := newTestUtils()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
idleTimeout := 10 * time.Second
|
||||
connPool := testUtils.newConnPool()
|
||||
connPool.Open(appParams, dbaParams)
|
||||
|
|
|
@ -28,8 +28,8 @@ func TestDBConnExec(t *testing.T) {
|
|||
}
|
||||
db.AddQuery(sql, expectedResult)
|
||||
connPool := testUtils.newConnPool()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(10*time.Second))
|
||||
|
@ -57,8 +57,8 @@ func TestDBConnKill(t *testing.T) {
|
|||
db := fakesqldb.Register()
|
||||
testUtils := newTestUtils()
|
||||
connPool := testUtils.newConnPool()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
queryServiceStats := NewQueryServiceStats("", false)
|
||||
|
@ -102,8 +102,8 @@ func TestDBConnStream(t *testing.T) {
|
|||
}
|
||||
db.AddQuery(sql, expectedResult)
|
||||
connPool := testUtils.newConnPool()
|
||||
appParams := &sqldb.ConnParams{}
|
||||
dbaParams := &sqldb.ConnParams{}
|
||||
appParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := &sqldb.ConnParams{Engine: db.Name}
|
||||
connPool.Open(appParams, dbaParams)
|
||||
defer connPool.Close()
|
||||
ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(10*time.Second))
|
||||
|
|
|
@ -25,238 +25,245 @@ func TargetToProto3(target *Target) *pb.Target {
|
|||
|
||||
// BoundQueryToProto3 converts internal types to proto3 BoundQuery
|
||||
func BoundQueryToProto3(sql string, bindVars map[string]interface{}) *pb.BoundQuery {
|
||||
result := &pb.BoundQuery{
|
||||
Sql: sql,
|
||||
return &pb.BoundQuery{
|
||||
Sql: sql,
|
||||
BindVariables: BindVariablesToProto3(bindVars),
|
||||
}
|
||||
if len(bindVars) > 0 {
|
||||
result.BindVariables = make(map[string]*pb.BindVariable)
|
||||
for k, v := range bindVars {
|
||||
bv := new(pb.BindVariable)
|
||||
switch v := v.(type) {
|
||||
case []interface{}:
|
||||
// This is how the list variables will normally appear.
|
||||
if len(v) == 0 {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// This assumes homogenous types, but that is what we support.
|
||||
val := v[0]
|
||||
switch val.(type) {
|
||||
// string and []byte are TYPE_BYTES_LIST
|
||||
case string:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = []byte(lv.(string))
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
case []byte:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv.([]byte)
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
// BindVariablesToProto3 converts internal type to proto3 BindVariable array
|
||||
func BindVariablesToProto3(bindVars map[string]interface{}) map[string]*pb.BindVariable {
|
||||
if len(bindVars) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// int, int16, int32, int64 are TYPE_INT_LIST
|
||||
case int:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int16:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int16))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int32:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int32))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int64:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv.(int64)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
result := make(map[string]*pb.BindVariable)
|
||||
for k, v := range bindVars {
|
||||
bv := new(pb.BindVariable)
|
||||
switch v := v.(type) {
|
||||
case []interface{}:
|
||||
// This is how the list variables will normally appear.
|
||||
if len(v) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
// uint, uint16, uint32, uint64 are TYPE_UINT_LIST
|
||||
case uint:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint16:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint16))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint32:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint32))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint64:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv.(uint64)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
|
||||
// float32, float64 are TYPE_FLOAT_LIST
|
||||
case float32:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = float64(lv.(float32))
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
case float64:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv.(float64)
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
}
|
||||
// This assumes homogenous types, but that is what we support.
|
||||
val := v[0]
|
||||
switch val.(type) {
|
||||
// string and []byte are TYPE_BYTES_LIST
|
||||
case string:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES
|
||||
bv.ValueBytes = []byte(v)
|
||||
case []string:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = []byte(lv)
|
||||
listArg[i] = []byte(lv.(string))
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
case []byte:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES
|
||||
bv.ValueBytes = v
|
||||
case [][]byte:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
listArg[i] = lv.([]byte)
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
|
||||
// int, int16, int32, int64 are TYPE_INT_LIST
|
||||
case int:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int16:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int16))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int32:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv.(int32))
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case int64:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = v
|
||||
case []int:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int16:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int32:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int64:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
listArg[i] = lv.(int64)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
|
||||
// uint, uint16, uint32, uint64 are TYPE_UINT_LIST
|
||||
case uint:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint16:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint16))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint32:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv.(uint32))
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case uint64:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = v
|
||||
case []uint:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint16:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint32:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint64:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
listArg[i] = lv.(uint64)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
|
||||
// float32, float64 are TYPE_FLOAT_LIST
|
||||
case float32:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT
|
||||
bv.ValueFloat = float64(v)
|
||||
case float64:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT
|
||||
bv.ValueFloat = float64(v)
|
||||
case []float32:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = float64(lv)
|
||||
listArg[i] = float64(lv.(float32))
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
case []float64:
|
||||
case float64:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
listArg[i] = lv.(float64)
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
}
|
||||
result.BindVariables[k] = bv
|
||||
case string:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES
|
||||
bv.ValueBytes = []byte(v)
|
||||
case []string:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = []byte(lv)
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
case []byte:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES
|
||||
bv.ValueBytes = v
|
||||
case [][]byte:
|
||||
bv.Type = pb.BindVariable_TYPE_BYTES_LIST
|
||||
listArg := make([][]byte, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
}
|
||||
bv.ValueBytesList = listArg
|
||||
case int:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
case int16:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
case int32:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = int64(v)
|
||||
case int64:
|
||||
bv.Type = pb.BindVariable_TYPE_INT
|
||||
bv.ValueInt = v
|
||||
case []int:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int16:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int32:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = int64(lv)
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case []int64:
|
||||
bv.Type = pb.BindVariable_TYPE_INT_LIST
|
||||
listArg := make([]int64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
}
|
||||
bv.ValueIntList = listArg
|
||||
case uint:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
case uint16:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
case uint32:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = uint64(v)
|
||||
case uint64:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT
|
||||
bv.ValueUint = v
|
||||
case []uint:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint16:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint32:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = uint64(lv)
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case []uint64:
|
||||
bv.Type = pb.BindVariable_TYPE_UINT_LIST
|
||||
listArg := make([]uint64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
}
|
||||
bv.ValueUintList = listArg
|
||||
case float32:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT
|
||||
bv.ValueFloat = float64(v)
|
||||
case float64:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT
|
||||
bv.ValueFloat = float64(v)
|
||||
case []float32:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = float64(lv)
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
case []float64:
|
||||
bv.Type = pb.BindVariable_TYPE_FLOAT_LIST
|
||||
listArg := make([]float64, len(v))
|
||||
for i, lv := range v {
|
||||
listArg[i] = lv
|
||||
}
|
||||
bv.ValueFloatList = listArg
|
||||
}
|
||||
result[k] = bv
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
|
|
@ -38,7 +38,7 @@ func TestQueryExecutorPlanDDL(t *testing.T) {
|
|||
}
|
||||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_DDL, qre.plan.PlanId)
|
||||
|
@ -60,7 +60,7 @@ func TestQueryExecutorPlanPassDmlStrictMode(t *testing.T) {
|
|||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
// non strict mode
|
||||
tsv := newTestTabletServer(ctx, noFlags)
|
||||
tsv := newTestTabletServer(ctx, noFlags, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_DML, qre.plan.PlanId)
|
||||
got, err := qre.Execute()
|
||||
|
@ -74,7 +74,7 @@ func TestQueryExecutorPlanPassDmlStrictMode(t *testing.T) {
|
|||
tsv.StopService()
|
||||
|
||||
// strict mode
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre = newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
defer tsv.StopService()
|
||||
defer testCommitHelper(t, tsv, qre)
|
||||
|
@ -101,7 +101,7 @@ func TestQueryExecutorPlanPassDmlStrictModeAutoCommit(t *testing.T) {
|
|||
db.AddQuery(query, want)
|
||||
// non strict mode
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, noFlags)
|
||||
tsv := newTestTabletServer(ctx, noFlags, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_DML, qre.plan.PlanId)
|
||||
got, err := qre.Execute()
|
||||
|
@ -115,7 +115,7 @@ func TestQueryExecutorPlanPassDmlStrictModeAutoCommit(t *testing.T) {
|
|||
|
||||
// strict mode
|
||||
// update should fail because strict mode is not enabled
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre = newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_DML, qre.plan.PlanId)
|
||||
|
@ -140,7 +140,7 @@ func TestQueryExecutorPlanInsertPk(t *testing.T) {
|
|||
}
|
||||
query := "insert into test_table values(1)"
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_INSERT_PK, qre.plan.PlanId)
|
||||
|
@ -172,7 +172,7 @@ func TestQueryExecutorPlanInsertSubQueryAutoCommmit(t *testing.T) {
|
|||
|
||||
db.AddQuery(insertQuery, &mproto.QueryResult{})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_INSERT_SUBQUERY, qre.plan.PlanId)
|
||||
|
@ -204,7 +204,7 @@ func TestQueryExecutorPlanInsertSubQuery(t *testing.T) {
|
|||
|
||||
db.AddQuery(insertQuery, &mproto.QueryResult{})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
|
||||
defer tsv.StopService()
|
||||
|
@ -227,7 +227,7 @@ func TestQueryExecutorPlanUpsertPk(t *testing.T) {
|
|||
}
|
||||
query := "insert into test_table values(1) on duplicate key update val=1"
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_UPSERT_PK, qre.plan.PlanId)
|
||||
|
@ -283,7 +283,7 @@ func TestQueryExecutorPlanDmlPk(t *testing.T) {
|
|||
want := &mproto.QueryResult{}
|
||||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
defer tsv.StopService()
|
||||
defer testCommitHelper(t, tsv, qre)
|
||||
|
@ -303,7 +303,7 @@ func TestQueryExecutorPlanDmlAutoCommit(t *testing.T) {
|
|||
want := &mproto.QueryResult{}
|
||||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_DML_PK, qre.plan.PlanId)
|
||||
|
@ -324,7 +324,7 @@ func TestQueryExecutorPlanDmlSubQuery(t *testing.T) {
|
|||
db.AddQuery(query, want)
|
||||
db.AddQuery(expandedQuery, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
defer tsv.StopService()
|
||||
defer testCommitHelper(t, tsv, qre)
|
||||
|
@ -346,7 +346,7 @@ func TestQueryExecutorPlanDmlSubQueryAutoCommit(t *testing.T) {
|
|||
db.AddQuery(query, want)
|
||||
db.AddQuery(expandedQuery, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_DML_SUBQUERY, qre.plan.PlanId)
|
||||
|
@ -369,7 +369,7 @@ func TestQueryExecutorPlanOtherWithinATransaction(t *testing.T) {
|
|||
}
|
||||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
defer tsv.StopService()
|
||||
defer testCommitHelper(t, tsv, qre)
|
||||
|
@ -401,7 +401,7 @@ func TestQueryExecutorPlanPassSelectWithInATransaction(t *testing.T) {
|
|||
Fields: fields,
|
||||
})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, newTransaction(tsv))
|
||||
defer tsv.StopService()
|
||||
defer testCommitHelper(t, tsv, qre)
|
||||
|
@ -427,7 +427,7 @@ func TestQueryExecutorPlanPassSelectWithLockOutsideATransaction(t *testing.T) {
|
|||
Fields: getTestTableFields(),
|
||||
})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
|
@ -456,7 +456,7 @@ func TestQueryExecutorPlanPassSelect(t *testing.T) {
|
|||
Fields: getTestTableFields(),
|
||||
})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
|
@ -490,7 +490,7 @@ func TestQueryExecutorPlanPKIn(t *testing.T) {
|
|||
Fields: getTestTableFields(),
|
||||
})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PK_IN, qre.plan.PlanId)
|
||||
|
@ -542,7 +542,7 @@ func TestQueryExecutorPlanSelectSubQuery(t *testing.T) {
|
|||
Fields: getTestTableFields(),
|
||||
})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SELECT_SUBQUERY, qre.plan.PlanId)
|
||||
|
@ -560,7 +560,7 @@ func TestQueryExecutorPlanSet(t *testing.T) {
|
|||
setQuery := "set unknown_key = 1"
|
||||
db.AddQuery(setQuery, &mproto.QueryResult{})
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
defer tsv.StopService()
|
||||
qre := newTestQueryExecutor(ctx, tsv, setQuery, 0)
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -780,12 +780,12 @@ func TestQueryExecutorPlanSet(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetMaxResultSize(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
want := &mproto.QueryResult{}
|
||||
vtMaxResultSize := int64(128)
|
||||
query := fmt.Sprintf("set vt_max_result_size = %d", vtMaxResultSize)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -802,10 +802,10 @@ func TestQueryExecutorPlanSetMaxResultSize(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetMaxResultSizeFail(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
query := "set vt_max_result_size = 0"
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -824,12 +824,12 @@ func TestQueryExecutorPlanSetMaxResultSizeFail(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetMaxDmlRows(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
want := &mproto.QueryResult{}
|
||||
vtMaxDmlRows := int64(256)
|
||||
query := fmt.Sprintf("set vt_max_dml_rows = %d", vtMaxDmlRows)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -846,10 +846,10 @@ func TestQueryExecutorPlanSetMaxDmlRows(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetMaxDmlRowsFail(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
query := "set vt_max_dml_rows = 0"
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -867,12 +867,12 @@ func TestQueryExecutorPlanSetMaxDmlRowsFail(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetStreamBufferSize(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
want := &mproto.QueryResult{}
|
||||
vtStreamBufferSize := int64(2048)
|
||||
query := fmt.Sprintf("set vt_stream_buffer_size = %d", vtStreamBufferSize)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -889,10 +889,10 @@ func TestQueryExecutorPlanSetStreamBufferSize(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestQueryExecutorPlanSetStreamBufferSizeFail(t *testing.T) {
|
||||
setUpQueryExecutorTest()
|
||||
db := setUpQueryExecutorTest()
|
||||
query := "set vt_stream_buffer_size = 128"
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SET, qre.plan.PlanId)
|
||||
|
@ -919,7 +919,7 @@ func TestQueryExecutorPlanOther(t *testing.T) {
|
|||
}
|
||||
db.AddQuery(query, want)
|
||||
ctx := context.Background()
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_OTHER, qre.plan.PlanId)
|
||||
|
@ -964,7 +964,7 @@ func TestQueryExecutorTableAcl(t *testing.T) {
|
|||
t.Fatalf("unable to load tableacl config, error: %v", err)
|
||||
}
|
||||
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
|
@ -1010,7 +1010,7 @@ func TestQueryExecutorTableAclNoPermission(t *testing.T) {
|
|||
t.Fatalf("unable to load tableacl config, error: %v", err)
|
||||
}
|
||||
// without enabling Config.StrictTableAcl
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
got, err := qre.Execute()
|
||||
|
@ -1023,7 +1023,7 @@ func TestQueryExecutorTableAclNoPermission(t *testing.T) {
|
|||
tsv.StopService()
|
||||
|
||||
// enable Config.StrictTableAcl
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl)
|
||||
tsv = newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl, db)
|
||||
qre = newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
|
@ -1076,7 +1076,7 @@ func TestQueryExecutorTableAclExemptACL(t *testing.T) {
|
|||
}
|
||||
|
||||
// enable Config.StrictTableAcl
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_PASS_SELECT, qre.plan.PlanId)
|
||||
|
@ -1152,7 +1152,7 @@ func TestQueryExecutorTableAclDryRun(t *testing.T) {
|
|||
username,
|
||||
}, ".")
|
||||
// enable Config.StrictTableAcl
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableSchemaOverrides|enableStrict|enableStrictTableAcl, db)
|
||||
tsv.qe.enableTableAclDryRun = true
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
|
@ -1210,7 +1210,7 @@ func TestQueryExecutorBlacklistQRFail(t *testing.T) {
|
|||
username: bannedUser,
|
||||
}
|
||||
ctx := callinfo.NewContext(context.Background(), callInfo)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SELECT_SUBQUERY, qre.plan.PlanId)
|
||||
|
@ -1269,7 +1269,7 @@ func TestQueryExecutorBlacklistQRRetry(t *testing.T) {
|
|||
username: bannedUser,
|
||||
}
|
||||
ctx := callinfo.NewContext(context.Background(), callInfo)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict)
|
||||
tsv := newTestTabletServer(ctx, enableRowCache|enableStrict, db)
|
||||
qre := newTestQueryExecutor(ctx, tsv, query, 0)
|
||||
defer tsv.StopService()
|
||||
checkPlanID(t, planbuilder.PLAN_SELECT_SUBQUERY, qre.plan.PlanId)
|
||||
|
@ -1297,7 +1297,7 @@ const (
|
|||
)
|
||||
|
||||
// newTestQueryExecutor uses a package level variable testTabletServer defined in tabletserver_test.go
|
||||
func newTestTabletServer(ctx context.Context, flags executorFlags) *TabletServer {
|
||||
func newTestTabletServer(ctx context.Context, flags executorFlags, db *fakesqldb.DB) *TabletServer {
|
||||
randID := rand.Int63()
|
||||
config := DefaultQsConfig
|
||||
config.StatsPrefix = fmt.Sprintf("Stats-%d-", randID)
|
||||
|
@ -1326,7 +1326,7 @@ func newTestTabletServer(ctx context.Context, flags executorFlags) *TabletServer
|
|||
}
|
||||
tsv := NewTabletServer(config)
|
||||
testUtils := newTestUtils()
|
||||
dbconfigs := testUtils.newDBConfigs()
|
||||
dbconfigs := testUtils.newDBConfigs(db)
|
||||
if flags&enableRowCache > 0 {
|
||||
dbconfigs.App.EnableRowcache = true
|
||||
} else {
|
||||
|
|
|
@ -31,8 +31,8 @@ func TestSchemaInfoStrictMode(t *testing.T) {
|
|||
}
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
t.Log(schemaInfo)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -55,8 +55,8 @@ func TestSchemaInfoOpenFailedDueToMissMySQLTime(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -78,8 +78,8 @@ func TestSchemaInfoOpenFailedDueToIncorrectMysqlRowNum(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -101,8 +101,8 @@ func TestSchemaInfoOpenFailedDueToInvalidTimeFormat(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -124,8 +124,8 @@ func TestSchemaInfoOpenFailedDueToExecErr(t *testing.T) {
|
|||
RowsAffected: math.MaxUint64,
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -153,8 +153,8 @@ func TestSchemaInfoOpenFailedDueToTableInfoErr(t *testing.T) {
|
|||
RowsAffected: math.MaxUint64,
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -172,8 +172,8 @@ func TestSchemaInfoOpenWithSchemaOverride(t *testing.T) {
|
|||
db.AddQuery(query, result)
|
||||
}
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, 10*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaOverrides := getSchemaInfoTestSchemaOverride()
|
||||
|
@ -201,8 +201,8 @@ func TestSchemaInfoReload(t *testing.T) {
|
|||
}
|
||||
idleTimeout := 10 * time.Second
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, idleTimeout, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
// test cache type RW
|
||||
|
@ -284,8 +284,8 @@ func TestSchemaInfoCreateOrUpdateTableFailedDuetoExecErr(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
defer handleAndVerifyTabletError(
|
||||
|
@ -313,8 +313,8 @@ func TestSchemaInfoCreateOrUpdateTable(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaInfo.Open(&appParams, &dbaParams, getSchemaInfoTestSchemaOverride(), false)
|
||||
|
@ -337,8 +337,8 @@ func TestSchemaInfoDropTable(t *testing.T) {
|
|||
},
|
||||
})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaInfo.Open(&appParams, &dbaParams, getSchemaInfoTestSchemaOverride(), false)
|
||||
|
@ -361,8 +361,8 @@ func TestSchemaInfoGetPlanPanicDuetoEmptyQuery(t *testing.T) {
|
|||
db.AddQuery(query, result)
|
||||
}
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, 10*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaOverrides := getSchemaInfoTestSchemaOverride()
|
||||
|
@ -387,8 +387,8 @@ func TestSchemaInfoQueryCacheFailDueToInvalidCacheSize(t *testing.T) {
|
|||
db.AddQuery(query, result)
|
||||
}
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, 10*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaOverrides := getSchemaInfoTestSchemaOverride()
|
||||
|
@ -416,8 +416,8 @@ func TestSchemaInfoQueryCache(t *testing.T) {
|
|||
db.AddQuery("select * from test_table_02 where 1 != 1", &mproto.QueryResult{})
|
||||
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, 10*time.Second, true)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaOverrides := getSchemaInfoTestSchemaOverride()
|
||||
|
@ -449,8 +449,8 @@ func TestSchemaInfoExportVars(t *testing.T) {
|
|||
db.AddQuery(query, result)
|
||||
}
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, true)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaInfo.Open(&appParams, &dbaParams, []SchemaOverride{}, true)
|
||||
|
@ -468,8 +468,8 @@ func TestUpdatedMysqlStats(t *testing.T) {
|
|||
}
|
||||
idleTimeout := 10 * time.Second
|
||||
schemaInfo := newTestSchemaInfo(10, 10*time.Second, idleTimeout, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaInfo.Open(&appParams, &dbaParams, nil, true)
|
||||
|
@ -537,8 +537,8 @@ func TestSchemaInfoStatsURL(t *testing.T) {
|
|||
query := "select * from test_table_01"
|
||||
db.AddQuery("select * from test_table_01 where 1 != 1", &mproto.QueryResult{})
|
||||
schemaInfo := newTestSchemaInfo(10, 1*time.Second, 1*time.Second, false)
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
schemaInfo.cachePool.Open()
|
||||
defer schemaInfo.cachePool.Close()
|
||||
schemaInfo.Open(&appParams, &dbaParams, []SchemaOverride{}, true)
|
||||
|
|
|
@ -30,7 +30,7 @@ func TestTableInfoNew(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a test table info")
|
||||
}
|
||||
|
@ -53,7 +53,7 @@ func TestTableInfoFailBecauseUnableToRetrieveTableIndex(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
_, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
_, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err == nil {
|
||||
t.Fatalf("table info creation should fail because it is unable to get test_table index")
|
||||
}
|
||||
|
@ -68,7 +68,7 @@ func TestTableInfoWithoutRowCacheViaComment(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "vtocc_nocache")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "vtocc_nocache", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a test table info")
|
||||
}
|
||||
|
@ -89,7 +89,7 @@ func TestTableInfoWithoutRowCacheViaTableType(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "VIEW", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "VIEW", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a test table info")
|
||||
}
|
||||
|
@ -119,7 +119,7 @@ func TestTableInfoWithoutRowCacheViaNoPKColumn(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a test table info")
|
||||
}
|
||||
|
@ -162,7 +162,7 @@ func TestTableInfoWithoutRowCacheViaUnknownPKColumnType(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a test table info")
|
||||
}
|
||||
|
@ -180,7 +180,7 @@ func TestTableInfoReplacePKColumn(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a table info")
|
||||
}
|
||||
|
@ -219,7 +219,7 @@ func TestTableInfoSetPKColumn(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a table info")
|
||||
}
|
||||
|
@ -258,7 +258,7 @@ func TestTableInfoInvalidCardinalityInIndex(t *testing.T) {
|
|||
cachePool := newTestTableInfoCachePool()
|
||||
cachePool.Open()
|
||||
defer cachePool.Close()
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table")
|
||||
tableInfo, err := newTestTableInfo(cachePool, "USER_TABLE", "test table", db)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create a table info")
|
||||
}
|
||||
|
@ -267,10 +267,10 @@ func TestTableInfoInvalidCardinalityInIndex(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func newTestTableInfo(cachePool *CachePool, tableType string, comment string) (*TableInfo, error) {
|
||||
func newTestTableInfo(cachePool *CachePool, tableType string, comment string, db *fakesqldb.DB) (*TableInfo, error) {
|
||||
ctx := context.Background()
|
||||
appParams := sqldb.ConnParams{}
|
||||
dbaParams := sqldb.ConnParams{}
|
||||
appParams := sqldb.ConnParams{Engine: db.Name}
|
||||
dbaParams := sqldb.ConnParams{Engine: db.Name}
|
||||
queryServiceStats := NewQueryServiceStats("", false)
|
||||
connPoolIdleTimeout := 10 * time.Second
|
||||
connPool := NewConnPool("", 2, connPoolIdleTimeout, false, queryServiceStats)
|
||||
|
|
|
@ -57,7 +57,7 @@ func TestTabletServerAllowQueriesFailBadConn(t *testing.T) {
|
|||
config := testUtils.newQueryServiceConfig()
|
||||
tsv := NewTabletServer(config)
|
||||
checkTabletServerState(t, tsv, StateNotConnected)
|
||||
dbconfigs := testUtils.newDBConfigs()
|
||||
dbconfigs := testUtils.newDBConfigs(db)
|
||||
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
|
||||
if err == nil {
|
||||
t.Fatalf("TabletServer.StartService should fail")
|
||||
|
@ -66,14 +66,14 @@ func TestTabletServerAllowQueriesFailBadConn(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestTabletServerAllowQueriesFailStrictModeConflictWithRowCache(t *testing.T) {
|
||||
setUpTabletServerTest()
|
||||
db := setUpTabletServerTest()
|
||||
testUtils := newTestUtils()
|
||||
config := testUtils.newQueryServiceConfig()
|
||||
// disable strict mode
|
||||
config.StrictMode = false
|
||||
tsv := NewTabletServer(config)
|
||||
checkTabletServerState(t, tsv, StateNotConnected)
|
||||
dbconfigs := testUtils.newDBConfigs()
|
||||
dbconfigs := testUtils.newDBConfigs(db)
|
||||
// enable rowcache
|
||||
dbconfigs.App.EnableRowcache = true
|
||||
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
|
||||
|
@ -84,12 +84,12 @@ func TestTabletServerAllowQueriesFailStrictModeConflictWithRowCache(t *testing.T
|
|||
}
|
||||
|
||||
func TestTabletServerAllowQueries(t *testing.T) {
|
||||
setUpTabletServerTest()
|
||||
db := setUpTabletServerTest()
|
||||
testUtils := newTestUtils()
|
||||
config := testUtils.newQueryServiceConfig()
|
||||
tsv := NewTabletServer(config)
|
||||
checkTabletServerState(t, tsv, StateNotConnected)
|
||||
dbconfigs := testUtils.newDBConfigs()
|
||||
dbconfigs := testUtils.newDBConfigs(db)
|
||||
tsv.setState(StateServing)
|
||||
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
|
||||
tsv.StopService()
|
||||
|
@ -106,7 +106,7 @@ func TestTabletServerAllowQueries(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestTabletServerInitDBConfig(t *testing.T) {
|
||||
setUpTabletServerTest()
|
||||
_ = setUpTabletServerTest()
|
||||
testUtils := newTestUtils()
|
||||
config := testUtils.newQueryServiceConfig()
|
||||
tsv := NewTabletServer(config)
|
||||
|
@ -124,7 +124,7 @@ func TestTabletServerInitDBConfig(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestDecideAction(t *testing.T) {
|
||||
setUpTabletServerTest()
|
||||
_ = setUpTabletServerTest()
|
||||
testUtils := newTestUtils()
|
||||
config := testUtils.newQueryServiceConfig()
|
||||
tsv := NewTabletServer(config)
|
||||
|
@ -227,11 +227,11 @@ func TestDecideAction(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestSetServingType(t *testing.T) {
|
||||
setUpTabletServerTest()
|
||||
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)

err := tsv.InitDBConfig(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {

@ -279,11 +279,11 @@ func TestSetServingType(t *testing.T) {
}

func TestTabletServerCheckMysql(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
target := &pb.Target{}
err := tsv.StartService(target, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
defer tsv.StopService()

@ -308,7 +308,7 @@ func TestTabletServerCheckMysqlFailInvalidConn(t *testing.T) {
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
defer tsv.StopService()
if err != nil {

@ -322,11 +322,11 @@ func TestTabletServerCheckMysqlFailInvalidConn(t *testing.T) {
}

func TestTabletServerCheckMysqlFailUninitializedQueryEngine(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
// this causes QueryEngine not being initialized properly
tsv.setState(StateServing)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))

@ -342,7 +342,7 @@ func TestTabletServerCheckMysqlFailUninitializedQueryEngine(t *testing.T) {
}

func TestTabletServerCheckMysqlInUnintialized(t *testing.T) {
setUpTabletServerTest()
_ = setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
config.EnablePublishStats = true

@ -367,7 +367,7 @@ func TestTabletServerCheckMysqlInUnintialized(t *testing.T) {
}

func TestTabletServerGetSessionId(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)

@ -376,7 +376,7 @@ func TestTabletServerGetSessionId(t *testing.T) {
}
keyspace := "test_keyspace"
shard := "0"
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -412,11 +412,11 @@ func TestTabletServerGetSessionId(t *testing.T) {
}

func TestTabletServerCommandFailUnMatchedSessionId(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -515,7 +515,7 @@ func TestTabletServerCommitTransaciton(t *testing.T) {
db.AddQuery(executeSQL, executeSQLResult)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -560,7 +560,7 @@ func TestTabletServerRollback(t *testing.T) {
db.AddQuery(executeSQL, executeSQLResult)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -606,7 +606,7 @@ func TestTabletServerStreamExecute(t *testing.T) {

config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -653,7 +653,7 @@ func TestTabletServerExecuteBatch(t *testing.T) {
db.AddQuery(expanedSQL, sqlResult)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -685,11 +685,11 @@ func TestTabletServerExecuteBatch(t *testing.T) {
}

func TestTabletServerExecuteBatchFailEmptyQueryList(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -709,11 +709,11 @@ func TestTabletServerExecuteBatchFailEmptyQueryList(t *testing.T) {
}

func TestTabletServerExecuteBatchFailAsTransaction(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -746,7 +746,7 @@ func TestTabletServerExecuteBatchBeginFail(t *testing.T) {
db.AddRejectedQuery("begin", errRejected)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -780,7 +780,7 @@ func TestTabletServerExecuteBatchCommitFail(t *testing.T) {
db.AddRejectedQuery("commit", errRejected)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -828,7 +828,7 @@ func TestTabletServerExecuteBatchSqlExecFailInTransaction(t *testing.T) {

config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -883,7 +883,7 @@ func TestTabletServerExecuteBatchSqlSucceedInTransaction(t *testing.T) {
config := testUtils.newQueryServiceConfig()
config.EnableAutoCommit = true
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -911,11 +911,11 @@ func TestTabletServerExecuteBatchSqlSucceedInTransaction(t *testing.T) {
}

func TestTabletServerExecuteBatchCallCommitWithoutABegin(t *testing.T) {
setUpTabletServerTest()
db := setUpTabletServerTest()
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -953,7 +953,7 @@ func TestExecuteBatchNestedTransaction(t *testing.T) {
db.AddQuery(expanedSQL, sqlResult)
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -1029,7 +1029,7 @@ func TestTabletServerSplitQuery(t *testing.T) {
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -1090,7 +1090,7 @@ func TestTabletServerSplitQueryInvalidQuery(t *testing.T) {
testUtils := newTestUtils()
config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -1155,7 +1155,7 @@ func TestTabletServerSplitQueryInvalidMinMax(t *testing.T) {

config := testUtils.newQueryServiceConfig()
tsv := NewTabletServer(config)
dbconfigs := testUtils.newDBConfigs()
dbconfigs := testUtils.newDBConfigs(db)
err := tsv.StartService(nil, &dbconfigs, []SchemaOverride{}, testUtils.newMysqld(&dbconfigs))
if err != nil {
t.Fatalf("StartService failed: %v", err)

@ -1299,7 +1299,8 @@ func TestTerseErrors3(t *testing.T) {

func TestNeedInvalidator(t *testing.T) {
testUtils := newTestUtils()
dbconfigs := testUtils.newDBConfigs()
db := setUpTabletServerTest()
dbconfigs := testUtils.newDBConfigs(db)

// EnableRowCache is false
if needInvalidator(nil, &dbconfigs) {

@ -16,6 +16,7 @@ import (
"github.com/youtube/vitess/go/sqldb"
"github.com/youtube/vitess/go/vt/dbconfigs"
"github.com/youtube/vitess/go/vt/mysqlctl"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
)

type fakeCallInfo struct {

@ -101,9 +102,9 @@ func (util *testUtils) newMysqld(dbconfigs *dbconfigs.DBConfigs) mysqlctl.MysqlD
)
}

func (util *testUtils) newDBConfigs() dbconfigs.DBConfigs {
func (util *testUtils) newDBConfigs(db *fakesqldb.DB) dbconfigs.DBConfigs {
appDBConfig := dbconfigs.DBConfig{
ConnParams: sqldb.ConnParams{},
ConnParams: sqldb.ConnParams{Engine: db.Name},
Keyspace: "test_keyspace",
Shard: "0",
EnableRowcache: false,

@ -26,8 +26,8 @@ func TestTxPoolExecuteCommit(t *testing.T) {
txPool := newTxPool(true)
txPool.SetTimeout(1 * time.Second)
txPool.SetPoolTimeout(1 * time.Second)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -58,8 +58,8 @@ func TestTxPoolExecuteRollback(t *testing.T) {
db.AddQuery("rollback", &proto.QueryResult{})

txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -83,8 +83,8 @@ func TestTxPoolTransactionKiller(t *testing.T) {
txPool := newTxPool(false)
// make sure transaction killer will run frequent enough
txPool.SetTimeout(time.Duration(10))
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -102,11 +102,11 @@ func TestTxPoolTransactionKiller(t *testing.T) {
}

func TestTxPoolBeginAfterConnPoolClosed(t *testing.T) {
fakesqldb.Register()
db := fakesqldb.Register()
txPool := newTxPool(false)
txPool.SetTimeout(time.Duration(10))
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
txPool.Close()
ctx := context.Background()

@ -128,8 +128,8 @@ func TestTxPoolBeginWithPoolTimeout(t *testing.T) {
db.AddQuery("begin", &proto.QueryResult{})

txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
// set pool capacity to 1
txPool.pool.SetCapacity(1)

@ -144,10 +144,10 @@ func TestTxPoolBeginWithPoolTimeout(t *testing.T) {
}

func TestTxPoolBeginWithShortDeadline(t *testing.T) {
fakesqldb.Register()
db := fakesqldb.Register()
txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
// set pool capacity to 1
txPool.pool.SetCapacity(1)

@ -163,8 +163,8 @@ func TestTxPoolBeginWithPoolConnectionError(t *testing.T) {
db := fakesqldb.Register()
db.EnableConnFail()
txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
defer handleAndVerifyTabletError(t, "expect to get an error", ErrFatal)

@ -176,8 +176,8 @@ func TestTxPoolBeginWithExecError(t *testing.T) {
db := fakesqldb.Register()
db.AddRejectedQuery("begin", errRejected)
txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
defer handleAndVerifyTabletError(t, "expect to get an error", ErrFail)

@ -192,8 +192,8 @@ func TestTxPoolSafeCommitFail(t *testing.T) {
db.AddQuery(sql, &proto.QueryResult{})
db.AddRejectedQuery("commit", errRejected)
txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -218,8 +218,8 @@ func TestTxPoolRollbackFail(t *testing.T) {
db.AddRejectedQuery("rollback", errRejected)

txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -236,10 +236,10 @@ func TestTxPoolRollbackFail(t *testing.T) {
}

func TestTxPoolGetConnFail(t *testing.T) {
fakesqldb.Register()
db := fakesqldb.Register()
txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
defer handleAndVerifyTabletError(t, "txpool.Get should fail", ErrNotInTx)

@ -251,8 +251,8 @@ func TestTxPoolExecFailDueToConnFail(t *testing.T) {
db.AddQuery("begin", &proto.QueryResult{})

txPool := newTxPool(false)
appParams := sqldb.ConnParams{}
dbaParams := sqldb.ConnParams{}
appParams := sqldb.ConnParams{Engine: db.Name}
dbaParams := sqldb.ConnParams{Engine: db.Name}
txPool.Open(&appParams, &dbaParams)
defer txPool.Close()
ctx := context.Background()

@ -126,12 +126,12 @@ func (rsnl rangeShardNodesList) Swap(i, j int) {
// KeyspaceNodes represents all tablet nodes in a keyspace.
type KeyspaceNodes struct {
ShardNodes []*ShardNodes // sorted by shard name
ServedFrom map[topo.TabletType]string
ServedFrom map[string]string
}

func newKeyspaceNodes() *KeyspaceNodes {
return &KeyspaceNodes{
ServedFrom: make(map[topo.TabletType]string),
ServedFrom: make(map[string]string),
}
}

@ -266,7 +266,7 @@ func DbServingGraph(ctx context.Context, ts topo.Server, cell string) (servingGr
return
}
wg := sync.WaitGroup{}
servingTypes := []topo.TabletType{topo.TYPE_MASTER, topo.TYPE_REPLICA, topo.TYPE_RDONLY}
servingTypes := []pb.TabletType{pb.TabletType_MASTER, pb.TabletType_REPLICA, pb.TabletType_RDONLY}
for _, keyspace := range keyspaces {
kn := newKeyspaceNodes()
servingGraph.Keyspaces[keyspace] = kn

@ -279,16 +279,13 @@ func DbServingGraph(ctx context.Context, ts topo.Server, cell string) (servingGr
rec.RecordError(fmt.Errorf("GetSrvKeyspace(%v, %v) failed: %v", cell, keyspace, err))
return
}
if len(ks.ServedFrom) > 0 {
kn.ServedFrom = make(map[topo.TabletType]string)
for _, sf := range ks.ServedFrom {
kn.ServedFrom[topo.ProtoToTabletType(sf.TabletType)] = sf.Keyspace
}
for _, sf := range ks.ServedFrom {
kn.ServedFrom[strings.ToLower(sf.TabletType.String())] = sf.Keyspace
}

displayedShards := make(map[string]bool)
for _, partitionTabletType := range servingTypes {
kp := topoproto.SrvKeyspaceGetPartition(ks, topo.TabletTypeToProto(partitionTabletType))
kp := topoproto.SrvKeyspaceGetPartition(ks, partitionTabletType)
if kp == nil {
continue
}

@ -370,7 +370,7 @@ func (conn *vtgateConn) Rollback2(ctx context.Context, session interface{}) erro
return nil
}

func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error) {
func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pb.SplitQueryResponse_Part, error) {
request := &pb.SplitQueryRequest{
CallerId: callerid.EffectiveCallerIDFromContext(ctx),
Keyspace: keyspace,

@ -382,7 +382,7 @@ func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query s
if err := conn.rpcConn.Call(ctx, "VTGateP3.SplitQuery", request, response); err != nil {
return nil, vterrors.FromJSONError(err)
}
return proto.ProtoToSplitQueryParts(response), nil
return response.Splits, nil
}

func (conn *vtgateConn) GetSrvKeyspace(ctx context.Context, keyspace string) (*topopb.SrvKeyspace, error) {

@ -49,7 +49,7 @@ func TestBsonP3VTGateConn(t *testing.T) {
vtgateconntest.RegisterTestDialProtocol(client)

// run the test suite
// vtgateconntest.TestSuite(t, client, service)
vtgateconntest.TestSuite(t, client, service)
vtgateconntest.TestErrorSuite(t, service)

// and clean up

@ -331,18 +331,16 @@ func (vtg *VTGateP3) SplitQuery(ctx context.Context, request *pb.SplitQueryReque
ctx = callerid.NewContext(ctx,
request.CallerId,
callerid.NewImmediateCallerID("bsonp3 client"))
reply := &proto.SplitQueryResult{}
vtgErr := vtg.server.SplitQuery(ctx,
splits, vtgErr := vtg.server.SplitQuery(ctx,
request.Keyspace,
string(request.Query.Sql),
tproto.Proto3ToBindVariables(request.Query.BindVariables),
request.SplitColumn,
int(request.SplitCount),
reply)
int(request.SplitCount))
if vtgErr != nil {
return vterrors.ToJSONError(vtgErr)
}
*response = *proto.SplitQueryPartsToProto(reply.Splits)
response.Splits = splits
return nil
}

@ -0,0 +1,114 @@
// Copyright 2015, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package vtgate

import (
"errors"
"flag"
"strings"
"time"

"golang.org/x/net/context"

mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/stats"
"github.com/youtube/vitess/go/vt/discovery"
pbq "github.com/youtube/vitess/go/vt/proto/query"
pbt "github.com/youtube/vitess/go/vt/proto/topodata"
tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
)

var (
cellsToWatch = flag.String("cells_to_watch", "", "comma-separated list of cells for watching endpoints")
refreshInterval = flag.Duration("endpoint_refresh_interval", 1*time.Minute, "endpoint refresh interval")
topoReadConcurrency = flag.Int("topo_read_concurrency", 32, "concurrent topo reads")
)

var errNotImplemented = errors.New("Not implemented")

const (
gatewayImplementationDiscovery = "discoverygateway"
)

func init() {
RegisterGatewayCreator(gatewayImplementationDiscovery, createDiscoveryGateway)
}

func createDiscoveryGateway(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, connTimings *stats.MultiTimings) Gateway {
return &discoveryGateway{
hc: hc,
topoServer: topoServer,
localCell: cell,
tabletsWatchers: make([]*discovery.CellTabletsWatcher, 0, 1),
}
}

type discoveryGateway struct {
hc discovery.HealthCheck
topoServer topo.Server
localCell string

tabletsWatchers []*discovery.CellTabletsWatcher
}

// InitializeConnections creates connections to VTTablets.
func (dg *discoveryGateway) InitializeConnections(ctx context.Context) error {
dg.hc.SetListener(dg)
for _, cell := range strings.Split(*cellsToWatch, ",") {
ctw := discovery.NewCellTabletsWatcher(dg.topoServer, dg.hc, cell, *refreshInterval, *topoReadConcurrency)
dg.tabletsWatchers = append(dg.tabletsWatchers, ctw)
}
return nil
}

// Execute executes the non-streaming query for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) Execute(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, query string, bindVars map[string]interface{}, transactionID int64) (*mproto.QueryResult, error) {
return nil, errNotImplemented
}

// ExecuteBatch executes a group of queries for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) ExecuteBatch(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, queries []tproto.BoundQuery, asTransaction bool, transactionID int64) (*tproto.QueryResultList, error) {
return nil, errNotImplemented
}

// StreamExecute executes a streaming query for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) StreamExecute(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, query string, bindVars map[string]interface{}, transactionID int64) (<-chan *mproto.QueryResult, tabletconn.ErrFunc) {
return nil, func() error { return errNotImplemented }
}

// Begin starts a transaction for the specified keyspace, shard, and tablet type.
// It returns the transaction ID.
func (dg *discoveryGateway) Begin(ctx context.Context, keyspace string, shard string, tabletType pbt.TabletType) (int64, error) {
return 0, errNotImplemented
}

// Commit commits the current transaction for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) Commit(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, transactionID int64) error {
return errNotImplemented
}

// Rollback rolls back the current transaction for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) Rollback(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, transactionID int64) error {
return errNotImplemented
}

// SplitQuery splits a query into sub-queries for the specified keyspace, shard, and tablet type.
func (dg *discoveryGateway) SplitQuery(ctx context.Context, keyspace, shard string, tabletType pbt.TabletType, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]tproto.QuerySplit, error) {
return nil, errNotImplemented
}

// Close shuts down underlying connections.
func (dg *discoveryGateway) Close() error {
for _, ctw := range dg.tabletsWatchers {
ctw.Stop()
}
return nil
}

// StatsUpdate receives updates about target and realtime stats changes.
func (dg *discoveryGateway) StatsUpdate(endPoint *pbt.EndPoint, cell string, target *pbq.Target, tabletExternallyReparentedTimestamp int64, stats *pbq.RealtimeStats) {
}

@ -65,7 +65,7 @@ type querySplitQuery struct {

type splitQueryResponse struct {
splitQuery *querySplitQuery
reply []proto.SplitQueryPart
reply []*pbg.SplitQueryResponse_Part
err error
}

@ -141,8 +141,8 @@ func (conn *FakeVTGateConn) AddSplitQuery(
bindVariables map[string]interface{},
splitColumn string,
splitCount int,
expectedResult []proto.SplitQueryPart) {
reply := make([]proto.SplitQueryPart, splitCount)
expectedResult []*pbg.SplitQueryResponse_Part) {
reply := make([]*pbg.SplitQueryResponse_Part, splitCount)
copy(reply, expectedResult)
key := getSplitQueryKey(keyspace, sql, splitColumn, splitCount)
conn.splitQueryMap[key] = &splitQueryResponse{

@ -352,14 +352,14 @@ func (conn *FakeVTGateConn) Rollback2(ctx context.Context, session interface{})
}

// SplitQuery please see vtgateconn.Impl.SplitQuery
func (conn *FakeVTGateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error) {
func (conn *FakeVTGateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
response, ok := conn.splitQueryMap[getSplitQueryKey(keyspace, query, splitColumn, splitCount)]
if !ok {
return nil, fmt.Errorf(
"no match for keyspace: %s, query: %v, split column: %v, split count: %d",
keyspace, query, splitColumn, splitCount)
}
reply := make([]proto.SplitQueryPart, splitCount, splitCount)
reply := make([]*pbg.SplitQueryResponse_Part, splitCount, splitCount)
copy(reply, response.reply)
return reply, nil
}

@ -13,9 +13,11 @@ import (

mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/stats"
"github.com/youtube/vitess/go/vt/discovery"
pb "github.com/youtube/vitess/go/vt/proto/topodata"
tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
)

var (

@ -56,7 +58,7 @@ type Gateway interface {
}

// GatewayCreator is the func which can create the actual gateway object.
type GatewayCreator func(serv SrvTopoServer, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, connTimings *stats.MultiTimings) Gateway
type GatewayCreator func(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, connTimings *stats.MultiTimings) Gateway

var gatewayCreators = make(map[string]GatewayCreator)

@ -76,3 +78,13 @@ func GetGatewayCreator() GatewayCreator {
}
return gc
}

// GetGatewayCreatorByName returns the GatewayCreator specified by the given name.
func GetGatewayCreatorByName(name string) GatewayCreator {
gc, ok := gatewayCreators[name]
if !ok {
log.Errorf("No gateway registered as %s", name)
return nil
}
return gc
}

@ -431,7 +431,7 @@ func (conn *vtgateConn) Rollback2(ctx context.Context, session interface{}) erro
return vterrors.FromRPCError(reply.Err)
}

func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error) {
func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
request := &proto.SplitQueryRequest{
CallerID: getEffectiveCallerID(ctx),
Keyspace: keyspace,

@ -472,13 +472,13 @@ func (vtg *VTGate) SplitQuery(ctx context.Context, request *proto.SplitQueryRequ
ctx = callerid.NewContext(ctx,
callerid.GoRPCEffectiveCallerID(request.CallerID),
callerid.NewImmediateCallerID("gorpc client"))
vtgErr := vtg.server.SplitQuery(ctx,
splits, vtgErr := vtg.server.SplitQuery(ctx,
request.Keyspace,
request.Query.Sql,
request.Query.BindVariables,
request.SplitColumn,
request.SplitCount,
reply)
request.SplitCount)
reply.Splits = splits
vtgate.AddVtGateErrorToSplitQueryResult(vtgErr, reply)
if *vtgate.RPCErrorOnlyInReply {
return nil

@ -397,7 +397,7 @@ func (conn *vtgateConn) Rollback2(ctx context.Context, session interface{}) erro
return conn.Rollback(ctx, session)
}

func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error) {
func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pb.SplitQueryResponse_Part, error) {
request := &pb.SplitQueryRequest{
CallerId: callerid.EffectiveCallerIDFromContext(ctx),
Keyspace: keyspace,

@ -409,7 +409,7 @@ func (conn *vtgateConn) SplitQuery(ctx context.Context, keyspace string, query s
if err != nil {
return nil, vterrors.FromGRPCError(err)
}
return proto.ProtoToSplitQueryParts(response), nil
return response.Splits, nil
}

func (conn *vtgateConn) GetSrvKeyspace(ctx context.Context, keyspace string) (*pbt.SrvKeyspace, error) {

@ -337,18 +337,18 @@ func (vtg *VTGate) SplitQuery(ctx context.Context, request *pb.SplitQueryRequest
ctx = callerid.NewContext(callinfo.GRPCCallInfo(ctx),
request.CallerId,
callerid.NewImmediateCallerID("grpc client"))
reply := new(proto.SplitQueryResult)
vtgErr := vtg.server.SplitQuery(ctx,
splits, vtgErr := vtg.server.SplitQuery(ctx,
request.Keyspace,
string(request.Query.Sql),
tproto.Proto3ToBindVariables(request.Query.BindVariables),
request.SplitColumn,
int(request.SplitCount),
reply)
int(request.SplitCount))
if vtgErr != nil {
return nil, vterrors.ToGRPCError(vtgErr)
}
return proto.SplitQueryPartsToProto(reply.Splits), nil
return &pb.SplitQueryResponse{
Splits: splits,
}, nil
}

// GetSrvKeyspace is the RPC version of vtgateservice.VTGateService method

@ -4,9 +4,21 @@

package proto

import (
mproto "github.com/youtube/vitess/go/mysql/proto"

pbg "github.com/youtube/vitess/go/vt/proto/vtgate"
)

// This file contains the data structures used by bson rpc for vtgate service.

// GetSrvKeyspaceRequest is the payload to GetSrvRequest
type GetSrvKeyspaceRequest struct {
Keyspace string
}

// SplitQueryResult is the response from SplitQueryRequest
type SplitQueryResult struct {
Splits []*pbg.SplitQueryResponse_Part
Err *mproto.RPCError
}

@ -196,61 +196,3 @@ func ProtoToBoundKeyspaceIdQueries(bsq []*pb.BoundKeyspaceIdQuery) []BoundKeyspa
}
return result
}

// SplitQueryPartsToproto transforms a SplitQueryResponse into proto
func SplitQueryPartsToProto(sqp []SplitQueryPart) *pb.SplitQueryResponse {
result := &pb.SplitQueryResponse{}
if len(sqp) == 0 {
return result
}
result.Splits = make([]*pb.SplitQueryResponse_Part, len(sqp))
for i, split := range sqp {
result.Splits[i] = &pb.SplitQueryResponse_Part{
Size: split.Size,
}
if split.Query != nil {
result.Splits[i].Query = tproto.BoundQueryToProto3(split.Query.Sql, split.Query.BindVariables)
result.Splits[i].KeyRangePart = &pb.SplitQueryResponse_KeyRangePart{
Keyspace: split.Query.Keyspace,
KeyRanges: key.KeyRangesToProto(split.Query.KeyRanges),
}
}
if split.QueryShard != nil {
result.Splits[i].Query = tproto.BoundQueryToProto3(split.QueryShard.Sql, split.QueryShard.BindVariables)
result.Splits[i].ShardPart = &pb.SplitQueryResponse_ShardPart{
Keyspace: split.QueryShard.Keyspace,
Shards: split.QueryShard.Shards,
}
}
}
return result
}

// ProtoToSplitQueryParts transforms a proto3 SplitQueryResponse into
// native types
func ProtoToSplitQueryParts(sqr *pb.SplitQueryResponse) []SplitQueryPart {
if len(sqr.Splits) == 0 {
return nil
}
result := make([]SplitQueryPart, len(sqr.Splits))
for i, split := range sqr.Splits {
if split.KeyRangePart != nil {
result[i].Query = &KeyRangeQuery{
Sql: string(split.Query.Sql),
BindVariables: tproto.Proto3ToBindVariables(split.Query.BindVariables),
Keyspace: split.KeyRangePart.Keyspace,
KeyRanges: key.ProtoToKeyRanges(split.KeyRangePart.KeyRanges),
}
}
if split.ShardPart != nil {
result[i].QueryShard = &QueryShard{
Sql: string(split.Query.Sql),
BindVariables: tproto.Proto3ToBindVariables(split.Query.BindVariables),
Keyspace: split.ShardPart.Keyspace,
Shards: split.ShardPart.Shards,
}
}
result[i].Size = split.Size
}
return result
}

@ -198,20 +198,6 @@ type SplitQueryRequest struct {
SplitCount int
}

// SplitQueryPart is a sub query of SplitQueryRequest.Query
// Only one of Query or QueryShard will be set.
type SplitQueryPart struct {
Query *KeyRangeQuery
QueryShard *QueryShard
Size int64
}

// SplitQueryResult is the result for SplitQueryRequest
type SplitQueryResult struct {
Splits []SplitQueryPart
Err *mproto.RPCError
}

// BeginRequest is the BSON implementation of the proto3 query.BeginRequest
type BeginRequest struct {
CallerID *tproto.CallerID // only used by BSON

@ -15,8 +15,10 @@ import (
"time"

mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/vt/discovery"
tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/vterrors"
"github.com/youtube/vitess/go/vt/vtgate/proto"
"golang.org/x/net/context"

@ -50,9 +52,9 @@ type Resolver struct {

// NewResolver creates a new Resolver. All input parameters are passed through
// for creating ScatterConn.
func NewResolver(serv SrvTopoServer, statsName, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration) *Resolver {
func NewResolver(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, statsName, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, testGateway string) *Resolver {
return &Resolver{
scatterConn: NewScatterConn(serv, statsName, cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife),
scatterConn: NewScatterConn(hc, topoServer, serv, statsName, cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, testGateway),
toposerv: serv,
cell: cell,
}

@ -16,6 +16,7 @@ import (
mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/vt/key"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/vtgate/proto"
"golang.org/x/net/context"

@ -27,7 +28,7 @@ import (

func TestResolverExecuteKeyspaceIds(t *testing.T) {
testResolverGeneric(t, "TestResolverExecuteKeyspaceIds", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
return res.ExecuteKeyspaceIds(context.Background(),
"query",
nil,

@ -41,7 +42,7 @@ func TestResolverExecuteKeyspaceIds(t *testing.T) {

func TestResolverExecuteKeyRanges(t *testing.T) {
testResolverGeneric(t, "TestResolverExecuteKeyRanges", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
return res.ExecuteKeyRanges(context.Background(),
"query",
nil,

@ -55,7 +56,7 @@ func TestResolverExecuteKeyRanges(t *testing.T) {

func TestResolverExecuteEntityIds(t *testing.T) {
testResolverGeneric(t, "TestResolverExecuteEntityIds", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
return res.ExecuteEntityIds(context.Background(),
"query",
nil,

@ -89,7 +90,7 @@ func TestResolverExecuteBatchKeyspaceIds(t *testing.T) {
if err != nil {
return nil, err
}
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qrs, err := res.ExecuteBatchKeyspaceIds(context.Background(),
[]proto.BoundKeyspaceIdQuery{{
Sql: "query",

@ -110,7 +111,7 @@ func TestResolverExecuteBatchKeyspaceIds(t *testing.T) {
func TestResolverStreamExecuteKeyspaceIds(t *testing.T) {
createSandbox("TestResolverStreamExecuteKeyspaceIds")
testResolverStreamGeneric(t, "TestResolverStreamExecuteKeyspaceIds", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
err := res.StreamExecuteKeyspaceIds(context.Background(),
"query",

@ -125,7 +126,7 @@ func TestResolverStreamExecuteKeyspaceIds(t *testing.T) {
return qr, err
})
testResolverStreamGeneric(t, "TestResolverStreamExecuteKeyspaceIds", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
err := res.StreamExecuteKeyspaceIds(context.Background(),
"query",

@ -145,7 +146,7 @@ func TestResolverStreamExecuteKeyRanges(t *testing.T) {
createSandbox("TestResolverStreamExecuteKeyRanges")
// streaming a single shard
testResolverStreamGeneric(t, "TestResolverStreamExecuteKeyRanges", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
err := res.StreamExecuteKeyRanges(context.Background(),
"query",

@ -161,7 +162,7 @@ func TestResolverStreamExecuteKeyRanges(t *testing.T) {
})
// streaming multiple shards
testResolverStreamGeneric(t, "TestResolverStreamExecuteKeyRanges", func() (*mproto.QueryResult, error) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
err := res.StreamExecuteKeyRanges(context.Background(),
"query",

@ -450,7 +451,7 @@ func TestResolverBuildEntityIds(t *testing.T) {
}

func TestResolverDmlOnMultipleKeyspaceIds(t *testing.T) {
res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")

s := createSandbox("TestResolverDmlOnMultipleKeyspaceIds")
sbc0 := &sandboxConn{}

@ -477,7 +478,7 @@ func TestResolverExecBatchReresolve(t *testing.T) {
sbc := &sandboxConn{mustFailRetry: 20}
s.MapTestConn("0", sbc)

res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")

callcount := 0
buildBatchRequest := func() (*scatterBatchRequest, error) {

@ -510,7 +511,7 @@ func TestResolverExecBatchAsTransaction(t *testing.T) {
sbc := &sandboxConn{mustFailRetry: 20}
s.MapTestConn("0", sbc)

res := NewResolver(new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife)
res := NewResolver(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, 0, connTimeoutTotal, connTimeoutPerConn, connLife, "")

callcount := 0
buildBatchRequest := func() (*scatterBatchRequest, error) {

@ -10,6 +10,7 @@ import (
"time"

mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/vtgate/planbuilder"
_ "github.com/youtube/vitess/go/vt/vtgate/vindexes"
"golang.org/x/net/context"

@ -227,7 +228,7 @@ func createRouterEnv() (router *Router, sbc1, sbc2, sbclookup *sandboxConn) {
createSandbox("TestBadSharding")

serv := new(sandboxTopo)
scatterConn := NewScatterConn(serv, "", "aa", 1*time.Second, 10, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour)
scatterConn := NewScatterConn(nil, topo.Server{}, serv, "", "aa", 1*time.Second, 10, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour, "")
router = NewRouter(serv, "aa", routerSchema, "", scatterConn)
return router, sbc1, sbc2, sbclookup
}

@ -13,6 +13,7 @@ import (
mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/sqltypes"
tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
"github.com/youtube/vitess/go/vt/topo"
_ "github.com/youtube/vitess/go/vt/vtgate/vindexes"
)

@ -515,7 +516,7 @@ func TestSelectScatter(t *testing.T) {
s.MapTestConn(shard, sbc)
}
serv := new(sandboxTopo)
scatterConn := NewScatterConn(serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour)
scatterConn := NewScatterConn(nil, topo.Server{}, serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, "")
router := NewRouter(serv, "aa", routerSchema, "", scatterConn)

_, err := routerExec(router, "select * from user", nil)

@ -544,7 +545,7 @@ func TestStreamSelectScatter(t *testing.T) {
s.MapTestConn(shard, sbc)
}
serv := new(sandboxTopo)
scatterConn := NewScatterConn(serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour)
scatterConn := NewScatterConn(nil, topo.Server{}, serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, "")
router := NewRouter(serv, "aa", routerSchema, "", scatterConn)

sql := "select * from user"

@ -583,7 +584,7 @@ func TestSelectScatterFail(t *testing.T) {
s.MapTestConn(shard, sbc)
}
serv := new(sandboxTopo)
scatterConn := NewScatterConn(serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour)
scatterConn := NewScatterConn(nil, topo.Server{}, serv, "", "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, "")
router := NewRouter(serv, "aa", routerSchema, "", scatterConn)

_, err := routerExec(router, "select * from user", nil)

@ -15,14 +15,17 @@ import (
mproto "github.com/youtube/vitess/go/mysql/proto"
"github.com/youtube/vitess/go/stats"
"github.com/youtube/vitess/go/vt/concurrency"
kproto "github.com/youtube/vitess/go/vt/key"
pb "github.com/youtube/vitess/go/vt/proto/topodata"
"github.com/youtube/vitess/go/vt/proto/vtrpc"
"github.com/youtube/vitess/go/vt/discovery"
tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/vterrors"
"github.com/youtube/vitess/go/vt/vtgate/proto"

pbq "github.com/youtube/vitess/go/vt/proto/query"
pb "github.com/youtube/vitess/go/vt/proto/topodata"
pbg "github.com/youtube/vitess/go/vt/proto/vtgate"
"github.com/youtube/vitess/go/vt/proto/vtrpc"
)

// ScatterConn is used for executing queries across

@ -31,6 +34,7 @@ type ScatterConn struct {
timings *stats.MultiTimings
tabletCallErrorCount *stats.MultiCounters
gateway Gateway
testGateway Gateway // test health checking module
}

// shardActionFunc defines the contract for a shard action. Every such function

@ -42,7 +46,7 @@ type shardActionFunc func(shard string, transactionID int64, sResults chan<- int

// NewScatterConn creates a new ScatterConn. All input parameters are passed through
// for creating the appropriate connections.
func NewScatterConn(serv SrvTopoServer, statsName, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration) *ScatterConn {
func NewScatterConn(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, statsName, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, testGateway string) *ScatterConn {
tabletCallErrorCountStatsName := ""
tabletConnectStatsName := ""
if statsName != "" {

@ -50,12 +54,22 @@ func NewScatterConn(serv SrvTopoServer, statsName, cell string, retryDelay time.
tabletConnectStatsName = statsName + "TabletConnect"
}
connTimings := stats.NewMultiTimings(tabletConnectStatsName, []string{"Keyspace", "ShardName", "DbType"})
gateway := GetGatewayCreator()(serv, cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, connTimings)
return &ScatterConn{
gateway := GetGatewayCreator()(hc, topoServer, serv, cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, connTimings)

sc := &ScatterConn{
timings: stats.NewMultiTimings(statsName, []string{"Operation", "Keyspace", "ShardName", "DbType"}),
tabletCallErrorCount: stats.NewMultiCounters(tabletCallErrorCountStatsName, []string{"Operation", "Keyspace", "ShardName", "DbType"}),
gateway: gateway,
}

// this is to test health checking module when using existing gateway
if testGateway != "" {
if gc := GetGatewayCreatorByName(testGateway); gc != nil {
sc.testGateway = gc(hc, topoServer, serv, cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, connTimings)
}
}

return sc
}

// InitializeConnections pre-initializes connections for all shards.

@ -63,6 +77,10 @@ func NewScatterConn(serv SrvTopoServer, statsName, cell string, retryDelay time.
// It is not necessary to call this function before serving queries,
// but it would reduce connection overhead when serving.
func (stc *ScatterConn) InitializeConnections(ctx context.Context) error {
// temporarily start healthchecking regardless of gateway used
if stc.testGateway != nil {
stc.testGateway.InitializeConnections(ctx)
}
return stc.gateway.InitializeConnections(ctx)
}

@ -419,7 +437,7 @@ func (stc *ScatterConn) Rollback(ctx context.Context, session *SafeSession) (err
// splits received from a shard, it construct a KeyRange queries by
// appending that shard's keyrange to the splits. Aggregates all splits across
// all shards in no specific order and returns.
func (stc *ScatterConn) SplitQueryKeyRange(ctx context.Context, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, keyRangeByShard map[string]*pb.KeyRange, keyspace string) ([]proto.SplitQueryPart, error) {
func (stc *ScatterConn) SplitQueryKeyRange(ctx context.Context, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, keyRangeByShard map[string]*pb.KeyRange, keyspace string) ([]*pbg.SplitQueryResponse_Part, error) {
tabletType := pb.TabletType_RDONLY
actionFunc := func(shard string, transactionID int64, results chan<- interface{}) error {
// Get all splits from this shard

@ -428,21 +446,20 @@ func (stc *ScatterConn) SplitQueryKeyRange(ctx context.Context, sql string, bind
return err
}
// Append the keyrange for this shard to all the splits received
keyranges := []kproto.KeyRange{kproto.ProtoToKeyRange(keyRangeByShard[shard])}
splits := []proto.SplitQueryPart{}
for _, query := range queries {
krq := &proto.KeyRangeQuery{
Sql: query.Query.Sql,
BindVariables: query.Query.BindVariables,
Keyspace: keyspace,
KeyRanges: keyranges,
TabletType: topo.TYPE_RDONLY,
keyranges := []*pb.KeyRange{keyRangeByShard[shard]}
splits := make([]*pbg.SplitQueryResponse_Part, len(queries))
for i, query := range queries {
splits[i] = &pbg.SplitQueryResponse_Part{
Query: &pbq.BoundQuery{
Sql: query.Query.Sql,
BindVariables: tproto.BindVariablesToProto3(query.Query.BindVariables),
},
KeyRangePart: &pbg.SplitQueryResponse_KeyRangePart{
Keyspace: keyspace,
KeyRanges: keyranges,
},
Size: query.RowCount,
}
split := proto.SplitQueryPart{
Query: krq,
Size: query.RowCount,
}
splits = append(splits, split)
}
// Push all the splits from this shard to results channel
results <- splits

@ -454,9 +471,9 @@ func (stc *ScatterConn) SplitQueryKeyRange(ctx context.Context, sql string, bind
shards = append(shards, shard)
}
allSplits, allErrors := stc.multiGo(ctx, "SplitQuery", keyspace, shards, tabletType, NewSafeSession(&proto.Session{}), false, actionFunc)
splits := []proto.SplitQueryPart{}
splits := []*pbg.SplitQueryResponse_Part{}
for s := range allSplits {
splits = append(splits, s.([]proto.SplitQueryPart)...)
splits = append(splits, s.([]*pbg.SplitQueryResponse_Part)...)
}
if allErrors.HasErrors() {
err := allErrors.AggrError(stc.aggregateErrors)

@ -470,7 +487,7 @@ func (stc *ScatterConn) SplitQueryKeyRange(ctx context.Context, sql string, bind
// KeyRange queries by appending that shard's name to the
// splits. Aggregates all splits across all shards in no specific
// order and returns.
func (stc *ScatterConn) SplitQueryCustomSharding(ctx context.Context, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, shards []string, keyspace string) ([]proto.SplitQueryPart, error) {
func (stc *ScatterConn) SplitQueryCustomSharding(ctx context.Context, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, shards []string, keyspace string) ([]*pbg.SplitQueryResponse_Part, error) {
tabletType := pb.TabletType_RDONLY
actionFunc := func(shard string, transactionID int64, results chan<- interface{}) error {
// Get all splits from this shard

@ -478,21 +495,21 @@ func (stc *ScatterConn) SplitQueryCustomSharding(ctx context.Context, sql string
if err != nil {
return err
}
// Append the keyrange for this shard to all the splits received
splits := []proto.SplitQueryPart{}
for _, query := range queries {
qs := &proto.QueryShard{
Sql: query.Query.Sql,
BindVariables: query.Query.BindVariables,
Keyspace: keyspace,
Shards: []string{shard},
TabletType: topo.TYPE_RDONLY,
// Use the shards list for all the splits received
shards := []string{shard}
splits := make([]*pbg.SplitQueryResponse_Part, len(queries))
for i, query := range queries {
splits[i] = &pbg.SplitQueryResponse_Part{
Query: &pbq.BoundQuery{
Sql: query.Query.Sql,
BindVariables: tproto.BindVariablesToProto3(query.Query.BindVariables),
},
ShardPart: &pbg.SplitQueryResponse_ShardPart{
Keyspace: keyspace,
Shards: shards,
},
Size: query.RowCount,
}
split := proto.SplitQueryPart{
QueryShard: qs,
Size: query.RowCount,
}
splits = append(splits, split)
}
// Push all the splits from this shard to results channel
results <- splits

@ -500,9 +517,9 @@ func (stc *ScatterConn) SplitQueryCustomSharding(ctx context.Context, sql string
}

allSplits, allErrors := stc.multiGo(ctx, "SplitQuery", keyspace, shards, tabletType, NewSafeSession(&proto.Session{}), false, actionFunc)
splits := []proto.SplitQueryPart{}
splits := []*pbg.SplitQueryResponse_Part{}
for s := range allSplits {
splits = append(splits, s.([]proto.SplitQueryPart)...)
splits = append(splits, s.([]*pbg.SplitQueryResponse_Part)...)
}
if allErrors.HasErrors() {
err := allErrors.AggrError(stc.aggregateErrors)

@ -26,14 +26,14 @@ import (

func TestScatterConnExecute(t *testing.T) {
testScatterConnGeneric(t, "TestScatterConnExecute", func(shards []string) (*mproto.QueryResult, error) {
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
return stc.Execute(context.Background(), "query", nil, "TestScatterConnExecute", shards, pb.TabletType_REPLICA, nil, false)
})
}

func TestScatterConnExecuteMulti(t *testing.T) {
testScatterConnGeneric(t, "TestScatterConnExecuteMulti", func(shards []string) (*mproto.QueryResult, error) {
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
shardVars := make(map[string]map[string]interface{})
for _, shard := range shards {
shardVars[shard] = nil

@ -44,7 +44,7 @@ func TestScatterConnExecuteMulti(t *testing.T) {

func TestScatterConnExecuteBatch(t *testing.T) {
testScatterConnGeneric(t, "TestScatterConnExecuteBatch", func(shards []string) (*mproto.QueryResult, error) {
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
queries := []proto.BoundShardQuery{{
Sql: "query",
BindVariables: nil,

@ -62,7 +62,7 @@ func TestScatterConnExecuteBatch(t *testing.T) {

func TestScatterConnStreamExecute(t *testing.T) {
testScatterConnGeneric(t, "TestScatterConnStreamExecute", func(shards []string) (*mproto.QueryResult, error) {
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
err := stc.StreamExecute(context.Background(), "query", nil, "TestScatterConnStreamExecute", shards, pb.TabletType_REPLICA, func(r *mproto.QueryResult) error {
appendResult(qr, r)

@ -74,7 +74,7 @@ func TestScatterConnStreamExecute(t *testing.T) {

func TestScatterConnStreamExecuteMulti(t *testing.T) {
testScatterConnGeneric(t, "TestScatterConnStreamExecuteMulti", func(shards []string) (*mproto.QueryResult, error) {
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
qr := new(mproto.QueryResult)
shardVars := make(map[string]map[string]interface{})
for _, shard := range shards {

@ -214,7 +214,7 @@ func TestMultiExecs(t *testing.T) {
s.MapTestConn("0", sbc0)
sbc1 := &sandboxConn{}
s.MapTestConn("1", sbc1)
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
shardVars := map[string]map[string]interface{}{
"0": map[string]interface{}{
"bv0": 0,

@ -247,7 +247,7 @@ func TestScatterConnStreamExecuteSendError(t *testing.T) {
s := createSandbox("TestScatterConnStreamExecuteSendError")
sbc := &sandboxConn{}
s.MapTestConn("0", sbc)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
err := stc.StreamExecute(context.Background(), "query", nil, "TestScatterConnStreamExecuteSendError", []string{"0"}, pb.TabletType_REPLICA, func(*mproto.QueryResult) error {
|
||||
return fmt.Errorf("send error")
|
||||
})
|
||||
|
@ -262,7 +262,7 @@ func TestScatterCommitRollbackIncorrectSession(t *testing.T) {
|
|||
s := createSandbox("TestScatterCommitRollbackIncorrectSession")
|
||||
sbc0 := &sandboxConn{}
|
||||
s.MapTestConn("0", sbc0)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
|
||||
// nil session
|
||||
err := stc.Commit(context.Background(), nil)
|
||||
|
@ -285,7 +285,7 @@ func TestScatterConnCommitSuccess(t *testing.T) {
|
|||
s.MapTestConn("0", sbc0)
|
||||
sbc1 := &sandboxConn{}
|
||||
s.MapTestConn("1", sbc1)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
|
||||
// Sequence the executes to ensure commit order
|
||||
session := NewSafeSession(&proto.Session{InTransaction: true})
|
||||
|
@ -343,7 +343,7 @@ func TestScatterConnRollback(t *testing.T) {
|
|||
s.MapTestConn("0", sbc0)
|
||||
sbc1 := &sandboxConn{}
|
||||
s.MapTestConn("1", sbc1)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
|
||||
// Sequence the executes to ensure commit order
|
||||
session := NewSafeSession(&proto.Session{InTransaction: true})
|
||||
|
@ -369,7 +369,7 @@ func TestScatterConnClose(t *testing.T) {
|
|||
s := createSandbox("TestScatterConnClose")
|
||||
sbc := &sandboxConn{}
|
||||
s.MapTestConn("0", sbc)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, "")
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnClose", []string{"0"}, pb.TabletType_REPLICA, nil, false)
|
||||
stc.Close()
|
||||
// retry for 10s as Close() is async.
|
||||
|
@ -410,7 +410,7 @@ func TestScatterConnQueryNotInTransaction(t *testing.T) {
|
|||
s.MapTestConn("0", sbc0)
|
||||
sbc1 := &sandboxConn{}
|
||||
s.MapTestConn("1", sbc1)
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
session := NewSafeSession(&proto.Session{InTransaction: true})
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"0"}, pb.TabletType_REPLICA, session, true)
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"1"}, pb.TabletType_REPLICA, session, false)
|
||||
|
@ -448,7 +448,7 @@ func TestScatterConnQueryNotInTransaction(t *testing.T) {
|
|||
s.MapTestConn("0", sbc0)
|
||||
sbc1 = &sandboxConn{}
|
||||
s.MapTestConn("1", sbc1)
|
||||
stc = NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc = NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
session = NewSafeSession(&proto.Session{InTransaction: true})
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"0"}, pb.TabletType_REPLICA, session, false)
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"1"}, pb.TabletType_REPLICA, session, true)
|
||||
|
@ -486,7 +486,7 @@ func TestScatterConnQueryNotInTransaction(t *testing.T) {
|
|||
s.MapTestConn("0", sbc0)
|
||||
sbc1 = &sandboxConn{}
|
||||
s.MapTestConn("1", sbc1)
|
||||
stc = NewScatterConn(new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife)
|
||||
stc = NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, "")
|
||||
session = NewSafeSession(&proto.Session{InTransaction: true})
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"0"}, pb.TabletType_REPLICA, session, false)
|
||||
stc.Execute(context.Background(), "query1", nil, "TestScatterConnQueryNotInTransaction", []string{"0", "1"}, pb.TabletType_REPLICA, session, true)
|
||||
|
|
|
@@ -15,20 +15,22 @@ import (
	mproto "github.com/youtube/vitess/go/mysql/proto"
	"github.com/youtube/vitess/go/stats"
	"github.com/youtube/vitess/go/vt/concurrency"
	"github.com/youtube/vitess/go/vt/discovery"
	pb "github.com/youtube/vitess/go/vt/proto/topodata"
	tproto "github.com/youtube/vitess/go/vt/tabletserver/proto"
	"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
	"github.com/youtube/vitess/go/vt/topo"
)

const (
	gatewayImplementation = "shardgateway"
	gatewayImplementationShard = "shardgateway"
)

func init() {
	RegisterGatewayCreator(gatewayImplementation, createShardGateway)
	RegisterGatewayCreator(gatewayImplementationShard, createShardGateway)
}

func createShardGateway(serv SrvTopoServer, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, connTimings *stats.MultiTimings) Gateway {
func createShardGateway(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, connTimings *stats.MultiTimings) Gateway {
	return &shardGateway{
		toposerv: serv,
		cell:     cell,
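The init function above registers the shard gateway under its implementation name, which is how vtgate picks a gateway at startup. A hedged sketch of that creator-registry idiom in isolation; the type names and signatures here are simplified stand-ins, not the actual vtgate Gateway API:

```go
package gateway

import "fmt"

// Gateway and GatewayCreator are simplified stand-ins for the vtgate types.
type Gateway interface {
	Close() error
}

type GatewayCreator func(cell string) Gateway

var creators = make(map[string]GatewayCreator)

// RegisterGatewayCreator mirrors the registration call used in init() above:
// each implementation registers itself under a unique name.
func RegisterGatewayCreator(name string, gc GatewayCreator) {
	if _, ok := creators[name]; ok {
		panic(fmt.Sprintf("gateway creator %v already registered", name))
	}
	creators[name] = gc
}

// createGateway looks up the implementation chosen at startup (for example
// via a flag); returning an error keeps unknown names from panicking later.
func createGateway(name, cell string) (Gateway, error) {
	gc, ok := creators[name]
	if !ok {
		return nil, fmt.Errorf("no gateway registered as %v", name)
	}
	return gc(cell), nil
}
```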
|
|
|
@ -20,14 +20,14 @@ import (
|
|||
|
||||
func TestExecuteKeyspaceAlias(t *testing.T) {
|
||||
testVerticalSplitGeneric(t, false, func(shards []string) (*mproto.QueryResult, error) {
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour, "")
|
||||
return stc.Execute(context.Background(), "query", nil, KsTestUnshardedServedFrom, shards, pb.TabletType_RDONLY, nil, false)
|
||||
})
|
||||
}
|
||||
|
||||
func TestBatchExecuteKeyspaceAlias(t *testing.T) {
|
||||
testVerticalSplitGeneric(t, false, func(shards []string) (*mproto.QueryResult, error) {
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour, "")
|
||||
queries := []proto.BoundShardQuery{{
|
||||
Sql: "query",
|
||||
BindVariables: nil,
|
||||
|
@ -45,7 +45,7 @@ func TestBatchExecuteKeyspaceAlias(t *testing.T) {
|
|||
|
||||
func TestStreamExecuteKeyspaceAlias(t *testing.T) {
|
||||
testVerticalSplitGeneric(t, true, func(shards []string) (*mproto.QueryResult, error) {
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour, "")
|
||||
qr := new(mproto.QueryResult)
|
||||
err := stc.StreamExecute(context.Background(), "query", nil, KsTestUnshardedServedFrom, shards, pb.TabletType_RDONLY, func(r *mproto.QueryResult) error {
|
||||
appendResult(qr, r)
|
||||
|
@ -60,7 +60,7 @@ func TestInTransactionKeyspaceAlias(t *testing.T) {
|
|||
sbc := &sandboxConn{mustFailRetry: 3}
|
||||
s.MapTestConn("0", sbc)
|
||||
|
||||
stc := NewScatterConn(new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour)
|
||||
stc := NewScatterConn(nil, topo.Server{}, new(sandboxTopo), "", "aa", 1*time.Millisecond, 3, 20*time.Millisecond, 10*time.Millisecond, 24*time.Hour, "")
|
||||
session := NewSafeSession(&proto.Session{
|
||||
InTransaction: true,
|
||||
ShardSessions: []*proto.ShardSession{{
|
||||
|
|
|
@@ -21,9 +21,11 @@ import (
	"github.com/youtube/vitess/go/stats"
	"github.com/youtube/vitess/go/sync2"
	"github.com/youtube/vitess/go/tb"
	"github.com/youtube/vitess/go/vt/discovery"
	"github.com/youtube/vitess/go/vt/logutil"
	"github.com/youtube/vitess/go/vt/servenv"
	"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
	"github.com/youtube/vitess/go/vt/topo"
	"github.com/youtube/vitess/go/vt/vterrors"
	"github.com/youtube/vitess/go/vt/vtgate/planbuilder"
	"github.com/youtube/vitess/go/vt/vtgate/proto"
@@ -99,12 +101,12 @@ var (
)

// Init initializes VTGate server.
func Init(serv SrvTopoServer, schema *planbuilder.Schema, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, maxInFlight int) {
func Init(hc discovery.HealthCheck, topoServer topo.Server, serv SrvTopoServer, schema *planbuilder.Schema, cell string, retryDelay time.Duration, retryCount int, connTimeoutTotal, connTimeoutPerConn, connLife time.Duration, maxInFlight int, testGateway string) {
	if rpcVTGate != nil {
		log.Fatalf("VTGate already initialized")
	}
	rpcVTGate = &VTGate{
		resolver:     NewResolver(serv, "VttabletCall", cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife),
		resolver:     NewResolver(hc, topoServer, serv, "VttabletCall", cell, retryDelay, retryCount, connTimeoutTotal, connTimeoutPerConn, connLife, testGateway),
		timings:      stats.NewMultiTimings("VtgateApi", []string{"Operation", "Keyspace", "DbType"}),
		rowsReturned: stats.NewMultiCounters("VtgateApiRowsReturned", []string{"Operation", "Keyspace", "DbType"}),

@@ -619,10 +621,10 @@ func (vtg *VTGate) Rollback(ctx context.Context, inSession *proto.Session) error
// original query. Number of sub queries will be a multiple of N that is
// greater than or equal to SplitQueryRequest.SplitCount, where N is the
// number of shards.
func (vtg *VTGate) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
func (vtg *VTGate) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	keyspace, srvKeyspace, shards, err := getKeyspaceShards(ctx, vtg.resolver.toposerv, vtg.resolver.cell, keyspace, pb.TabletType_RDONLY)
	if err != nil {
		return err
		return nil, err
	}
	perShardSplitCount := int(math.Ceil(float64(splitCount) / float64(len(shards))))
	if srvKeyspace.ShardingColumnName != "" {
@@ -632,12 +634,7 @@ func (vtg *VTGate) SplitQuery(ctx context.Context, keyspace string, sql string,
		for _, shard := range shards {
			keyRangeByShard[shard.Name] = shard.KeyRange
		}
		splits, err := vtg.resolver.scatterConn.SplitQueryKeyRange(ctx, sql, bindVariables, splitColumn, perShardSplitCount, keyRangeByShard, keyspace)
		if err != nil {
			return err
		}
		reply.Splits = splits
		return nil
		return vtg.resolver.scatterConn.SplitQueryKeyRange(ctx, sql, bindVariables, splitColumn, perShardSplitCount, keyRangeByShard, keyspace)
	}

	// we are using custom sharding, so the result
@@ -646,12 +643,7 @@ func (vtg *VTGate) SplitQuery(ctx context.Context, keyspace string, sql string,
	for i, shard := range shards {
		shardNames[i] = shard.Name
	}
	splits, err := vtg.resolver.scatterConn.SplitQueryCustomSharding(ctx, sql, bindVariables, splitColumn, perShardSplitCount, shardNames, keyspace)
	if err != nil {
		return err
	}
	reply.Splits = splits
	return nil
	return vtg.resolver.scatterConn.SplitQueryCustomSharding(ctx, sql, bindVariables, splitColumn, perShardSplitCount, shardNames, keyspace)
}

// GetSrvKeyspace is part of the vtgate service API.
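The perShardSplitCount line above is the only arithmetic in this method: the requested split count is spread across shards and rounded up, which is why the doc comment says the number of returned sub-queries is a multiple of the shard count N and at least SplitCount. A standalone illustration of that rounding (not Vitess code):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	splitCount := 24 // requested SplitCount
	numShards := 7   // shards in the keyspace

	// Same computation as perShardSplitCount in VTGate.SplitQuery.
	perShard := int(math.Ceil(float64(splitCount) / float64(numShards)))
	total := perShard * numShards

	fmt.Println(perShard, total) // 4 28: 28 parts, a multiple of 7 and >= 24
}
```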
|
|
|
@ -11,7 +11,6 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/youtube/vitess/go/vt/key"
|
||||
kproto "github.com/youtube/vitess/go/vt/key"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
|
||||
"github.com/youtube/vitess/go/vt/topo"
|
||||
"github.com/youtube/vitess/go/vt/vtgate/proto"
|
||||
|
@ -36,7 +35,7 @@ func init() {
|
|||
}
|
||||
}
|
||||
`)
|
||||
Init(new(sandboxTopo), schema, "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, 0)
|
||||
Init(nil, topo.Server{}, new(sandboxTopo), schema, "aa", 1*time.Second, 10, 2*time.Millisecond, 1*time.Millisecond, 24*time.Hour, 0, "")
|
||||
}
|
||||
|
||||
func TestVTGateExecute(t *testing.T) {
|
||||
|
@ -733,42 +732,37 @@ func TestVTGateSplitQuery(t *testing.T) {
|
|||
}
|
||||
sql := "select col1, col2 from table"
|
||||
splitCount := 24
|
||||
result := new(proto.SplitQueryResult)
|
||||
err := rpcVTGate.SplitQuery(context.Background(),
|
||||
splits, err := rpcVTGate.SplitQuery(context.Background(),
|
||||
keyspace,
|
||||
sql,
|
||||
nil,
|
||||
"",
|
||||
splitCount,
|
||||
result)
|
||||
splitCount)
|
||||
if err != nil {
|
||||
t.Errorf("want nil, got %v", err)
|
||||
}
|
||||
_, err = getAllShards(DefaultShardSpec)
|
||||
// Total number of splits should be number of shards * splitsPerShard
|
||||
if splitCount != len(result.Splits) {
|
||||
t.Errorf("wrong number of splits, want \n%+v, got \n%+v", splitCount, len(result.Splits))
|
||||
if splitCount != len(splits) {
|
||||
t.Errorf("wrong number of splits, want \n%+v, got \n%+v", splitCount, len(splits))
|
||||
}
|
||||
actualSqlsByKeyRange := map[kproto.KeyRange][]string{}
|
||||
for _, split := range result.Splits {
|
||||
actualSqlsByKeyRange := map[string][]string{}
|
||||
for _, split := range splits {
|
||||
if split.Size != sandboxSQRowCount {
|
||||
t.Errorf("wrong split size, want \n%+v, got \n%+v", sandboxSQRowCount, split.Size)
|
||||
}
|
||||
if split.Query.Keyspace != keyspace {
|
||||
t.Errorf("wrong split size, want \n%+v, got \n%+v", keyspace, split.Query.Keyspace)
|
||||
if split.KeyRangePart.Keyspace != keyspace {
|
||||
t.Errorf("wrong split size, want \n%+v, got \n%+v", keyspace, split.KeyRangePart.Keyspace)
|
||||
}
|
||||
if len(split.Query.KeyRanges) != 1 {
|
||||
t.Errorf("wrong number of keyranges, want \n%+v, got \n%+v", 1, len(split.Query.KeyRanges))
|
||||
if len(split.KeyRangePart.KeyRanges) != 1 {
|
||||
t.Errorf("wrong number of keyranges, want \n%+v, got \n%+v", 1, len(split.KeyRangePart.KeyRanges))
|
||||
}
|
||||
if split.Query.TabletType != topo.TYPE_RDONLY {
|
||||
t.Errorf("wrong tablet type, want \n%+v, got \n%+v", topo.TYPE_RDONLY, split.Query.TabletType)
|
||||
}
|
||||
kr := split.Query.KeyRanges[0]
|
||||
kr := key.KeyRangeString(split.KeyRangePart.KeyRanges[0])
|
||||
actualSqlsByKeyRange[kr] = append(actualSqlsByKeyRange[kr], split.Query.Sql)
|
||||
}
|
||||
expectedSqlsByKeyRange := map[kproto.KeyRange][]string{}
|
||||
expectedSqlsByKeyRange := map[string][]string{}
|
||||
for _, kr := range keyranges {
|
||||
expectedSqlsByKeyRange[kproto.ProtoToKeyRange(kr)] = []string{
|
||||
expectedSqlsByKeyRange[key.KeyRangeString(kr)] = []string{
|
||||
"select col1, col2 from table /*split 0 */",
|
||||
"select col1, col2 from table /*split 1 */",
|
||||
"select col1, col2 from table /*split 2 */",
|
||||
|
|
|
@@ -183,7 +183,7 @@ func (conn *VTGateConn) Close() {

// SplitQuery splits a query into equally sized smaller queries by
// appending primary key range clauses to the original query
func (conn *VTGateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error) {
func (conn *VTGateConn) SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
	return conn.impl.SplitQuery(ctx, keyspace, query, bindVars, splitColumn, splitCount)
}

@@ -384,7 +384,7 @@ type Impl interface {

	// SplitQuery splits a query into equally sized smaller queries by
	// appending primary key range clauses to the original query.
	SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]proto.SplitQueryPart, error)
	SplitQuery(ctx context.Context, keyspace string, query string, bindVars map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error)

	// GetSrvKeyspace returns a topo.SrvKeyspace.
	GetSrvKeyspace(ctx context.Context, keyspace string) (*pb.SrvKeyspace, error)
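With the signature above, a Map/Reduce-style reader asks vtgate for splits and then runs each returned part where its ShardPart or KeyRangePart points. The sketch below is assembled only from calls that appear in this diff (DialProtocol and the blank gRPC import show up in the vttest code later); the address, keyspace, query, and split count are placeholder values:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"golang.org/x/net/context"

	"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"
	// Register the gRPC client implementation used by DialProtocol("grpc", ...).
	_ "github.com/youtube/vitess/go/vt/vtgate/grpcvtgateconn"
)

func main() {
	ctx := context.Background()

	// Placeholder address; assumes a vtgate serving gRPC there.
	conn, err := vtgateconn.DialProtocol(ctx, "grpc", "localhost:15991", 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask vtgate to split a full-table scan into roughly 16 parts.
	parts, err := conn.SplitQuery(ctx, "test_keyspace", "select id, name from a", nil, "", 16)
	if err != nil {
		log.Fatal(err)
	}

	// Each part carries a bound query plus a ShardPart or KeyRangePart that
	// tells the reader where to execute it.
	for _, part := range parts {
		fmt.Println(part.Query.Sql, part.Size)
	}
}
```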
|
|
|
@ -26,6 +26,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/vtgate/vtgateservice"
|
||||
"golang.org/x/net/context"
|
||||
|
||||
pbq "github.com/youtube/vitess/go/vt/proto/query"
|
||||
pb "github.com/youtube/vitess/go/vt/proto/topodata"
|
||||
pbg "github.com/youtube/vitess/go/vt/proto/vtgate"
|
||||
"github.com/youtube/vitess/go/vt/proto/vtrpc"
|
||||
|
@ -606,9 +607,9 @@ type querySplitQuery struct {
|
|||
}
|
||||
|
||||
// SplitQuery is part of the VTGateService interface
|
||||
func (f *fakeVTGateService) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error {
|
||||
func (f *fakeVTGateService) SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error) {
|
||||
if f.hasError {
|
||||
return errTestVtGateError
|
||||
return nil, errTestVtGateError
|
||||
}
|
||||
if f.panics {
|
||||
panic(fmt.Errorf("test forced panic"))
|
||||
|
@ -624,8 +625,7 @@ func (f *fakeVTGateService) SplitQuery(ctx context.Context, keyspace string, sql
|
|||
if !reflect.DeepEqual(query, splitQueryRequest) {
|
||||
f.t.Errorf("SplitQuery has wrong input: got %#v wanted %#v", query, splitQueryRequest)
|
||||
}
|
||||
*reply = *splitQueryResult
|
||||
return nil
|
||||
return splitQueryResult, nil
|
||||
}
|
||||
|
||||
// GetSrvKeyspace is part of the VTGateService interface
|
||||
|
@@ -2210,9 +2210,26 @@ func testSplitQuery(t *testing.T, conn *vtgateconn.VTGateConn) {
	if err != nil {
		t.Fatalf("SplitQuery failed: %v", err)
	}
	if !reflect.DeepEqual(qsl, splitQueryResult.Splits) {
		t.Errorf("SplitQuery returned wrong result: got %+v wanted %+v", qsl, splitQueryResult.Splits)
		t.Errorf("SplitQuery returned wrong result: got %+v wanted %+v", qsl[0].Query, splitQueryResult.Splits[0].Query)
	if len(qsl) == 1 && len(qsl[0].Query.BindVariables) == 1 {
		bv := qsl[0].Query.BindVariables["bind1"]
		if len(bv.ValueBytes) == 0 {
			bv.ValueBytes = nil
		}
		if len(bv.ValueBytesList) == 0 {
			bv.ValueBytesList = nil
		}
		if len(bv.ValueIntList) == 0 {
			bv.ValueIntList = nil
		}
		if len(bv.ValueUintList) == 0 {
			bv.ValueUintList = nil
		}
		if len(bv.ValueFloatList) == 0 {
			bv.ValueFloatList = nil
		}
	}
	if !reflect.DeepEqual(qsl, splitQueryResult) {
		t.Errorf("SplitQuery returned wrong result: got %#v wanted %#v", qsl, splitQueryResult)
	}
}

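The field-by-field normalization above is needed because reflect.DeepEqual distinguishes a nil slice from an empty non-nil one, and the proto3 round trip can hand back empty slices where the fixture uses nil. A two-line demonstration of that Go behavior:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	var nilSlice []int    // nil
	emptySlice := []int{} // non-nil, length 0

	fmt.Println(reflect.DeepEqual(nilSlice, emptySlice)) // false: nil vs empty differ
	fmt.Println(len(nilSlice) == len(emptySlice))        // true: both have length 0
}
```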
|
@ -2751,24 +2768,27 @@ var splitQueryRequest = &querySplitQuery{
|
|||
SplitCount: 13,
|
||||
}
|
||||
|
||||
var splitQueryResult = &proto.SplitQueryResult{
|
||||
Splits: []proto.SplitQueryPart{
|
||||
proto.SplitQueryPart{
|
||||
Query: &proto.KeyRangeQuery{
|
||||
Sql: "out for SplitQuery",
|
||||
BindVariables: map[string]interface{}{
|
||||
"bind1": int64(1114444),
|
||||
},
|
||||
Keyspace: "ksout",
|
||||
KeyRanges: []key.KeyRange{
|
||||
key.KeyRange{
|
||||
Start: key.KeyspaceId("s"),
|
||||
End: key.KeyspaceId("e"),
|
||||
},
|
||||
var splitQueryResult = []*pbg.SplitQueryResponse_Part{
|
||||
&pbg.SplitQueryResponse_Part{
|
||||
Query: &pbq.BoundQuery{
|
||||
Sql: "out for SplitQuery",
|
||||
BindVariables: map[string]*pbq.BindVariable{
|
||||
"bind1": &pbq.BindVariable{
|
||||
Type: pbq.BindVariable_TYPE_INT,
|
||||
ValueInt: 1114444,
|
||||
},
|
||||
},
|
||||
Size: 12344,
|
||||
},
|
||||
KeyRangePart: &pbg.SplitQueryResponse_KeyRangePart{
|
||||
Keyspace: "ksout",
|
||||
KeyRanges: []*pb.KeyRange{
|
||||
&pb.KeyRange{
|
||||
Start: []byte{'s'},
|
||||
End: []byte{'e'},
|
||||
},
|
||||
},
|
||||
},
|
||||
Size: 12344,
|
||||
},
|
||||
}
|
||||
|
||||
|
|
|
@@ -38,7 +38,7 @@ type VTGateService interface {
	Rollback(ctx context.Context, inSession *proto.Session) error

	// Map Reduce support
	SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int, reply *proto.SplitQueryResult) error
	SplitQuery(ctx context.Context, keyspace string, sql string, bindVariables map[string]interface{}, splitColumn string, splitCount int) ([]*pbg.SplitQueryResponse_Part, error)

	// Topology support
	GetSrvKeyspace(ctx context.Context, keyspace string) (*pb.SrvKeyspace, error)
|
|
|
@@ -0,0 +1,26 @@
// Copyright 2015, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package vttest

import (
	"errors"
	"os"
	"path"

	// we use gRPC everywhere, so import the vtgate client.
	_ "github.com/youtube/vitess/go/vt/vtgate/grpcvtgateconn"
)

func launcherPath() (string, error) {
	vttop := os.Getenv("VTTOP")
	if vttop == "" {
		return "", errors.New("VTTOP not set")
	}
	return path.Join(vttop, "py/vttest/run_local_database.py"), nil
}

func vtgateProtocol() string {
	return "grpc"
}
|
|
@@ -29,6 +29,7 @@ type Conn struct {

// DB is a fake database and all its methods are thread safe.
type DB struct {
	Name         string
	isConnFail   bool
	data         map[string]*proto.QueryResult
	rejectedData map[string]error
@@ -112,8 +113,8 @@ func (db *DB) IsConnFail() bool {
	return db.isConnFail
}

// NewFakeSqlDBConn creates a new FakeSqlDBConn instance
func NewFakeSqlDBConn(db *DB) *Conn {
// NewFakeSQLDBConn creates a new FakeSqlDBConn instance
func NewFakeSQLDBConn(db *DB) *Conn {
	return &Conn{
		db:       db,
		isClosed: false,
@@ -271,6 +272,7 @@ func (conn *Conn) SetCharset(cs proto.Charset) error {
func Register() *DB {
	name := fmt.Sprintf("fake-%d", rand.Int63())
	db := &DB{
		Name:         name,
		data:         make(map[string]*proto.QueryResult),
		rejectedData: make(map[string]error),
		queryCalled:  make(map[string]int),
@@ -279,9 +281,8 @@ func Register() *DB {
		if db.IsConnFail() {
			return nil, newConnError()
		}
		return NewFakeSqlDBConn(db), nil
		return NewFakeSQLDBConn(db), nil
	})
	sqldb.DefaultDB = name
	return db
}

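Register above hands tests a *DB whose canned results are keyed by the exact query string, with every method guarded for concurrent use, and installs it as sqldb.DefaultDB. The miniature version below shows that keyed-fake design on its own; the type and method names are illustrative, not the fakesqldb API:

```go
package main

import (
	"fmt"
	"sync"
)

// fakeDB mirrors the design of fakesqldb.DB in miniature: results are keyed
// by the exact query string and access is guarded by a mutex so all methods
// are safe for concurrent use. Names and fields here are illustrative only.
type fakeDB struct {
	mu          sync.Mutex
	data        map[string]string // query -> canned result
	queryCalled map[string]int    // query -> times served
}

func newFakeDB() *fakeDB {
	return &fakeDB{
		data:        make(map[string]string),
		queryCalled: make(map[string]int),
	}
}

func (db *fakeDB) AddQuery(query, result string) {
	db.mu.Lock()
	defer db.mu.Unlock()
	db.data[query] = result
}

func (db *fakeDB) GetQuery(query string) (string, bool) {
	db.mu.Lock()
	defer db.mu.Unlock()
	r, ok := db.data[query]
	if ok {
		db.queryCalled[query]++
	}
	return r, ok
}

func main() {
	db := newFakeDB()
	db.AddQuery("select 1 from dual", "1")
	if r, ok := db.GetQuery("select 1 from dual"); ok {
		fmt.Println(r)
	}
}
```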
|
|
|
@ -7,101 +7,185 @@
|
|||
package vttest
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math/rand"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
"path"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/youtube/vitess/go/sqldb"
|
||||
)
|
||||
|
||||
var (
|
||||
curShardNames []string
|
||||
curReplicas int
|
||||
curRdonly int
|
||||
curKeyspace string
|
||||
curSchema string
|
||||
curVSchema string
|
||||
curVtGatePort int
|
||||
)
|
||||
// Handle allows you to interact with the processes launched by vttest.
|
||||
type Handle struct {
|
||||
Data map[string]interface{}
|
||||
|
||||
func run(shardNames []string, replicas, rdonly int, keyspace, schema, vschema, op string) error {
|
||||
curShardNames = shardNames
|
||||
curReplicas = replicas
|
||||
curRdonly = rdonly
|
||||
curKeyspace = keyspace
|
||||
curSchema = schema
|
||||
curVSchema = vschema
|
||||
vttop := os.Getenv("VTTOP")
|
||||
if vttop == "" {
|
||||
return errors.New("VTTOP not set")
|
||||
cmd *exec.Cmd
|
||||
stdin io.WriteCloser
|
||||
|
||||
// dbname is valid only for LaunchMySQL.
|
||||
dbname string
|
||||
}
|
||||
|
||||
// LaunchVitess launches a vitess test cluster.
|
||||
func LaunchVitess(topo, schemaDir string, verbose bool) (hdl *Handle, err error) {
|
||||
hdl = &Handle{}
|
||||
err = hdl.run(randomPort(), topo, schemaDir, false, verbose)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cfg, err := json.Marshal(map[string]int{
|
||||
"replica": replicas,
|
||||
"rdonly": rdonly,
|
||||
})
|
||||
return hdl, nil
|
||||
}
|
||||
|
||||
// LauncMySQL launches just a MySQL instance with the specified db name. The schema
|
||||
// is specified as a string instead of a file.
|
||||
func LauncMySQL(dbName, schema string, verbose bool) (hdl *Handle, err error) {
|
||||
hdl = &Handle{
|
||||
dbname: dbName,
|
||||
}
|
||||
var schemaDir string
|
||||
if schema != "" {
|
||||
schemaDir, err = ioutil.TempDir("", "vt")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer os.RemoveAll(schemaDir)
|
||||
ksDir := path.Join(schemaDir, dbName)
|
||||
err = os.Mkdir(ksDir, os.ModeDir|0775)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fileName := path.Join(ksDir, "schema.sql")
|
||||
f, err := os.Create(fileName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n, err := f.WriteString(schema)
|
||||
if n != len(schema) {
|
||||
return nil, errors.New("short write")
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = f.Close()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
err = hdl.run(randomPort(), fmt.Sprintf("%s/0:%s", dbName, dbName), schemaDir, true, verbose)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return hdl, nil
|
||||
}
|
||||
|
||||
// TearDown tears down the launched processes.
|
||||
func (hdl *Handle) TearDown() error {
|
||||
_, err := hdl.stdin.Write([]byte("\n"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
cmd := exec.Command(
|
||||
"python",
|
||||
vttop+"/test/java_vtgate_test_helper.py",
|
||||
"--shards",
|
||||
strings.Join(shardNames, ","),
|
||||
"--tablet-config",
|
||||
string(cfg),
|
||||
"--keyspace",
|
||||
keyspace,
|
||||
)
|
||||
if schema != "" {
|
||||
cmd.Args = append(cmd.Args, "--schema", schema)
|
||||
return hdl.cmd.Wait()
|
||||
}
|
||||
|
||||
// MySQLConnParams builds the MySQL connection params.
|
||||
// It's valid only if you used LaunchMySQL.
|
||||
func (hdl *Handle) MySQLConnParams() (sqldb.ConnParams, error) {
|
||||
params := sqldb.ConnParams{
|
||||
Charset: "utf8",
|
||||
DbName: hdl.dbname,
|
||||
}
|
||||
if vschema != "" {
|
||||
cmd.Args = append(cmd.Args, "--vschema", vschema)
|
||||
if hdl.Data == nil {
|
||||
return params, errors.New("no data")
|
||||
}
|
||||
cmd.Args = append(cmd.Args, op)
|
||||
cmd.Stderr = os.Stderr
|
||||
var stdout io.ReadCloser
|
||||
var output []byte
|
||||
stdout, err = cmd.StdoutPipe()
|
||||
cmd.Start()
|
||||
r := bufio.NewReader(stdout)
|
||||
output, err = r.ReadBytes('\n')
|
||||
if err == nil {
|
||||
var data map[string]interface{}
|
||||
if err := json.Unmarshal(output, &data); err == nil {
|
||||
curVtGatePortFloat64, ok := data["port"].(float64)
|
||||
if ok {
|
||||
curVtGatePort = int(curVtGatePortFloat64)
|
||||
fmt.Printf("VtGate Port = %d\n", curVtGatePort)
|
||||
}
|
||||
iuser, ok := hdl.Data["username"]
|
||||
if !ok {
|
||||
return params, errors.New("no username")
|
||||
}
|
||||
user, ok := iuser.(string)
|
||||
if !ok {
|
||||
return params, fmt.Errorf("invalid user type: %T", iuser)
|
||||
}
|
||||
params.Uname = user
|
||||
if ipassword, ok := hdl.Data["password"]; ok {
|
||||
password, ok := ipassword.(string)
|
||||
if !ok {
|
||||
return params, fmt.Errorf("invalid password type: %T", ipassword)
|
||||
}
|
||||
params.Pass = password
|
||||
}
|
||||
return err
|
||||
if ihost, ok := hdl.Data["host"]; ok {
|
||||
host, ok := ihost.(string)
|
||||
if !ok {
|
||||
return params, fmt.Errorf("invalid host type: %T", ihost)
|
||||
}
|
||||
params.Host = host
|
||||
}
|
||||
if iport, ok := hdl.Data["port"]; ok {
|
||||
port, ok := iport.(float64)
|
||||
if !ok {
|
||||
return params, fmt.Errorf("invalid port type: %T", iport)
|
||||
}
|
||||
params.Port = int(port)
|
||||
}
|
||||
if isocket, ok := hdl.Data["socket"]; ok {
|
||||
socket, ok := isocket.(string)
|
||||
if !ok {
|
||||
return params, fmt.Errorf("invalid socket type: %T", isocket)
|
||||
}
|
||||
params.UnixSocket = socket
|
||||
}
|
||||
return params, nil
|
||||
}
|
||||
|
||||
// LocalLaunch launches the cluster. Only one cluster can be active at a time.
|
||||
func LocalLaunch(shardNames []string, replicas, rdonly int, keyspace, schema, vschema string) error {
|
||||
err := run(shardNames, replicas, rdonly, keyspace, schema, vschema, "setup")
|
||||
func (hdl *Handle) run(port int, topo, schemaDir string, mysqlOnly, verbose bool) error {
|
||||
launcher, err := launcherPath()
|
||||
if err != nil {
|
||||
LocalTeardown()
|
||||
return err
|
||||
}
|
||||
return err
|
||||
hdl.cmd = exec.Command(
|
||||
launcher,
|
||||
"--port", strconv.Itoa(port),
|
||||
"--topology", topo,
|
||||
)
|
||||
if schemaDir != "" {
|
||||
hdl.cmd.Args = append(hdl.cmd.Args, "--schema_dir", schemaDir)
|
||||
}
|
||||
if mysqlOnly {
|
||||
hdl.cmd.Args = append(hdl.cmd.Args, "--mysql_only")
|
||||
}
|
||||
if verbose {
|
||||
hdl.cmd.Args = append(hdl.cmd.Args, "--verbose")
|
||||
}
|
||||
hdl.cmd.Stderr = os.Stderr
|
||||
stdout, err := hdl.cmd.StdoutPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
decoder := json.NewDecoder(stdout)
|
||||
hdl.stdin, err = hdl.cmd.StdinPipe()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = hdl.cmd.Start()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return decoder.Decode(&hdl.Data)
|
||||
}
|
||||
|
||||
// LocalTeardown shuts down the previously launched cluster.
|
||||
func LocalTeardown() error {
|
||||
if curShardNames == nil {
|
||||
return nil
|
||||
}
|
||||
err := run(curShardNames, curReplicas, curRdonly, curKeyspace, curSchema, curVSchema, "teardown")
|
||||
curShardNames = nil
|
||||
return err
|
||||
// randomPort returns a random number between 10k & 30k.
|
||||
func randomPort() int {
|
||||
v := rand.Int31n(20000)
|
||||
return int(v + 10000)
|
||||
}
|
||||
|
||||
// VtGatePort returns current VtGate port
|
||||
func VtGatePort() int {
|
||||
return curVtGatePort
|
||||
func init() {
|
||||
rand.Seed(time.Now().UnixNano())
|
||||
}
|
||||
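The test file that follows exercises this API end to end; for orientation, here is a condensed sketch of the MySQL-only flow. It assumes VTTOP is set and that the package import path is github.com/youtube/vitess/go/vt/vttest (inferred from the fakesqldb subpackage path used elsewhere in this diff); note the launcher really is spelled LauncMySQL in this commit.

```go
package main

import (
	"log"

	"github.com/youtube/vitess/go/mysql"
	"github.com/youtube/vitess/go/vt/vttest"
)

func main() {
	// Starts a single MySQL via py/vttest/run_local_database.py --mysql_only,
	// writing the schema string into a temporary schema directory.
	hdl, err := vttest.LauncMySQL("vttest", "create table a(id int, name varchar(128), primary key(id))", false)
	if err != nil {
		log.Fatal(err)
	}
	defer hdl.TearDown()

	// Connection parameters come from the JSON map the launcher printed on
	// stdout, which run() decoded into hdl.Data.
	params, err := hdl.MySQLConnParams()
	if err != nil {
		log.Fatal(err)
	}
	conn, err := mysql.Connect(params)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := conn.ExecuteFetch("select 1 from a", 10, false); err != nil {
		log.Fatal(err)
	}
}
```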
|
|
|
@ -0,0 +1,95 @@
|
|||
// Copyright 2015, Google Inc. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package vttest
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"github.com/youtube/vitess/go/mysql"
|
||||
"github.com/youtube/vitess/go/vt/proto/topodata"
|
||||
"github.com/youtube/vitess/go/vt/vtgate/vtgateconn"
|
||||
)
|
||||
|
||||
func TestVitess(t *testing.T) {
|
||||
hdl, err := LaunchVitess("test_keyspace/0:test_keyspace", "", false)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
defer func() {
|
||||
err = hdl.TearDown()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
}()
|
||||
if hdl.Data == nil {
|
||||
t.Error("map is nil")
|
||||
return
|
||||
}
|
||||
portName := "port"
|
||||
if vtgateProtocol() == "grpc" {
|
||||
portName = "grpc_port"
|
||||
}
|
||||
fport, ok := hdl.Data[portName]
|
||||
if !ok {
|
||||
t.Errorf("port %v not found in map", portName)
|
||||
return
|
||||
}
|
||||
port := int(fport.(float64))
|
||||
ctx := context.Background()
|
||||
conn, err := vtgateconn.DialProtocol(ctx, vtgateProtocol(), fmt.Sprintf("localhost:%d", port), 5*time.Second)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
_, err = conn.ExecuteShards(ctx, "select 1 from dual", "test_keyspace", []string{"0"}, nil, topodata.TabletType_MASTER)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func TestMySQL(t *testing.T) {
|
||||
hdl, err := LauncMySQL("vttest", "create table a(id int, name varchar(128), primary key(id))", false)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
defer func() {
|
||||
err = hdl.TearDown()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
return
|
||||
}
|
||||
}()
|
||||
if hdl.Data == nil {
|
||||
t.Error("map is nil")
|
||||
return
|
||||
}
|
||||
params, err := hdl.MySQLConnParams()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
conn, err := mysql.Connect(params)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
_, err = conn.ExecuteFetch("insert into a values(1, 'name')", 10, false)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
qr, err := conn.ExecuteFetch("select * from a", 10, false)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
if qr.RowsAffected != 1 {
|
||||
t.Errorf("Rows affected: %d, want 1", qr.RowsAffected)
|
||||
}
|
||||
}
|
|
@ -21,6 +21,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/wrangler/testlib"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
@ -243,25 +244,26 @@ func TestSplitClonePopulateBlpCheckpoint(t *testing.T) {
|
|||
}
|
||||
|
||||
func testSplitClone(t *testing.T, strategy string) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
sourceMaster := testlib.NewFakeTablet(t, wr, "cell1", 0,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
sourceRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 1,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
sourceRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 2,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
|
||||
leftMaster := testlib.NewFakeTablet(t, wr, "cell1", 10,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
leftRdonly := testlib.NewFakeTablet(t, wr, "cell1", 11,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
|
||||
rightMaster := testlib.NewFakeTablet(t, wr, "cell1", 20,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "40-80"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "40-80"))
|
||||
rightRdonly := testlib.NewFakeTablet(t, wr, "cell1", 21,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "40-80"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "40-80"))
|
||||
|
||||
for _, ft := range []*testlib.FakeTablet{sourceMaster, sourceRdonly1, sourceRdonly2, leftMaster, leftRdonly, rightMaster, rightRdonly} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@ -18,6 +18,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/wrangler/testlib"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
@ -146,6 +147,7 @@ func (sq *sourceTabletServer) StreamExecute(ctx context.Context, target *pb.Targ
|
|||
// TODO(aaijazi): Create a test in which source and destination data does not match
|
||||
|
||||
func TestSplitDiff(t *testing.T) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
// We need to use FakeTabletManagerClient because we don't have a good way to fake the binlog player yet,
|
||||
// which is necessary for synchronizing replication.
|
||||
|
@ -153,18 +155,18 @@ func TestSplitDiff(t *testing.T) {
|
|||
ctx := context.Background()
|
||||
|
||||
sourceMaster := testlib.NewFakeTablet(t, wr, "cell1", 0,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
sourceRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 1,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
sourceRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 2,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-80"))
|
||||
|
||||
leftMaster := testlib.NewFakeTablet(t, wr, "cell1", 10,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
leftRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 11,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
leftRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 12,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "ks", "-40"))
|
||||
|
||||
for _, ft := range []*testlib.FakeTablet{sourceMaster, sourceRdonly1, sourceRdonly2, leftMaster, leftRdonly1, leftRdonly2} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@ -17,6 +17,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/wrangler/testlib"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
@ -70,6 +71,7 @@ func (sq *sqlDifferTabletServer) StreamExecute(ctx context.Context, target *pb.T
|
|||
// TODO(aaijazi): Create a test in which source and destination data does not match
|
||||
// TODO(aaijazi): This test is reallly slow; investigate why.
|
||||
func TestSqlDiffer(t *testing.T) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
// We need to use FakeTabletManagerClient because we don't have a good way to fake the binlog player yet,
|
||||
// which is necessary for synchronizing replication.
|
||||
|
@ -77,18 +79,18 @@ func TestSqlDiffer(t *testing.T) {
|
|||
ctx := context.Background()
|
||||
|
||||
supersetMaster := testlib.NewFakeTablet(t, wr, "cell1", 0,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
supersetRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 1,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
supersetRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 2,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
|
||||
subsetMaster := testlib.NewFakeTablet(t, wr, "cell1", 10,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
subsetRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 11,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
subsetRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 12,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
|
||||
for _, ft := range []*testlib.FakeTablet{supersetMaster, supersetRdonly1, supersetRdonly2, subsetMaster, subsetRdonly1, subsetRdonly2} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@ -21,6 +21,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/wrangler/testlib"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
@ -228,15 +229,16 @@ func TestVerticalSplitClonePopulateBlpCheckpoint(t *testing.T) {
|
|||
}
|
||||
|
||||
func testVerticalSplitClone(t *testing.T, strategy string) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
sourceMaster := testlib.NewFakeTablet(t, wr, "cell1", 0,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
sourceRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 1,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
sourceRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 2,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
|
||||
// Create the destination keyspace with the appropriate ServedFromMap
|
||||
ki := &pbt.Keyspace{
|
||||
|
@ -259,9 +261,9 @@ func testVerticalSplitClone(t *testing.T, strategy string) {
|
|||
wr.TopoServer().CreateKeyspace(ctx, "destination_ks", ki)
|
||||
|
||||
destMaster := testlib.NewFakeTablet(t, wr, "cell1", 10,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
destRdonly := testlib.NewFakeTablet(t, wr, "cell1", 11,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
|
||||
for _, ft := range []*testlib.FakeTablet{sourceMaster, sourceRdonly1, sourceRdonly2, destMaster, destRdonly} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@ -18,6 +18,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/queryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/wrangler/testlib"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
@ -81,6 +82,7 @@ func (sq *verticalDiffTabletServer) StreamExecute(ctx context.Context, target *p
|
|||
// TODO(aaijazi): Create a test in which source and destination data does not match
|
||||
|
||||
func TestVerticalSplitDiff(t *testing.T) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
// We need to use FakeTabletManagerClient because we don't have a good way to fake the binlog player yet,
|
||||
// which is necessary for synchronizing replication.
|
||||
|
@ -88,11 +90,11 @@ func TestVerticalSplitDiff(t *testing.T) {
|
|||
ctx := context.Background()
|
||||
|
||||
sourceMaster := testlib.NewFakeTablet(t, wr, "cell1", 0,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
sourceRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 1,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
sourceRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 2,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "source_ks", "0"))
|
||||
|
||||
// Create the destination keyspace with the appropriate ServedFromMap
|
||||
ki := &pbt.Keyspace{
|
||||
|
@ -114,11 +116,11 @@ func TestVerticalSplitDiff(t *testing.T) {
|
|||
wr.TopoServer().CreateKeyspace(ctx, "destination_ks", ki)
|
||||
|
||||
destMaster := testlib.NewFakeTablet(t, wr, "cell1", 10,
|
||||
pbt.TabletType_MASTER, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_MASTER, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
destRdonly1 := testlib.NewFakeTablet(t, wr, "cell1", 11,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
destRdonly2 := testlib.NewFakeTablet(t, wr, "cell1", 12,
|
||||
pbt.TabletType_RDONLY, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
pbt.TabletType_RDONLY, db, testlib.TabletKeyspaceShard(t, "destination_ks", "0"))
|
||||
|
||||
for _, ft := range []*testlib.FakeTablet{sourceMaster, sourceRdonly1, sourceRdonly2, destMaster, destRdonly1, destRdonly2} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@ -19,6 +19,7 @@ import (
|
|||
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/topo/topoproto"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
"golang.org/x/net/context"
|
||||
|
@ -29,6 +30,7 @@ import (
|
|||
func TestBackupRestore(t *testing.T) {
|
||||
// Initialize our environment
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
|
@ -67,10 +69,10 @@ func TestBackupRestore(t *testing.T) {
|
|||
}
|
||||
|
||||
// create a master tablet, not started, just for shard health
|
||||
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
|
||||
// create a single tablet, set it up so we can do backups
|
||||
sourceTablet := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
sourceTablet := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
sourceTablet.FakeMysqlDaemon.ReadOnly = true
|
||||
sourceTablet.FakeMysqlDaemon.Replicating = true
|
||||
sourceTablet.FakeMysqlDaemon.CurrentMasterPosition = myproto.ReplicationPosition{
|
||||
|
@ -109,7 +111,7 @@ func TestBackupRestore(t *testing.T) {
|
|||
}
|
||||
|
||||
// create a destination tablet, set it up so we can do restores
|
||||
destTablet := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
destTablet := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
destTablet.FakeMysqlDaemon.ReadOnly = true
|
||||
destTablet.FakeMysqlDaemon.Replicating = true
|
||||
destTablet.FakeMysqlDaemon.CurrentMasterPosition = myproto.ReplicationPosition{
|
||||
|
|
|
@ -106,12 +106,12 @@ func copySchema(t *testing.T, useShardAsSource bool) {
|
|||
defer vp.Close()
|
||||
|
||||
sourceMaster := NewFakeTablet(t, wr, "cell1", 0,
|
||||
pb.TabletType_MASTER, TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pb.TabletType_MASTER, db, TabletKeyspaceShard(t, "ks", "-80"))
|
||||
sourceRdonly := NewFakeTablet(t, wr, "cell1", 1,
|
||||
pb.TabletType_RDONLY, TabletKeyspaceShard(t, "ks", "-80"))
|
||||
pb.TabletType_RDONLY, db, TabletKeyspaceShard(t, "ks", "-80"))
|
||||
|
||||
destinationMaster := NewFakeTablet(t, wr, "cell1", 10,
|
||||
pb.TabletType_MASTER, TabletKeyspaceShard(t, "ks", "-40"))
|
||||
pb.TabletType_MASTER, db, TabletKeyspaceShard(t, "ks", "-40"))
|
||||
|
||||
for _, ft := range []*FakeTablet{sourceMaster, sourceRdonly, destinationMaster} {
|
||||
ft.StartActionLoop(t, wr)
|
||||
|
|
|
@@ -14,6 +14,7 @@ import (
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/topo/topoproto"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
"github.com/youtube/vitess/go/vt/wrangler"
"github.com/youtube/vitess/go/vt/zktopo"
"golang.org/x/net/context"

@@ -22,16 +23,17 @@ import (
)

func TestEmergencyReparentShard(t *testing.T) {
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
vp := NewVtctlPipe(t, ts)
defer vp.Close()

// Create a master, a couple good slaves
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA)
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA, db)

// new master
newMaster.FakeMysqlDaemon.ReadOnly = true

@@ -142,13 +144,14 @@ func TestEmergencyReparentShard(t *testing.T) {
// to a host that is not the latest in replication position.
func TestEmergencyReparentShardMasterElectNotBest(t *testing.T) {
ctx := context.Background()
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)

// Create a master, a couple good slaves
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
moreAdvancedSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
moreAdvancedSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)

// new master
newMaster.FakeMysqlDaemon.Replicating = true
@@ -24,6 +24,7 @@ import (
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/tabletserver/tabletconn"
"github.com/youtube/vitess/go/vt/topo"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
"github.com/youtube/vitess/go/vt/wrangler"

pb "github.com/youtube/vitess/go/vt/proto/topodata"

@@ -101,7 +102,7 @@ func StartHTTPServer() TabletOption {
// has to be between 0 and 99. All the tablet info will be derived
// from that. Look at the implementation if you need values.
// Use TabletOption implementations if you need to change values at creation.
func NewFakeTablet(t *testing.T, wr *wrangler.Wrangler, cell string, uid uint32, tabletType pb.TabletType, options ...TabletOption) *FakeTablet {
func NewFakeTablet(t *testing.T, wr *wrangler.Wrangler, cell string, uid uint32, tabletType pb.TabletType, db *fakesqldb.DB, options ...TabletOption) *FakeTablet {
if uid < 0 || uid > 99 {
t.Fatalf("uid has to be between 0 and 99: %v", uid)
}

@@ -130,7 +131,7 @@ func NewFakeTablet(t *testing.T, wr *wrangler.Wrangler, cell string, uid uint32,
}

// create a FakeMysqlDaemon with the right information by default
fakeMysqlDaemon := mysqlctl.NewFakeMysqlDaemon()
fakeMysqlDaemon := mysqlctl.NewFakeMysqlDaemon(db)
fakeMysqlDaemon.MysqlPort = 3300 + int32(uid)

return &FakeTablet{
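To make the new calling convention concrete: every test touched by this change first registers a fake database and hands it to NewFakeTablet, which forwards it to mysqlctl.NewFakeMysqlDaemon. Below is a minimal sketch, assuming it lives in the same test package as the files above; the test name and the specific option values are illustrative, the helper names (TabletKeyspaceShard, StartActionLoop, StopActionLoop) and import paths are taken directly from this diff.

import (
	"testing"
	"time"

	"github.com/youtube/vitess/go/vt/logutil"
	"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
	"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
	"github.com/youtube/vitess/go/vt/wrangler"
	"github.com/youtube/vitess/go/vt/zktopo"

	pb "github.com/youtube/vitess/go/vt/proto/topodata"
)

// TestFakeTabletSetupSketch is a hypothetical test, shown only to illustrate
// the new NewFakeTablet signature; it is not part of this commit.
func TestFakeTabletSetupSketch(t *testing.T) {
	// Register the fake MySQL backend; the returned *fakesqldb.DB is what
	// NewFakeTablet now expects right after the tablet type.
	db := fakesqldb.Register()
	ts := zktopo.NewTestServer(t, []string{"cell1"})
	wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)

	// Any TabletOption (TabletKeyspaceShard, ForceInitTablet, StartHTTPServer, ...)
	// still comes after the new db argument.
	master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db,
		TabletKeyspaceShard(t, "ks", "0"))
	master.FakeMysqlDaemon.ReadOnly = false

	master.StartActionLoop(t, wr)
	defer master.StopActionLoop(t)
}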
@@ -13,6 +13,7 @@ import (
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/topo/topoproto"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
"github.com/youtube/vitess/go/vt/wrangler"
"github.com/youtube/vitess/go/vt/zktopo"
"golang.org/x/net/context"

@@ -24,15 +25,16 @@ import (
// works as planned
func TestInitMasterShard(t *testing.T) {
ctx := context.Background()
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
vp := NewVtctlPipe(t, ts)
defer vp.Close()

// Create a master, a couple good slaves
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
goodSlave1 := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
goodSlave2 := NewFakeTablet(t, wr, "cell2", 2, pb.TabletType_REPLICA)
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
goodSlave1 := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
goodSlave2 := NewFakeTablet(t, wr, "cell2", 2, pb.TabletType_REPLICA, db)

// Master: set a plausible ReplicationPosition to return,
// and expect to add entry in _vt.reparent_journal

@@ -122,10 +124,11 @@ func TestInitMasterShard(t *testing.T) {
// TestInitMasterShardChecks makes sure the safety checks work
func TestInitMasterShardChecks(t *testing.T) {
ctx := context.Background()
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)

master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)

// InitShardMaster with an unknown tablet
if err := wr.InitShardMaster(ctx, master.Tablet.Keyspace, master.Tablet.Shard, &pb.TabletAlias{

@@ -138,7 +141,7 @@ func TestInitMasterShardChecks(t *testing.T) {
// InitShardMaster with two masters in the shard, no force flag
// (master2 needs to run InitTablet with -force, as it is the second
// master in the same shard)
master2 := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER, ForceInitTablet())
master2 := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER, db, ForceInitTablet())
if err := wr.InitShardMaster(ctx, master2.Tablet.Keyspace, master2.Tablet.Shard, master2.Tablet.Alias, false /*force*/, 10*time.Second); err == nil || !strings.Contains(err.Error(), "is not the only master in the shard") {
t.Errorf("InitShardMaster with two masters returned wrong error: %v", err)
}

@@ -159,13 +162,14 @@ func TestInitMasterShardChecks(t *testing.T) {
// proceed, the action completes anyway
func TestInitMasterShardOneSlaveFails(t *testing.T) {
ctx := context.Background()
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)

// Create a master, a couple slaves
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
goodSlave := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
badSlave := NewFakeTablet(t, wr, "cell2", 2, pb.TabletType_REPLICA)
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
goodSlave := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
badSlave := NewFakeTablet(t, wr, "cell2", 2, pb.TabletType_REPLICA, db)

// Master: set a plausible ReplicationPosition to return,
// and expect to add entry in _vt.reparent_journal
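The recurring comment above ("set a plausible ReplicationPosition to return, and expect to add entry in _vt.reparent_journal") is the other half of the fixture work these tests do. As a rough, hypothetical illustration of that priming step, using only FakeMysqlDaemon fields that appear elsewhere in this diff (the reparent-journal expectation API is not shown in these hunks and is therefore omitted):

// primeMaster is a hypothetical helper, not part of this commit; it only
// shows which FakeMysqlDaemon knobs the tests above touch.
func primeMaster(master *FakeTablet) {
	master.FakeMysqlDaemon.ReadOnly = false
	master.FakeMysqlDaemon.Replicating = false
	// A "plausible" replication position; the concrete value is test-specific
	// and intentionally left zero here.
	master.FakeMysqlDaemon.CurrentMasterPosition = myproto.ReplicationPosition{}
}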
@@ -14,6 +14,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/logutil"
|
||||
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
"golang.org/x/net/context"
|
||||
|
@@ -23,17 +24,18 @@ import (
|
|||
|
||||
func TestMigrateServedFrom(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
defer vp.Close()
|
||||
|
||||
// create the source keyspace tablets
|
||||
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER,
|
||||
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, "source", "0"))
|
||||
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA,
|
||||
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA, db,
|
||||
TabletKeyspaceShard(t, "source", "0"))
|
||||
sourceRdonly := NewFakeTablet(t, wr, "cell1", 12, pb.TabletType_RDONLY,
|
||||
sourceRdonly := NewFakeTablet(t, wr, "cell1", 12, pb.TabletType_RDONLY, db,
|
||||
TabletKeyspaceShard(t, "source", "0"))
|
||||
|
||||
// create the destination keyspace, served form source
|
||||
|
@@ -50,11 +52,11 @@ func TestMigrateServedFrom(t *testing.T) {
|
|||
}
|
||||
|
||||
// create the destination keyspace tablets
|
||||
destMaster := NewFakeTablet(t, wr, "cell1", 20, pb.TabletType_MASTER,
|
||||
destMaster := NewFakeTablet(t, wr, "cell1", 20, pb.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, "dest", "0"))
|
||||
destReplica := NewFakeTablet(t, wr, "cell1", 21, pb.TabletType_REPLICA,
|
||||
destReplica := NewFakeTablet(t, wr, "cell1", 21, pb.TabletType_REPLICA, db,
|
||||
TabletKeyspaceShard(t, "dest", "0"))
|
||||
destRdonly := NewFakeTablet(t, wr, "cell1", 22, pb.TabletType_RDONLY,
|
||||
destRdonly := NewFakeTablet(t, wr, "cell1", 22, pb.TabletType_RDONLY, db,
|
||||
TabletKeyspaceShard(t, "dest", "0"))
|
||||
|
||||
// sourceRdonly will see the refresh
|
||||
|
|
|
@@ -14,6 +14,7 @@ import (
|
|||
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/topo"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
"golang.org/x/net/context"
|
||||
|
@@ -33,33 +34,34 @@ func checkShardServedTypes(t *testing.T, ts topo.Server, shard string, expected
|
|||
}
|
||||
|
||||
func TestMigrateServedTypes(t *testing.T) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
defer vp.Close()
|
||||
|
||||
// create the source shard
|
||||
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER,
|
||||
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, "ks", "0"))
|
||||
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA,
|
||||
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA, db,
|
||||
TabletKeyspaceShard(t, "ks", "0"))
|
||||
sourceRdonly := NewFakeTablet(t, wr, "cell1", 12, pb.TabletType_RDONLY,
|
||||
sourceRdonly := NewFakeTablet(t, wr, "cell1", 12, pb.TabletType_RDONLY, db,
|
||||
TabletKeyspaceShard(t, "ks", "0"))
|
||||
|
||||
// create the first destination shard
|
||||
dest1Master := NewFakeTablet(t, wr, "cell1", 20, pb.TabletType_MASTER,
|
||||
dest1Master := NewFakeTablet(t, wr, "cell1", 20, pb.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, "ks", "-80"))
|
||||
dest1Replica := NewFakeTablet(t, wr, "cell1", 21, pb.TabletType_REPLICA,
|
||||
dest1Replica := NewFakeTablet(t, wr, "cell1", 21, pb.TabletType_REPLICA, db,
|
||||
TabletKeyspaceShard(t, "ks", "-80"))
|
||||
dest1Rdonly := NewFakeTablet(t, wr, "cell1", 22, pb.TabletType_RDONLY,
|
||||
dest1Rdonly := NewFakeTablet(t, wr, "cell1", 22, pb.TabletType_RDONLY, db,
|
||||
TabletKeyspaceShard(t, "ks", "-80"))
|
||||
|
||||
// create the second destination shard
|
||||
dest2Master := NewFakeTablet(t, wr, "cell1", 30, pb.TabletType_MASTER,
|
||||
dest2Master := NewFakeTablet(t, wr, "cell1", 30, pb.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, "ks", "80-"))
|
||||
dest2Replica := NewFakeTablet(t, wr, "cell1", 31, pb.TabletType_REPLICA,
|
||||
dest2Replica := NewFakeTablet(t, wr, "cell1", 31, pb.TabletType_REPLICA, db,
|
||||
TabletKeyspaceShard(t, "ks", "80-"))
|
||||
dest2Rdonly := NewFakeTablet(t, wr, "cell1", 32, pb.TabletType_RDONLY,
|
||||
dest2Rdonly := NewFakeTablet(t, wr, "cell1", 32, pb.TabletType_RDONLY, db,
|
||||
TabletKeyspaceShard(t, "ks", "80-"))
|
||||
|
||||
// double check the shards have the right served types
|
||||
|
|
|
@@ -13,6 +13,7 @@ import (
"github.com/youtube/vitess/go/sqltypes"
"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
"github.com/youtube/vitess/go/vt/wrangler"
"github.com/youtube/vitess/go/vt/zktopo"
"golang.org/x/net/context"

@@ -23,13 +24,14 @@ import (
func TestPermissions(t *testing.T) {
// Initialize our environment
ctx := context.Background()
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
vp := NewVtctlPipe(t, ts)
defer vp.Close()

master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
replica := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
master := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
replica := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)

// mark the master inside the shard
si, err := ts.GetShard(ctx, master.Tablet.Keyspace, master.Tablet.Shard)
@@ -14,6 +14,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver"
|
||||
"github.com/youtube/vitess/go/vt/topo/topoproto"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
||||
|
@@ -21,16 +22,17 @@ import (
|
|||
)
|
||||
|
||||
func TestPlannedReparentShard(t *testing.T) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
defer vp.Close()
|
||||
|
||||
// Create a master, a couple good slaves
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA)
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA, db)
|
||||
|
||||
// new master
|
||||
newMaster.FakeMysqlDaemon.ReadOnly = true
|
||||
|
|
|
@@ -20,6 +20,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/topo/topoproto"
|
||||
"github.com/youtube/vitess/go/vt/topotools"
|
||||
"github.com/youtube/vitess/go/vt/topotools/events"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
||||
|
@ -30,17 +31,18 @@ func TestTabletExternallyReparented(t *testing.T) {
|
|||
tabletmanager.SetReparentFlags(time.Minute /* finalizeTimeout */)
|
||||
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
defer vp.Close()
|
||||
|
||||
// Create an old master, a new master, two good slaves, one bad slave
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA)
|
||||
badSlave := NewFakeTablet(t, wr, "cell1", 4, pb.TabletType_REPLICA)
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
goodSlave1 := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
goodSlave2 := NewFakeTablet(t, wr, "cell2", 3, pb.TabletType_REPLICA, db)
|
||||
badSlave := NewFakeTablet(t, wr, "cell1", 4, pb.TabletType_REPLICA, db)
|
||||
|
||||
// Add a new Cell to the Shard, that doesn't map to any read topo cell,
|
||||
// to simulate a data center being unreachable.
|
||||
|
@@ -165,13 +167,14 @@ func TestTabletExternallyReparentedWithDifferentMysqlPort(t *testing.T) {
|
|||
tabletmanager.SetReparentFlags(time.Minute /* finalizeTimeout */)
|
||||
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
// Create an old master, a new master, two good slaves, one bad slave
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
|
||||
// Now we're restarting mysql on a different port, 3301->3303
|
||||
// but without updating the Tablet record in topology.
|
||||
|
@@ -211,13 +214,14 @@ func TestTabletExternallyReparentedContinueOnUnexpectedMaster(t *testing.T) {
|
|||
tabletmanager.SetReparentFlags(time.Minute /* finalizeTimeout */)
|
||||
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
// Create an old master, a new master, two good slaves, one bad slave
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
|
||||
// On the elected master, we will respond to
|
||||
// TabletActionSlaveWasPromoted, so we need a MysqlDaemon
|
||||
|
@@ -251,13 +255,14 @@ func TestTabletExternallyReparentedFailedOldMaster(t *testing.T) {
|
|||
tabletmanager.SetReparentFlags(time.Minute /* finalizeTimeout */)
|
||||
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
// Create an old master, a new master, and a good slave.
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
oldMaster := NewFakeTablet(t, wr, "cell1", 0, pb.TabletType_MASTER, db)
|
||||
newMaster := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_REPLICA, db)
|
||||
goodSlave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
|
||||
// Reparent to a replica, and pretend the old master is not responding.
|
||||
|
||||
|
|
|
@@ -13,6 +13,7 @@ import (
|
|||
myproto "github.com/youtube/vitess/go/vt/mysqlctl/proto"
|
||||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/topo/topoproto"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
"golang.org/x/net/context"
|
||||
|
@@ -22,6 +23,7 @@ import (
|
|||
|
||||
func TestShardReplicationStatuses(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
|
@@ -29,8 +31,8 @@ func TestShardReplicationStatuses(t *testing.T) {
|
|||
if err := ts.CreateShard(ctx, "test_keyspace", "0"); err != nil {
|
||||
t.Fatalf("CreateShard failed: %v", err)
|
||||
}
|
||||
master := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER)
|
||||
slave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
master := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER, db)
|
||||
slave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
|
||||
// mark the master inside the shard
|
||||
si, err := ts.GetShard(ctx, "test_keyspace", "0")
|
||||
|
@@ -90,6 +92,7 @@ func TestShardReplicationStatuses(t *testing.T) {
|
|||
|
||||
func TestReparentTablet(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
|
||||
|
@@ -97,8 +100,8 @@ func TestReparentTablet(t *testing.T) {
|
|||
if err := ts.CreateShard(ctx, "test_keyspace", "0"); err != nil {
|
||||
t.Fatalf("CreateShard failed: %v", err)
|
||||
}
|
||||
master := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER)
|
||||
slave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA)
|
||||
master := NewFakeTablet(t, wr, "cell1", 1, pb.TabletType_MASTER, db)
|
||||
slave := NewFakeTablet(t, wr, "cell1", 2, pb.TabletType_REPLICA, db)
|
||||
|
||||
// mark the master inside the shard
|
||||
si, err := ts.GetShard(ctx, "test_keyspace", "0")
|
||||
|
|
|
@@ -14,6 +14,7 @@ import (

"github.com/youtube/vitess/go/vt/logutil"
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
"github.com/youtube/vitess/go/vt/wrangler"
"github.com/youtube/vitess/go/vt/zktopo"

@@ -49,16 +50,17 @@ func TestVersion(t *testing.T) {
wrangler.ResetDebugVarsGetVersion()

// Initialize our environment
db := fakesqldb.Register()
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
vp := NewVtctlPipe(t, ts)
defer vp.Close()

// couple tablets is enough
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER,
sourceMaster := NewFakeTablet(t, wr, "cell1", 10, pb.TabletType_MASTER, db,
TabletKeyspaceShard(t, "source", "0"),
StartHTTPServer())
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA,
sourceReplica := NewFakeTablet(t, wr, "cell1", 11, pb.TabletType_REPLICA, db,
TabletKeyspaceShard(t, "source", "0"),
StartHTTPServer())
@@ -18,6 +18,7 @@ import (
|
|||
"github.com/youtube/vitess/go/vt/tabletmanager/tmclient"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver"
|
||||
"github.com/youtube/vitess/go/vt/tabletserver/grpcqueryservice"
|
||||
"github.com/youtube/vitess/go/vt/vttest/fakesqldb"
|
||||
"github.com/youtube/vitess/go/vt/wrangler"
|
||||
"github.com/youtube/vitess/go/vt/zktopo"
|
||||
|
||||
|
@@ -77,16 +78,17 @@ func TestWaitForFilteredReplication_unhealthy(t *testing.T) {
|
|||
}
|
||||
|
||||
func waitForFilteredReplication(t *testing.T, expectedErr string, initialStats *pbq.RealtimeStats, broadcastStatsFunc func() *pbq.RealtimeStats) {
|
||||
db := fakesqldb.Register()
|
||||
ts := zktopo.NewTestServer(t, []string{"cell1", "cell2"})
|
||||
wr := wrangler.New(logutil.NewConsoleLogger(), ts, tmclient.NewTabletManagerClient(), time.Second)
|
||||
vp := NewVtctlPipe(t, ts)
|
||||
defer vp.Close()
|
||||
|
||||
// source of the filtered replication. We don't start its loop because we don't connect to it.
|
||||
source := NewFakeTablet(t, wr, "cell1", 0, pbt.TabletType_MASTER,
|
||||
source := NewFakeTablet(t, wr, "cell1", 0, pbt.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, keyspace, "0"))
|
||||
// dest is the master of the dest shard which receives filtered replication events.
|
||||
dest := NewFakeTablet(t, wr, "cell1", 1, pbt.TabletType_MASTER,
|
||||
dest := NewFakeTablet(t, wr, "cell1", 1, pbt.TabletType_MASTER, db,
|
||||
TabletKeyspaceShard(t, keyspace, destShard))
|
||||
dest.StartActionLoop(t, wr)
|
||||
defer dest.StopActionLoop(t)
|
||||
|
|
|
@@ -96,6 +96,18 @@
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.4</version>
<executions>
<execution>
<goals>
<goal>test-jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
@@ -0,0 +1,359 @@
|
|||
package com.youtube.vitess.client;
|
||||
|
||||
import com.google.common.collect.ImmutableMap;
|
||||
import com.google.protobuf.ByteString;
|
||||
|
||||
import com.youtube.vitess.proto.Query.QueryResult;
|
||||
import com.youtube.vitess.proto.Topodata.KeyRange;
|
||||
import com.youtube.vitess.proto.Topodata.KeyspaceIdType;
|
||||
import com.youtube.vitess.proto.Topodata.ShardReference;
|
||||
import com.youtube.vitess.proto.Topodata.SrvKeyspace;
|
||||
import com.youtube.vitess.proto.Topodata.SrvKeyspace.KeyspacePartition;
|
||||
import com.youtube.vitess.proto.Topodata.TabletType;
|
||||
import com.youtube.vitess.proto.Vtgate.SplitQueryResponse;
|
||||
import com.youtube.vitess.proto.Vtrpc.CallerID;
|
||||
|
||||
import org.joda.time.Duration;
|
||||
import org.junit.Assert;
|
||||
import org.junit.Before;
|
||||
import org.junit.BeforeClass;
|
||||
import org.junit.Test;
|
||||
|
||||
import java.util.Arrays;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* RpcClientTest tests a given implementation of RpcClient
|
||||
* against a mock vtgate server (go/cmd/vtgateclienttest).
|
||||
*
|
||||
* Each implementation should extend this class and add a @BeforeClass method that starts the
|
||||
* vtgateclienttest server with the necessary parameters, and then sets 'client'.
|
||||
*/
|
||||
public abstract class RpcClientTest {
|
||||
protected static RpcClient client;
|
||||
protected static String vtRoot;
|
||||
|
||||
private Context ctx;
|
||||
private VTGateConn conn;
|
||||
|
||||
@BeforeClass
|
||||
public static void setUpBeforeSubclass() {
|
||||
vtRoot = System.getenv("VTROOT");
|
||||
if (vtRoot == null) {
|
||||
throw new RuntimeException("cannot find env variable VTROOT; make sure to source dev.env");
|
||||
}
|
||||
}
|
||||
|
||||
@Before
|
||||
public void setUp() {
|
||||
ctx = Context.getDefault().withDeadlineAfter(Duration.millis(5000)).withCallerId(CALLER_ID);
|
||||
conn = new VTGateConn(client);
|
||||
}
|
||||
|
||||
private static final String ECHO_PREFIX = "echo://";
|
||||
|
||||
private static final String QUERY = "test query";
|
||||
private static final String KEYSPACE = "test_keyspace";
|
||||
|
||||
private static final List<String> SHARDS = Arrays.asList("-80", "80-");
|
||||
private static final String SHARDS_ECHO = "[-80 80-]";
|
||||
|
||||
private static final List<byte[]> KEYSPACE_IDS =
|
||||
Arrays.asList(new byte[] {1, 2, 3, 4}, new byte[] {5, 6, 7, 8});
|
||||
private static final String KEYSPACE_IDS_ECHO = "[[1 2 3 4] [5 6 7 8]]";
|
||||
private static final String KEYSPACE_IDS_ECHO_OLD = "[01020304 05060708]";
|
||||
|
||||
private static final List<KeyRange> KEY_RANGES = Arrays.asList(
|
||||
KeyRange.newBuilder()
|
||||
.setStart(ByteString.copyFrom(new byte[] {1, 2, 3, 4}))
|
||||
.setEnd(ByteString.copyFrom(new byte[] {5, 6, 7, 8}))
|
||||
.build());
|
||||
private static final String KEY_RANGES_ECHO =
|
||||
"[start:\"\\001\\002\\003\\004\" end:\"\\005\\006\\007\\010\" ]";
|
||||
|
||||
private static final Map<byte[], Object> ENTITY_KEYSPACE_IDS =
|
||||
new ImmutableMap.Builder<byte[], Object>()
|
||||
.put(new byte[] {1, 2, 3}, 123)
|
||||
.put(new byte[] {4, 5, 6}, 2.0)
|
||||
.put(new byte[] {7, 8, 9}, new byte[] {1, 2, 3})
|
||||
.build();
|
||||
private static final String ENTITY_KEYSPACE_IDS_ECHO =
|
||||
"[xid_type:TYPE_INT xid_int:123 keyspace_id:\"\\001\\002\\003\" xid_type:TYPE_FLOAT xid_float:2 keyspace_id:\"\\004\\005\\006\" xid_type:TYPE_BYTES xid_bytes:\"\\001\\002\\003\" keyspace_id:\"\\007\\010\\t\" ]";
|
||||
|
||||
private static final TabletType TABLET_TYPE = TabletType.REPLICA;
|
||||
private static final String TABLET_TYPE_ECHO = TABLET_TYPE.toString();
|
||||
|
||||
private static final Map<String, Object> BIND_VARS =
|
||||
new ImmutableMap.Builder<String, Object>()
|
||||
.put("int", 123)
|
||||
.put("float", 2.0)
|
||||
.put("bytes", new byte[] {1, 2, 3})
|
||||
.build();
|
||||
private static final String BIND_VARS_ECHO = "map[bytes:[1 2 3] float:2 int:123]";
|
||||
|
||||
private static final String SESSION_ECHO = "InTransaction: true, ShardSession: []";
|
||||
|
||||
private static final CallerID CALLER_ID =
|
||||
CallerID.newBuilder()
|
||||
.setPrincipal("test_principal")
|
||||
.setComponent("test_component")
|
||||
.setSubcomponent("test_subcomponent")
|
||||
.build();
|
||||
private static final String CALLER_ID_ECHO =
|
||||
"principal:\"test_principal\" component:\"test_component\" subcomponent:\"test_subcomponent\" ";
|
||||
|
||||
private static Map<String, String> getEcho(QueryResult result) {
|
||||
Map<String, String> fields = new HashMap<String, String>();
|
||||
for (int i = 0; i < result.getFieldsCount(); i++) {
|
||||
fields.put(result.getFields(i).getName(), result.getRows(0).getValues(i).toStringUtf8());
|
||||
}
|
||||
return fields;
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoExecute() throws Exception {
|
||||
Map<String, String> echo;
|
||||
|
||||
echo = getEcho(conn.execute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.executeShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeKeyspaceIds(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeKeyRanges(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeEntityIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, "column1",
|
||||
ENTITY_KEYSPACE_IDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals("column1", echo.get("entityColumnName"));
|
||||
Assert.assertEquals(ENTITY_KEYSPACE_IDS_ECHO, echo.get("entityIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeBatchShards(ctx, Arrays.asList(Proto.bindShardQuery(KEYSPACE, SHARDS,
|
||||
ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals("true", echo.get("asTransaction"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.executeBatchKeyspaceIds(ctx, Arrays.asList(Proto.bindKeyspaceIdQuery(KEYSPACE,
|
||||
KEYSPACE_IDS, ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO_OLD, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals("true", echo.get("asTransaction"));
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoStreamExecute() throws Exception {
|
||||
Map<String, String> echo;
|
||||
|
||||
echo = getEcho(conn.streamExecute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.streamExecuteShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE)
|
||||
.next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.streamExecuteKeyspaceIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS,
|
||||
BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.streamExecuteKeyRanges(ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES,
|
||||
BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoTransactionExecute() throws Exception {
|
||||
Map<String, String> echo;
|
||||
|
||||
VTGateTx tx = conn.begin(ctx);
|
||||
|
||||
echo = getEcho(tx.execute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE, true));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("notInTransaction"));
|
||||
|
||||
echo = getEcho(
|
||||
tx.executeShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE, true));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("notInTransaction"));
|
||||
|
||||
echo = getEcho(tx.executeKeyspaceIds(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS, BIND_VARS, TABLET_TYPE, true));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("notInTransaction"));
|
||||
|
||||
echo = getEcho(tx.executeKeyRanges(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES, BIND_VARS, TABLET_TYPE, true));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("notInTransaction"));
|
||||
|
||||
echo = getEcho(tx.executeEntityIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, "column1",
|
||||
ENTITY_KEYSPACE_IDS, BIND_VARS, TABLET_TYPE, true));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals("column1", echo.get("entityColumnName"));
|
||||
Assert.assertEquals(ENTITY_KEYSPACE_IDS_ECHO, echo.get("entityIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("notInTransaction"));
|
||||
|
||||
tx.rollback(ctx);
|
||||
tx = conn.begin(ctx);
|
||||
|
||||
echo = getEcho(tx.executeBatchShards(ctx, Arrays.asList(Proto.bindShardQuery(KEYSPACE, SHARDS,
|
||||
ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("asTransaction"));
|
||||
|
||||
echo =
|
||||
getEcho(tx.executeBatchKeyspaceIds(ctx, Arrays.asList(Proto.bindKeyspaceIdQuery(KEYSPACE,
|
||||
KEYSPACE_IDS, ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO_OLD, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
Assert.assertEquals(SESSION_ECHO, echo.get("session"));
|
||||
Assert.assertEquals("true", echo.get("asTransaction"));
|
||||
|
||||
tx.commit(ctx);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoSplitQuery() throws Exception {
|
||||
SplitQueryResponse.Part expected =
|
||||
SplitQueryResponse.Part.newBuilder()
|
||||
.setQuery(Proto.bindQuery(ECHO_PREFIX + QUERY + ":split_column:123", BIND_VARS))
|
||||
.setKeyRangePart(
|
||||
SplitQueryResponse.KeyRangePart.newBuilder().setKeyspace(KEYSPACE).build())
|
||||
.build();
|
||||
SplitQueryResponse.Part actual =
|
||||
conn.splitQuery(ctx, KEYSPACE, ECHO_PREFIX + QUERY, BIND_VARS, "split_column", 123).get(0);
|
||||
Assert.assertEquals(expected, actual);
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testGetSrvKeyspace() throws Exception {
|
||||
SrvKeyspace expected =
|
||||
SrvKeyspace.newBuilder()
|
||||
.addPartitions(
|
||||
KeyspacePartition.newBuilder()
|
||||
.setServedType(TabletType.REPLICA)
|
||||
.addShardReferences(
|
||||
ShardReference.newBuilder()
|
||||
.setName("shard0")
|
||||
.setKeyRange(
|
||||
KeyRange.newBuilder()
|
||||
.setStart(
|
||||
ByteString.copyFrom(new byte[] {0x40, 0, 0, 0, 0, 0, 0, 0}))
|
||||
.setEnd(ByteString.copyFrom(
|
||||
new byte[] {(byte) 0x80, 0, 0, 0, 0, 0, 0, 0}))
|
||||
.build())
|
||||
.build())
|
||||
.build())
|
||||
.setShardingColumnName("sharding_column_name")
|
||||
.setShardingColumnType(KeyspaceIdType.UINT64)
|
||||
.addServedFrom(
|
||||
SrvKeyspace.ServedFrom.newBuilder()
|
||||
.setTabletType(TabletType.MASTER)
|
||||
.setKeyspace("other_keyspace")
|
||||
.build())
|
||||
.setSplitShardCount(128)
|
||||
.build();
|
||||
SrvKeyspace actual = conn.getSrvKeyspace(ctx, "big");
|
||||
Assert.assertEquals(expected, actual);
|
||||
}
|
||||
}
|
|
@@ -16,6 +16,13 @@
<artifactId>client</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>com.youtube.vitess</groupId>
<artifactId>client</artifactId>
<version>1.0-SNAPSHOT</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.grpc</groupId>
<artifactId>grpc-all</artifactId>
@@ -1,7 +1,6 @@
package com.youtube.vitess.client.grpc;

import com.youtube.vitess.client.Context;
import com.youtube.vitess.client.Proto;
import com.youtube.vitess.client.RpcClient;
import com.youtube.vitess.client.StreamIterator;
import com.youtube.vitess.client.VitessException;

@@ -218,8 +217,7 @@ public class GrpcClient implements RpcClient {
}

@Override
public BeginResponse begin(Context ctx, BeginRequest request)
throws VitessRpcException {
public BeginResponse begin(Context ctx, BeginRequest request) throws VitessRpcException {
try (GrpcContext gctx = new GrpcContext(ctx)) {
return blockingStub.begin(request);
} catch (Exception e) {

@@ -228,8 +226,7 @@ public class GrpcClient implements RpcClient {
}

@Override
public CommitResponse commit(Context ctx, CommitRequest request)
throws VitessRpcException {
public CommitResponse commit(Context ctx, CommitRequest request) throws VitessRpcException {
try (GrpcContext gctx = new GrpcContext(ctx)) {
return blockingStub.commit(request);
} catch (Exception e) {

@@ -238,8 +235,7 @@ public class GrpcClient implements RpcClient {
}

@Override
public RollbackResponse rollback(Context ctx, RollbackRequest request)
throws VitessRpcException {
public RollbackResponse rollback(Context ctx, RollbackRequest request) throws VitessRpcException {
try (GrpcContext gctx = new GrpcContext(ctx)) {
return blockingStub.rollback(request);
} catch (Exception e) {
@@ -1,52 +1,25 @@
package com.youtube.vitess.client.grpc;

import com.google.common.collect.ImmutableMap;
import com.google.protobuf.ByteString;

import com.youtube.vitess.client.Context;
import com.youtube.vitess.client.Proto;
import com.youtube.vitess.client.RpcClient;
import com.youtube.vitess.client.VTGateConn;
import com.youtube.vitess.client.VTGateTx;
import com.youtube.vitess.proto.Query.QueryResult;
import com.youtube.vitess.proto.Topodata.KeyRange;
import com.youtube.vitess.proto.Topodata.KeyspaceIdType;
import com.youtube.vitess.proto.Topodata.ShardReference;
import com.youtube.vitess.proto.Topodata.SrvKeyspace;
import com.youtube.vitess.proto.Topodata.SrvKeyspace.KeyspacePartition;
import com.youtube.vitess.proto.Topodata.TabletType;
import com.youtube.vitess.proto.Vtgate.SplitQueryResponse;
import com.youtube.vitess.proto.Vtrpc.CallerID;
import com.youtube.vitess.client.RpcClientTest;

import org.joda.time.Duration;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
* This tests GrpcClient with a mock vtgate server (go/cmd/vtgateclienttest).
*/
public class GrpcClientTest {
public class GrpcClientTest extends RpcClientTest {
private static Process vtgateclienttest;
private static int port;
private static RpcClient client;

@BeforeClass
public static void setUpBeforeClass() throws Exception {
String vtRoot = System.getenv("VTROOT");
if (vtRoot == null) {
throw new RuntimeException("cannot find env variable VTROOT; make sure to source dev.env");
}

ServerSocket socket = new ServerSocket(0);
port = socket.getLocalPort();
socket.close();
@@ -70,313 +43,4 @@ public class GrpcClientTest {
|
|||
vtgateclienttest.destroy();
|
||||
}
|
||||
}
|
||||
|
||||
private Context ctx;
|
||||
private VTGateConn conn;
|
||||
|
||||
@Before
|
||||
public void setUp() {
|
||||
ctx = Context.getDefault().withDeadlineAfter(Duration.millis(5000)).withCallerId(CALLER_ID);
|
||||
conn = new VTGateConn(client);
|
||||
}
|
||||
|
||||
private static final String ECHO_PREFIX = "echo://";
|
||||
|
||||
private static final String QUERY = "test query";
|
||||
private static final String KEYSPACE = "test_keyspace";
|
||||
|
||||
private static final List<String> SHARDS = Arrays.asList("-80", "80-");
|
||||
private static final String SHARDS_ECHO = "[-80 80-]";
|
||||
|
||||
private static final List<byte[]> KEYSPACE_IDS =
|
||||
Arrays.asList(new byte[] {1, 2, 3, 4}, new byte[] {5, 6, 7, 8});
|
||||
private static final String KEYSPACE_IDS_ECHO = "[[1 2 3 4] [5 6 7 8]]";
|
||||
private static final String KEYSPACE_IDS_ECHO_OLD = "[01020304 05060708]";
|
||||
|
||||
private static final List<KeyRange> KEY_RANGES = Arrays.asList(
|
||||
KeyRange.newBuilder()
|
||||
.setStart(ByteString.copyFrom(new byte[] {1, 2, 3, 4}))
|
||||
.setEnd(ByteString.copyFrom(new byte[] {5, 6, 7, 8}))
|
||||
.build());
|
||||
private static final String KEY_RANGES_ECHO = "[start:\"\\001\\002\\003\\004\" end:\"\\005\\006\\007\\010\" ]";
|
||||
|
||||
private static final Map<byte[], Object> ENTITY_KEYSPACE_IDS =
|
||||
new ImmutableMap.Builder<byte[], Object>()
|
||||
.put(new byte[] {1, 2, 3}, 123)
|
||||
.put(new byte[] {4, 5, 6}, 2.0)
|
||||
.put(new byte[] {7, 8, 9}, new byte[] {1, 2, 3})
|
||||
.build();
|
||||
private static final String ENTITY_KEYSPACE_IDS_ECHO =
|
||||
"[xid_type:TYPE_INT xid_int:123 keyspace_id:\"\\001\\002\\003\" xid_type:TYPE_FLOAT xid_float:2 keyspace_id:\"\\004\\005\\006\" xid_type:TYPE_BYTES xid_bytes:\"\\001\\002\\003\" keyspace_id:\"\\007\\010\\t\" ]";
|
||||
|
||||
private static final TabletType TABLET_TYPE = TabletType.REPLICA;
|
||||
private static final String TABLET_TYPE_ECHO = TABLET_TYPE.toString();
|
||||
|
||||
private static final Map<String, Object> BIND_VARS =
|
||||
new ImmutableMap.Builder<String, Object>()
|
||||
.put("int", 123)
|
||||
.put("float", 2.0)
|
||||
.put("bytes", new byte[] {1, 2, 3})
|
||||
.build();
|
||||
private static final String BIND_VARS_ECHO = "map[bytes:[1 2 3] float:2 int:123]";
|
||||
|
||||
private static final String SESSION_ECHO = "InTransaction: true, ShardSession: []";
|
||||
|
||||
private static final CallerID CALLER_ID =
|
||||
CallerID.newBuilder()
|
||||
.setPrincipal("test_principal")
|
||||
.setComponent("test_component")
|
||||
.setSubcomponent("test_subcomponent")
|
||||
.build();
|
||||
private static final String CALLER_ID_ECHO =
|
||||
"principal:\"test_principal\" component:\"test_component\" subcomponent:\"test_subcomponent\" ";
|
||||
|
||||
private static Map<String, String> getEcho(QueryResult result) {
|
||||
Map<String, String> fields = new HashMap<String, String>();
|
||||
for (int i = 0; i < result.getFieldsCount(); i++) {
|
||||
fields.put(result.getFields(i).getName(), result.getRows(0).getValues(i).toStringUtf8());
|
||||
}
|
||||
return fields;
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoExecute() throws Exception {
|
||||
Map<String, String> echo;
|
||||
|
||||
echo = getEcho(conn.execute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.executeShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeKeyspaceIds(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeKeyRanges(
|
||||
ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeEntityIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, "column1",
|
||||
ENTITY_KEYSPACE_IDS, BIND_VARS, TABLET_TYPE));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals("column1", echo.get("entityColumnName"));
|
||||
Assert.assertEquals(ENTITY_KEYSPACE_IDS_ECHO, echo.get("entityIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.executeBatchShards(ctx, Arrays.asList(Proto.bindShardQuery(KEYSPACE, SHARDS,
|
||||
ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.executeBatchKeyspaceIds(ctx, Arrays.asList(Proto.bindKeyspaceIdQuery(KEYSPACE,
|
||||
KEYSPACE_IDS, ECHO_PREFIX + QUERY, BIND_VARS)),
|
||||
TABLET_TYPE, true).get(0));
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO_OLD, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
}
|
||||
|
||||
@Test
|
||||
public void testEchoStreamExecute() throws Exception {
|
||||
Map<String, String> echo;
|
||||
|
||||
echo = getEcho(conn.streamExecute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(
|
||||
conn.streamExecuteShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE)
|
||||
.next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.streamExecuteKeyspaceIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS,
|
||||
BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
|
||||
echo = getEcho(conn.streamExecuteKeyRanges(ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES,
|
||||
BIND_VARS, TABLET_TYPE).next());
|
||||
Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
|
||||
Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
|
||||
Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
|
||||
Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
|
||||
Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
|
||||
Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
|
||||
}
|
||||
|
||||
  @Test
  public void testEchoTransactionExecute() throws Exception {
    Map<String, String> echo;

    VTGateTx tx = conn.begin(ctx);

    echo = getEcho(tx.execute(ctx, ECHO_PREFIX + QUERY, BIND_VARS, TABLET_TYPE, true));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));
    Assert.assertEquals("true", echo.get("notInTransaction"));

    echo = getEcho(
        tx.executeShards(ctx, ECHO_PREFIX + QUERY, KEYSPACE, SHARDS, BIND_VARS, TABLET_TYPE, true));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));
    Assert.assertEquals("true", echo.get("notInTransaction"));

    echo = getEcho(tx.executeKeyspaceIds(
        ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEYSPACE_IDS, BIND_VARS, TABLET_TYPE, true));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals(KEYSPACE_IDS_ECHO, echo.get("keyspaceIds"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));
    Assert.assertEquals("true", echo.get("notInTransaction"));

    echo = getEcho(tx.executeKeyRanges(
        ctx, ECHO_PREFIX + QUERY, KEYSPACE, KEY_RANGES, BIND_VARS, TABLET_TYPE, true));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals(KEY_RANGES_ECHO, echo.get("keyRanges"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));
    Assert.assertEquals("true", echo.get("notInTransaction"));

    echo = getEcho(tx.executeEntityIds(ctx, ECHO_PREFIX + QUERY, KEYSPACE, "column1",
        ENTITY_KEYSPACE_IDS, BIND_VARS, TABLET_TYPE, true));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals("column1", echo.get("entityColumnName"));
    Assert.assertEquals(ENTITY_KEYSPACE_IDS_ECHO, echo.get("entityIds"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));
    Assert.assertEquals("true", echo.get("notInTransaction"));

    tx.rollback(ctx);
    tx = conn.begin(ctx);

    echo = getEcho(tx.executeBatchShards(ctx, Arrays.asList(Proto.bindShardQuery(KEYSPACE, SHARDS,
        ECHO_PREFIX + QUERY, BIND_VARS)),
        TABLET_TYPE, true).get(0));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals(SHARDS_ECHO, echo.get("shards"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));

    echo =
        getEcho(tx.executeBatchKeyspaceIds(ctx, Arrays.asList(Proto.bindKeyspaceIdQuery(KEYSPACE,
            KEYSPACE_IDS, ECHO_PREFIX + QUERY, BIND_VARS)),
            TABLET_TYPE, true).get(0));
    Assert.assertEquals(CALLER_ID_ECHO, echo.get("callerId"));
    Assert.assertEquals(ECHO_PREFIX + QUERY, echo.get("query"));
    Assert.assertEquals(KEYSPACE, echo.get("keyspace"));
    Assert.assertEquals(KEYSPACE_IDS_ECHO_OLD, echo.get("keyspaceIds"));
    Assert.assertEquals(BIND_VARS_ECHO, echo.get("bindVars"));
    Assert.assertEquals(TABLET_TYPE_ECHO, echo.get("tabletType"));
    Assert.assertEquals(SESSION_ECHO, echo.get("session"));

    tx.commit(ctx);
  }

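  // splitQuery is expected to return one SplitQueryResponse.Part per split, with the split
  // column and split count folded into the echoed query string and the keyspace set on the
  // key-range part.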
  @Test
  public void testEchoSplitQuery() throws Exception {
    SplitQueryResponse.Part expected =
        SplitQueryResponse.Part.newBuilder()
            .setQuery(Proto.bindQuery(ECHO_PREFIX + QUERY + ":split_column:123", BIND_VARS))
            .setKeyRangePart(
                SplitQueryResponse.KeyRangePart.newBuilder().setKeyspace(KEYSPACE).build())
            .build();
    SplitQueryResponse.Part actual =
        conn.splitQuery(ctx, KEYSPACE, ECHO_PREFIX + QUERY, BIND_VARS, "split_column", 123).get(0);
    Assert.assertEquals(expected, actual);
  }

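  // getSrvKeyspace should return the full SrvKeyspace record for the "big" test keyspace:
  // one REPLICA partition covering the key range [0x40..., 0x80...), the sharding column
  // name and type, a served-from entry, and the split shard count.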
  @Test
  public void testGetSrvKeyspace() throws Exception {
    SrvKeyspace expected =
        SrvKeyspace.newBuilder()
            .addPartitions(
                KeyspacePartition.newBuilder()
                    .setServedType(TabletType.REPLICA)
                    .addShardReferences(
                        ShardReference.newBuilder()
                            .setName("shard0")
                            .setKeyRange(
                                KeyRange.newBuilder()
                                    .setStart(
                                        ByteString.copyFrom(new byte[] {0x40, 0, 0, 0, 0, 0, 0, 0}))
                                    .setEnd(ByteString.copyFrom(
                                        new byte[] {(byte) 0x80, 0, 0, 0, 0, 0, 0, 0}))
                                    .build())
                            .build())
                    .build())
            .setShardingColumnName("sharding_column_name")
            .setShardingColumnType(KeyspaceIdType.UINT64)
            .addServedFrom(
                SrvKeyspace.ServedFrom.newBuilder()
                    .setTabletType(TabletType.MASTER)
                    .setKeyspace("other_keyspace")
                    .build())
            .setSplitShardCount(128)
            .build();
    SrvKeyspace actual = conn.getSrvKeyspace(ctx, "big");
    Assert.assertEquals(expected, actual);
  }
}
@@ -20,6 +20,10 @@ public interface RpcClient {

  public QueryResponse execute(Query query) throws ConnectionException;

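  // Batch variant of execute(): issues a list of bound queries in a single call, carrying the
  // session state; asTransaction is assumed here to ask the server to wrap the batch in its
  // own transaction.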
  public List<QueryResponse> executeBatchKeyspaceIds(
      List<Query> queries, String tabletType, Object session, boolean asTransaction)
      throws ConnectionException;

  public QueryResult streamNext(List<Field> fields) throws ConnectionException;

  public SplitQueryResponse splitQuery(SplitQueryRequest request) throws ConnectionException;
Some files were not shown because too many files changed in this diff.