Google Bigtable Driver for YCSB

This driver provides a YCSB workload binding for Google's hosted Bigtable, the inspiration for a number of key-value stores such as HBase and Cassandra. The Bigtable Java client provides both a Protobuf-based gRPC API and an HBase-compatible API. This binding implements the Protobuf-based gRPC API to test the native client. To test Bigtable through the HBase API, see the hbase1 binding.

Quickstart

1. Setup a Bigtable Instance

Login to the Google Cloud Console and follow the Creating Instance steps. Make a note of your instance ID and project ID.

2. Launch the Bigtable Shell

From the Cloud Console, launch a shell and follow the Quickstart up to step 4 where you launch the HBase shell.

3. Create a Table

For best results, use the pre-splitting strategy recommended in HBASE-4163:

hbase(main):001:0> n_splits = 200 # HBase recommends (10 * number of regionservers)
hbase(main):002:0> create 'usertable', 'cf', {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
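
As a rough sanity check, the split-point formula in the shell command above can be reproduced outside HBase. The following is an illustrative sketch (not part of YCSB) using the same integer arithmetic as the Ruby expression:

```python
# Sketch: reproduce the split keys from the HBase shell command above,
# so the row-key boundaries can be inspected before creating the table.
n_splits = 200  # HBase recommends (10 * number of regionservers)

# Same formula as the shell's "user#{1000 + i*(9999-1000)/n_splits}";
# // matches Ruby's integer division.
splits = ["user%d" % (1000 + i * (9999 - 1000) // n_splits)
          for i in range(1, n_splits + 1)]

print(splits[0], splits[-1])  # first and last split keys
```

The split keys partition the `user1000`–`user9999` key space that YCSB's default workloads generate, so load is spread evenly across tablets from the start.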

Make a note of the column family; in this example it's `cf`.

4. Download JSON Credentials

Follow these instructions for Generating a JSON key and save it to your host.
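
The downloaded credential is a JSON file. A typical service-account key looks roughly like the following (values redacted and placeholders used; the exact set of fields may vary):

```json
{
  "type": "service_account",
  "project_id": "my-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "ycsb-test@my-project-id.iam.gserviceaccount.com",
  "client_id": "..."
}
```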

5. Load a Workload

Switch to the root of the YCSB repo, choose the workload you want to run, and load it first. When invoking via the CLI you must supply the column family, project, instance, and credential properties for the load.

bin/ycsb load googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada

Make sure to replace the variables in angle brackets above with the proper values for your instance. Additional configuration parameters are listed below.

The load step only executes inserts into the datastore. After loading data, run the same workload to mix reads with writes.

bin/ycsb run googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada

Configuration Options

The following options can be configured via the CLI (using the -p parameter) or in hbase-site.xml (add the HBase configuration directory to YCSB's class path via the CLI). See the Cloud Bigtable Client project for additional tuning parameters.
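
For example, the required properties from the list below could be placed in hbase-site.xml instead of being passed on the command line (the values here are placeholders for your instance):

```xml
<!-- hbase-site.xml: replace the values with your own project, instance, and key path -->
<configuration>
  <property>
    <name>google.bigtable.project.id</name>
    <value>my-project-id</value>
  </property>
  <property>
    <name>google.bigtable.instance.id</name>
    <value>my-instance-id</value>
  </property>
  <property>
    <name>google.bigtable.auth.json.keyfile</name>
    <value>/path/to/key.json</value>
  </property>
</configuration>
```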

  • columnfamily: (Required) The Bigtable column family to target.
  • google.bigtable.project.id: (Required) The ID of a Bigtable project.
  • google.bigtable.instance.id: (Required) The name of a Bigtable instance.
  • google.bigtable.auth.service.account.enable: Whether or not to authenticate with a service account. The default is true.
  • google.bigtable.auth.json.keyfile: (Required) A service account key for authentication.
  • debug: If true, prints debug information to standard out. The default is false.
  • clientbuffering: Whether or not to use client-side buffering and batching of write operations. Enabling this can significantly improve performance. The default is true.