Merge branch 'master' into initial-es5

* master:
  [core] Fixing squid:S1319 -  Declarations should use Java collection interfaces such as "List" rather than specific implementation classes such as "LinkedList". (manolama - updated bindings added since the PR)
  [core] Use longs instead of ints to support larger key spaces. Changed int to long in Measurements code to support large scale workloads. (manolama - fixed checkstyle errors)
  [core] Export totalHistogram for HdrHistogram measurement
  [core] Add an operation enum to the Workload class. This can eventually be used to replace the strings.
  [core] Add a Fisher-Yates array shuffle to the Utils class.
  [core] Fix an issue where the threadid and threadCount were not passed to the workload client threads. Had to use setters to get around the checkstyle complaint of having too many parameters.
  Upgrading googlebigtable to the latest version. The API used by googlebigtable has had quite a bit of churn.  This is the minimal set of changes required for the upgrade.
  [geode] Update to apache-geode 1.2.0 release
  [core] Update to use newer version of Google Cloud Spanner client and associated required change
  [core] Add a reset() method to the ByteIterator abstract and implementations for each of the children. This lets us re-use byte iterators if we need to access the values again (when applicable).
  [hbase12] Add HBase 1.2+ specific client that relies on the shaded client artifact provided by those versions. (#970)
  [distro] Refresh Apache licence text (#969)
  [memcached] support binary protocol (#965)
  [accumulo] A general "refresh" to the Accumulo binding (#947)
  [cloudspanner] Add binding for Google's Cloud Spanner. (#939)
  [aerospike] Change the write policy to REPLACE_ONLY (#937)
This commit is contained in:
Jason Tedor 2017-08-07 07:58:02 +02:00
Parents c52c4385b1 cf5d2ca5f5
Commit 4c84ffa3e9
94 changed files: 1980 additions and 652 deletions


@@ -1,163 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -36,7 +36,42 @@ Git clone YCSB and compile:
cd YCSB
mvn -pl com.yahoo.ycsb:accumulo-binding -am clean package
### 3. Load Data and Run Tests
### 3. Create the Accumulo table
By default, YCSB uses a table with the name "usertable". Users must create this table before loading
data into Accumulo. For maximum Accumulo performance, the Accumulo table must be pre-split. A simple
Ruby script, based on the HBase README, can generate adequate split points. Tens of tablets per
TabletServer is a good starting point. Unless otherwise specified, the following commands should run
on any version of Accumulo.
$ echo 'num_splits = 20; puts (1..num_splits).map {|i| "user#{1000+i*(9999-1000)/num_splits}"}' | ruby > /tmp/splits.txt
$ accumulo shell -u <user> -p <password> -e "createtable usertable"
$ accumulo shell -u <user> -p <password> -e "addsplits -t usertable -sf /tmp/splits.txt"
$ accumulo shell -u <user> -p <password> -e "config -t usertable -s table.cache.block.enable=true"
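If Ruby is not available, the same split points can be produced with a short Java sketch (the `GenerateSplits` class name is illustrative, not part of YCSB; it prints the splits to stdout rather than writing /tmp/splits.txt):

```java
import java.util.ArrayList;
import java.util.List;

public class GenerateSplits {
  // Mirrors the Ruby one-liner: 1000 + i * (9999 - 1000) / numSplits for i in 1..numSplits,
  // using integer division, prefixed with "user".
  static List<String> splits(int numSplits) {
    List<String> out = new ArrayList<>();
    for (int i = 1; i <= numSplits; i++) {
      out.add("user" + (1000 + (long) i * (9999 - 1000) / numSplits));
    }
    return out;
  }

  public static void main(String[] args) {
    for (String split : splits(20)) {
      System.out.println(split);
    }
  }
}
```

Redirect the output to a file and pass it to `addsplits -sf` as above.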
Additionally, there are some other configuration properties which can increase performance. These
can be set on the Accumulo table via the shell after it is created. Setting the table durability
to `flush` relaxes the constraints on data durability during hard power-outages (avoids calls
to fsync). Accumulo defaults table compression to `gzip` which is not particularly fast; `snappy`
is a faster and similarly efficient option. The mutation queue property controls how many writes
Accumulo will buffer in memory before performing a flush; this property should be set relative
to the amount of JVM heap the TabletServers are given.
Please note that the `table.durability` and `tserver.total.mutation.queue.max` properties only
exist in Accumulo 1.7 and later; there are no concise replacements for them in earlier versions.
accumulo> config -s table.durability=flush
accumulo> config -s tserver.total.mutation.queue.max=256M
accumulo> config -t usertable -s table.file.compress.type=snappy
On repeated data loads, the following commands may be helpful to reset the table's state quickly.
accumulo> createtable tmp --copy-splits usertable --copy-config usertable
accumulo> deletetable --force usertable
accumulo> renametable tmp usertable
accumulo> compact --wait -t accumulo.metadata
### 4. Load Data and Run Tests
Load the data:


@@ -18,17 +18,25 @@
package com.yahoo.ycsb.db.accumulo;
import com.yahoo.ycsb.ByteArrayByteIterator;
import com.yahoo.ycsb.ByteIterator;
import com.yahoo.ycsb.DB;
import com.yahoo.ycsb.DBException;
import com.yahoo.ycsb.Status;
import static java.nio.charset.StandardCharsets.UTF_8;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.SortedMap;
import java.util.Vector;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import org.apache.accumulo.core.client.AccumuloException;
import org.apache.accumulo.core.client.AccumuloSecurityException;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.TableNotFoundException;
@@ -39,16 +47,16 @@ import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.user.WholeRowIterator;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.util.CleanUp;
import org.apache.hadoop.io.Text;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.Vector;
import java.util.concurrent.TimeUnit;
import com.yahoo.ycsb.ByteArrayByteIterator;
import com.yahoo.ycsb.ByteIterator;
import com.yahoo.ycsb.DB;
import com.yahoo.ycsb.DBException;
import com.yahoo.ycsb.Status;
/**
* <a href="https://accumulo.apache.org/">Accumulo</a> binding for YCSB.
@@ -57,14 +65,11 @@ public class AccumuloClient extends DB {
private ZooKeeperInstance inst;
private Connector connector;
private String table = "";
private BatchWriter bw = null;
private Text colFam = new Text("");
private Scanner singleScanner = null; // A scanner for reads/deletes.
private Scanner scanScanner = null; // A scanner for use by scan()
private byte[] colFamBytes = new byte[0];
private final ConcurrentHashMap<String, BatchWriter> writers = new ConcurrentHashMap<>();
static {
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
@@ -76,6 +81,7 @@ public class AccumuloClient extends DB {
@Override
public void init() throws DBException {
colFam = new Text(getProperties().getProperty("accumulo.columnFamily"));
colFamBytes = colFam.toString().getBytes(UTF_8);
inst = new ZooKeeperInstance(
getProperties().getProperty("accumulo.instanceName"),
@@ -85,9 +91,7 @@ public class AccumuloClient extends DB {
AuthenticationToken token =
new PasswordToken(getProperties().getProperty("accumulo.password"));
connector = inst.getConnector(principal, token);
} catch (AccumuloException e) {
throw new DBException(e);
} catch (AccumuloSecurityException e) {
} catch (AccumuloException | AccumuloSecurityException e) {
throw new DBException(e);
}
@@ -100,45 +104,56 @@
@Override
public void cleanup() throws DBException {
try {
if (bw != null) {
bw.close();
Iterator<BatchWriter> iterator = writers.values().iterator();
while (iterator.hasNext()) {
BatchWriter writer = iterator.next();
writer.close();
iterator.remove();
}
} catch (MutationsRejectedException e) {
throw new DBException(e);
}
}
/**
* Commonly repeated functionality: Before doing any operation, make sure
* we're working on the correct table. If not, open the correct one.
*
* @param t
* The table to open.
*/
public void checkTable(String t) throws TableNotFoundException {
if (!table.equals(t)) {
getTable(t);
}
}
/**
* Called when the user specifies a table that isn't the same as the existing
* table. Connect to it and if necessary, close our current connection.
*
* @param t
* @param table
* The table to open.
*/
public void getTable(String t) throws TableNotFoundException {
if (bw != null) { // Close the existing writer if necessary.
try {
bw.close();
} catch (MutationsRejectedException e) {
// Couldn't spit out the mutations we wanted.
// Ignore this for now.
System.err.println("MutationsRejectedException: " + e.getMessage());
public BatchWriter getWriter(String table) throws TableNotFoundException {
// tl;dr We're paying a cost for the ConcurrentHashMap here to deal with the DB api.
// We know that YCSB is really only ever going to send us data for one table, so using
// a concurrent data structure is overkill (especially in such a hot code path).
// However, the impact seems to be relatively negligible in trivial local tests and it's
// "more correct" WRT to the API.
BatchWriter writer = writers.get(table);
if (null == writer) {
BatchWriter newWriter = createBatchWriter(table);
BatchWriter oldWriter = writers.putIfAbsent(table, newWriter);
// Someone beat us to creating a BatchWriter for this table, use their BatchWriter
if (null != oldWriter) {
try {
// Make sure to clean up our new batchwriter!
newWriter.close();
} catch (MutationsRejectedException e) {
throw new RuntimeException(e);
}
writer = oldWriter;
} else {
writer = newWriter;
}
}
return writer;
}
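The race handled in getWriter() above is the standard ConcurrentHashMap get-or-create pattern: optimistically build a candidate, publish it with putIfAbsent, and discard the candidate if another thread won. A minimal standalone sketch of that pattern (the `WriterCache` class and the `StringBuilder` stand-in for `BatchWriter` are illustrative, not the YCSB or Accumulo API):

```java
import java.util.concurrent.ConcurrentHashMap;

public class WriterCache {
  private final ConcurrentHashMap<String, StringBuilder> writers = new ConcurrentHashMap<>();

  public StringBuilder getWriter(String table) {
    StringBuilder writer = writers.get(table);
    if (writer == null) {
      StringBuilder candidate = new StringBuilder(table);
      // putIfAbsent returns the previously mapped value, or null if our candidate won.
      StringBuilder existing = writers.putIfAbsent(table, candidate);
      if (existing != null) {
        // Another thread published first: keep theirs, drop our candidate
        // (the real code closes the losing BatchWriter here).
        writer = existing;
      } else {
        writer = candidate;
      }
    }
    return writer;
  }
}
```

Every caller that passes the same table name observes the same instance, which is what lets cleanup() close each writer exactly once.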
/**
* Creates a BatchWriter with the expected configuration.
*
* @param table The table to write to
*/
private BatchWriter createBatchWriter(String table) throws TableNotFoundException {
BatchWriterConfig bwc = new BatchWriterConfig();
bwc.setMaxLatency(
Long.parseLong(getProperties()
@@ -146,16 +161,15 @@ public class AccumuloClient extends DB {
TimeUnit.MILLISECONDS);
bwc.setMaxMemory(Long.parseLong(
getProperties().getProperty("accumulo.batchWriterSize", "100000")));
bwc.setMaxWriteThreads(Integer.parseInt(
getProperties().getProperty("accumulo.batchWriterThreads", "1")));
bw = connector.createBatchWriter(t, bwc);
// Create our scanners
singleScanner = connector.createScanner(t, Authorizations.EMPTY);
scanScanner = connector.createScanner(t, Authorizations.EMPTY);
table = t; // Store the name of the table we have open.
final String numThreadsValue = getProperties().getProperty("accumulo.batchWriterThreads");
// Try to saturate the client machine.
int numThreads = Math.max(1, Runtime.getRuntime().availableProcessors() / 2);
if (null != numThreadsValue) {
numThreads = Integer.parseInt(numThreadsValue);
}
System.err.println("Using " + numThreads + " threads to write data");
bwc.setMaxWriteThreads(numThreads);
return connector.createBatchWriter(table, bwc);
}
/**
@@ -165,120 +179,120 @@
* @param fields the set of columns to scan
* @return an Accumulo {@link Scanner} bound to the given row and columns
*/
private Scanner getRow(Text row, Set<String> fields) {
singleScanner.clearColumns();
singleScanner.setRange(new Range(row));
private Scanner getRow(String table, Text row, Set<String> fields) throws TableNotFoundException {
Scanner scanner = connector.createScanner(table, Authorizations.EMPTY);
scanner.setRange(new Range(row));
if (fields != null) {
for (String field : fields) {
singleScanner.fetchColumn(colFam, new Text(field));
scanner.fetchColumn(colFam, new Text(field));
}
}
return singleScanner;
return scanner;
}
@Override
public Status read(String t, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
try {
checkTable(t);
} catch (TableNotFoundException e) {
System.err.println("Error trying to connect to Accumulo table." + e);
return Status.ERROR;
}
public Status read(String table, String key, Set<String> fields,
Map<String, ByteIterator> result) {
Scanner scanner = null;
try {
scanner = getRow(table, new Text(key), null);
// Pick out the results we care about.
for (Entry<Key, Value> entry : getRow(new Text(key), null)) {
final Text cq = new Text();
for (Entry<Key, Value> entry : scanner) {
entry.getKey().getColumnQualifier(cq);
Value v = entry.getValue();
byte[] buf = v.get();
result.put(entry.getKey().getColumnQualifier().toString(),
result.put(cq.toString(),
new ByteArrayByteIterator(buf));
}
} catch (Exception e) {
System.err.println("Error trying to reading Accumulo table" + key + e);
System.err.println("Error trying to reading Accumulo table " + table + " " + key);
e.printStackTrace();
return Status.ERROR;
} finally {
if (null != scanner) {
scanner.close();
}
}
return Status.OK;
}
@Override
public Status scan(String t, String startkey, int recordcount,
public Status scan(String table, String startkey, int recordcount,
Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {
try {
checkTable(t);
} catch (TableNotFoundException e) {
System.err.println("Error trying to connect to Accumulo table." + e);
return Status.ERROR;
}
// There doesn't appear to be a way to create a range for a given
// LENGTH. Just start and end keys. So we'll do this the hard way for
// now:
// Just make the end 'infinity' and only read as much as we need.
scanScanner.clearColumns();
scanScanner.setRange(new Range(new Text(startkey), null));
Scanner scanner = null;
try {
scanner = connector.createScanner(table, Authorizations.EMPTY);
scanner.setRange(new Range(new Text(startkey), null));
// Batch size is how many key/values to try to get per call. Here, I'm
// guessing that the number of keys in a row is equal to the number of
// fields we're interested in.
// Have Accumulo send us complete rows, serialized in a single Key-Value pair
IteratorSetting cfg = new IteratorSetting(100, WholeRowIterator.class);
scanner.addScanIterator(cfg);
// We try to fetch one more so as to tell when we've run out of fields.
// If no fields are provided, we assume one column/row.
if (fields != null) {
// And add each of them as fields we want.
for (String field : fields) {
scanScanner.fetchColumn(colFam, new Text(field));
// If no fields are provided, we assume one column/row.
if (fields != null) {
// And add each of them as fields we want.
for (String field : fields) {
scanner.fetchColumn(colFam, new Text(field));
}
}
}
String rowKey = "";
HashMap<String, ByteIterator> currentHM = null;
int count = 0;
// Begin the iteration.
for (Entry<Key, Value> entry : scanScanner) {
// Check for a new row.
if (!rowKey.equals(entry.getKey().getRow().toString())) {
int count = 0;
for (Entry<Key, Value> entry : scanner) {
// Deserialize the row
SortedMap<Key, Value> row = WholeRowIterator.decodeRow(entry.getKey(), entry.getValue());
HashMap<String, ByteIterator> rowData;
if (null != fields) {
rowData = new HashMap<>(fields.size());
} else {
rowData = new HashMap<>();
}
result.add(rowData);
// Parse the data in the row, avoid unnecessary Text object creation
final Text cq = new Text();
for (Entry<Key, Value> rowEntry : row.entrySet()) {
rowEntry.getKey().getColumnQualifier(cq);
rowData.put(cq.toString(), new ByteArrayByteIterator(rowEntry.getValue().get()));
}
if (count++ == recordcount) { // Done reading the last row.
break;
}
rowKey = entry.getKey().getRow().toString();
if (fields != null) {
// Initial Capacity for all keys.
currentHM = new HashMap<String, ByteIterator>(fields.size());
} else {
// An empty result map.
currentHM = new HashMap<String, ByteIterator>();
}
result.add(currentHM);
}
// Now add the key to the hashmap.
Value v = entry.getValue();
byte[] buf = v.get();
currentHM.put(entry.getKey().getColumnQualifier().toString(),
new ByteArrayByteIterator(buf));
} catch (TableNotFoundException e) {
System.err.println("Error trying to connect to Accumulo table.");
e.printStackTrace();
return Status.ERROR;
} catch (IOException e) {
System.err.println("Error deserializing data from Accumulo.");
e.printStackTrace();
return Status.ERROR;
} finally {
if (null != scanner) {
scanner.close();
}
}
return Status.OK;
}
@Override
public Status update(String t, String key,
HashMap<String, ByteIterator> values) {
public Status update(String table, String key,
Map<String, ByteIterator> values) {
BatchWriter bw = null;
try {
checkTable(t);
bw = getWriter(table);
} catch (TableNotFoundException e) {
System.err.println("Error trying to connect to Accumulo table." + e);
System.err.println("Error opening batch writer to Accumulo table " + table);
e.printStackTrace();
return Status.ERROR;
}
Mutation mutInsert = new Mutation(new Text(key));
Mutation mutInsert = new Mutation(key.getBytes(UTF_8));
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
mutInsert.put(colFam, new Text(entry.getKey()),
System.currentTimeMillis(), new Value(entry.getValue().toArray()));
mutInsert.put(colFamBytes, entry.getKey().getBytes(UTF_8), entry.getValue().toArray());
}
try {
@@ -289,27 +303,29 @@ public class AccumuloClient extends DB {
return Status.ERROR;
}
return Status.OK;
return Status.BATCHED_OK;
}
@Override
public Status insert(String t, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return update(t, key, values);
}
@Override
public Status delete(String t, String key) {
public Status delete(String table, String key) {
BatchWriter bw;
try {
checkTable(t);
bw = getWriter(table);
} catch (TableNotFoundException e) {
System.err.println("Error trying to connect to Accumulo table." + e);
System.err.println("Error trying to connect to Accumulo table.");
e.printStackTrace();
return Status.ERROR;
}
try {
deleteRow(new Text(key));
} catch (MutationsRejectedException e) {
deleteRow(table, new Text(key), bw);
} catch (TableNotFoundException | MutationsRejectedException e) {
System.err.println("Error performing delete.");
e.printStackTrace();
return Status.ERROR;
@@ -323,24 +339,31 @@ public class AccumuloClient extends DB {
}
// These functions are adapted from RowOperations.java:
private void deleteRow(Text row) throws MutationsRejectedException {
deleteRow(getRow(row, null));
private void deleteRow(String table, Text row, BatchWriter bw) throws MutationsRejectedException,
TableNotFoundException {
// TODO Use a batchDeleter instead
deleteRow(getRow(table, row, null), bw);
}
/**
* Deletes a row, given a Scanner of JUST that row.
*/
private void deleteRow(Scanner scanner) throws MutationsRejectedException {
private void deleteRow(Scanner scanner, BatchWriter bw) throws MutationsRejectedException {
Mutation deleter = null;
// iterate through the keys
final Text row = new Text();
final Text cf = new Text();
final Text cq = new Text();
for (Entry<Key, Value> entry : scanner) {
// create a mutation for the row
if (deleter == null) {
deleter = new Mutation(entry.getKey().getRow());
entry.getKey().getRow(row);
deleter = new Mutation(row);
}
entry.getKey().getColumnFamily(cf);
entry.getKey().getColumnQualifier(cq);
// the remove function adds the key with the delete flag set to true
deleter.putDelete(entry.getKey().getColumnFamily(),
entry.getKey().getColumnQualifier());
deleter.putDelete(cf, cq);
}
bw.addMutation(deleter);

View file

@@ -57,7 +57,7 @@ public class AerospikeClient extends com.yahoo.ycsb.DB {
@Override
public void init() throws DBException {
insertPolicy.recordExistsAction = RecordExistsAction.CREATE_ONLY;
updatePolicy.recordExistsAction = RecordExistsAction.UPDATE_ONLY;
updatePolicy.recordExistsAction = RecordExistsAction.REPLACE_ONLY;
Properties props = getProperties();
@@ -98,7 +98,7 @@ public class AerospikeClient extends com.yahoo.ycsb.DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
Record record;
@@ -134,7 +134,7 @@ public class AerospikeClient extends com.yahoo.ycsb.DB {
}
private Status write(String table, String key, WritePolicy writePolicy,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
Bin[] bins = new Bin[values.size()];
int index = 0;
@@ -156,13 +156,13 @@ public class AerospikeClient extends com.yahoo.ycsb.DB {
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return write(table, key, updatePolicy, values);
}
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return write(table, key, insertPolicy, values);
}

View file

@@ -187,7 +187,7 @@ public class ArangoDBClient extends DB {
* {@link DB} class's description for a discussion of error codes.
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
BaseDocument toInsert = new BaseDocument(key);
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
@@ -225,7 +225,7 @@ public class ArangoDBClient extends DB {
*/
@SuppressWarnings("unchecked")
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
DocumentEntity<BaseDocument> targetDoc = arangoDriver.getDocument(table, key, BaseDocument.class);
BaseDocument aDocument = targetDoc.getEntity();
@@ -261,7 +261,7 @@ public class ArangoDBClient extends DB {
* description for a discussion of error codes.
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
if (!transactionUpdate) {
@@ -455,8 +455,8 @@ public class ArangoDBClient extends DB {
return new StringByteIterator(content);
}
private String mapToJson(HashMap<String, ByteIterator> values) {
HashMap<String, String> intervalRst = new HashMap<String, String>();
private String mapToJson(Map<String, ByteIterator> values) {
Map<String, String> intervalRst = new HashMap<String, String>();
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
intervalRst.put(entry.getKey(), byteIteratorToString(entry.getValue()));
}

View file

@@ -175,7 +175,7 @@ public class ArangoDB3Client extends DB {
* {@link DB} class's description for a discussion of error codes.
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
BaseDocument toInsert = new BaseDocument(key);
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
@@ -205,7 +205,7 @@ public class ArangoDB3Client extends DB {
* @return Zero on success, a non-zero error code on error or "not found".
*/
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
VPackSlice document = arangoDB.db(databaseName).collection(table).getDocument(key, VPackSlice.class, null);
if (!this.fillMap(result, document, fields)) {
@@ -233,7 +233,7 @@ public class ArangoDB3Client extends DB {
* description for a discussion of error codes.
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
if (!transactionUpdate) {
BaseDocument updateDoc = new BaseDocument();
@@ -414,7 +414,7 @@ public class ArangoDB3Client extends DB {
return new StringByteIterator(content);
}
private String mapToJson(HashMap<String, ByteIterator> values) {
private String mapToJson(Map<String, ByteIterator> values) {
VPackBuilder builder = new VPackBuilder().add(ValueType.OBJECT);
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
builder.add(entry.getKey(), byteIteratorToString(entry.getValue()));

View file

@@ -21,6 +21,7 @@ import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.Vector;
@@ -196,7 +197,7 @@ public class AsyncHBaseClient extends com.yahoo.ycsb.DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
setTable(table);
final GetRequest get = new GetRequest(
@@ -299,7 +300,7 @@ public class AsyncHBaseClient extends com.yahoo.ycsb.DB {
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
setTable(table);
if (debug) {
@@ -347,7 +348,7 @@ public class AsyncHBaseClient extends com.yahoo.ycsb.DB {
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return update(table, key, values);
}

View file

@@ -23,6 +23,7 @@ import com.microsoft.azure.documentdb.DocumentCollection;
import com.microsoft.azure.documentdb.FeedOptions;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.Vector;
@@ -74,7 +75,7 @@ public class AzureDocumentDBClient extends DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
Document record = getDocumentById(table, key);
if (record != null) {
@@ -95,7 +96,7 @@ public class AzureDocumentDBClient extends DB {
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
Document record = getDocumentById(table, key);
if (record == null) {
@@ -120,7 +121,7 @@ public class AzureDocumentDBClient extends DB {
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
Document record = new Document();
record.set("id", key);

View file

@@ -35,6 +35,7 @@ import com.yahoo.ycsb.Status;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
@@ -105,7 +106,7 @@ public class AzureClient extends DB {
@Override
public Status read(String table, String key, Set<String> fields,
final HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
if (fields != null) {
return readSubset(key, fields, result);
} else {
@@ -145,12 +146,12 @@
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
return insertOrUpdate(key, values);
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
if (batchSize == 1) {
return insertOrUpdate(key, values);
} else {
@@ -187,7 +188,7 @@
/*
* Read subset of properties instead of full fields with projection.
*/
public Status readSubset(String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status readSubset(String key, Set<String> fields, Map<String, ByteIterator> result) {
String whereStr = String.format("RowKey eq '%s'", key);
TableQuery<TableServiceEntity> projectionQuery = TableQuery.from(
@@ -220,7 +221,7 @@
}
}
private Status readEntity(String key, HashMap<String, ByteIterator> result) {
private Status readEntity(String key, Map<String, ByteIterator> result) {
try {
// firstly, retrieve the entity to be deleted
TableOperation retrieveOp =
@@ -238,7 +239,7 @@
}
}
private Status insertBatch(String key, HashMap<String, ByteIterator> values) {
private Status insertBatch(String key, Map<String, ByteIterator> values) {
HashMap<String, EntityProperty> properties = new HashMap<String, EntityProperty>();
for (Entry<String, ByteIterator> entry : values.entrySet()) {
String fieldName = entry.getKey();
@@ -259,7 +260,7 @@
return Status.OK;
}
private Status insertOrUpdate(String key, HashMap<String, ByteIterator> values) {
private Status insertOrUpdate(String key, Map<String, ByteIterator> values) {
HashMap<String, EntityProperty> properties = new HashMap<String, EntityProperty>();
for (Entry<String, ByteIterator> entry : values.entrySet()) {
String fieldName = entry.getKey();

View file

@@ -35,6 +35,7 @@ azuretablestorage:com.yahoo.ycsb.db.azuretablestorage.AzureClient
basic:com.yahoo.ycsb.BasicDB
cassandra-cql:com.yahoo.ycsb.db.CassandraCQLClient
cassandra2-cql:com.yahoo.ycsb.db.CassandraCQLClient
cloudspanner:com.yahoo.ycsb.db.cloudspanner.CloudSpannerClient
couchbase:com.yahoo.ycsb.db.CouchbaseClient
couchbase2:com.yahoo.ycsb.db.couchbase2.Couchbase2Client
dynamodb:com.yahoo.ycsb.db.DynamoDBClient
@@ -46,6 +47,7 @@ googledatastore:com.yahoo.ycsb.db.GoogleDatastoreClient
hbase094:com.yahoo.ycsb.db.HBaseClient
hbase098:com.yahoo.ycsb.db.HBaseClient
hbase10:com.yahoo.ycsb.db.HBaseClient10
hbase12:com.yahoo.ycsb.db.hbase12.HBaseClient12
hypertable:com.yahoo.ycsb.db.HypertableClient
infinispan-cs:com.yahoo.ycsb.db.InfinispanRemoteClient
infinispan:com.yahoo.ycsb.db.InfinispanClient

View file

@@ -54,14 +54,15 @@ DATABASES = {
"accumulo" : "com.yahoo.ycsb.db.accumulo.AccumuloClient",
"aerospike" : "com.yahoo.ycsb.db.AerospikeClient",
"arangodb" : "com.yahoo.ycsb.db.ArangoDBClient",
"arangodb3" : "com.yahoo.ycsb.db.arangodb.ArangoDB3Client",
"arangodb3" : "com.yahoo.ycsb.db.arangodb.ArangoDB3Client",
"asynchbase" : "com.yahoo.ycsb.db.AsyncHBaseClient",
"basic" : "com.yahoo.ycsb.BasicDB",
"cassandra-cql": "com.yahoo.ycsb.db.CassandraCQLClient",
"cassandra2-cql": "com.yahoo.ycsb.db.CassandraCQLClient",
"cloudspanner" : "com.yahoo.ycsb.db.cloudspanner.CloudSpannerClient",
"couchbase" : "com.yahoo.ycsb.db.CouchbaseClient",
"couchbase2" : "com.yahoo.ycsb.db.couchbase2.Couchbase2Client",
"azuredocumentdb" : "com.yahoo.ycsb.db.azuredocumentdb.AzureDocumentDBClient",
"azuredocumentdb" : "com.yahoo.ycsb.db.azuredocumentdb.AzureDocumentDBClient",
"dynamodb" : "com.yahoo.ycsb.db.DynamoDBClient",
"elasticsearch": "com.yahoo.ycsb.db.ElasticsearchClient",
"geode" : "com.yahoo.ycsb.db.GeodeClient",
@@ -70,6 +71,7 @@ DATABASES = {
"hbase094" : "com.yahoo.ycsb.db.HBaseClient",
"hbase098" : "com.yahoo.ycsb.db.HBaseClient",
"hbase10" : "com.yahoo.ycsb.db.HBaseClient10",
"hbase12" : "com.yahoo.ycsb.db.hbase12.HBaseClient12",
"hypertable" : "com.yahoo.ycsb.db.HypertableClient",
"infinispan-cs": "com.yahoo.ycsb.db.InfinispanRemoteClient",
"infinispan" : "com.yahoo.ycsb.db.InfinispanClient",

View file

@@ -239,7 +239,7 @@ public class CassandraCQLClient extends DB {
*/
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
Statement stmt;
Select.Builder selectBuilder;
@@ -402,7 +402,7 @@ public class CassandraCQLClient extends DB {
*/
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
// Insert and updates provide the same functionality
return insert(table, key, values);
}
@@ -422,7 +422,7 @@ public class CassandraCQLClient extends DB {
*/
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try {
Insert insertStmt = QueryBuilder.insertInto(table);

View file

@@ -157,7 +157,7 @@ public class CassandraCQLClientTest {
@Test
public void testUpdate() throws Exception {
final String key = "key";
final HashMap<String, String> input = new HashMap<String, String>();
final Map<String, String> input = new HashMap<String, String>();
input.put("field0", "value1");
input.put("field1", "value2");

cloudspanner/README.md (new file, 111 lines)
View file

@@ -0,0 +1,111 @@
<!--
Copyright (c) 2017 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License. See accompanying
LICENSE file.
-->
# Cloud Spanner Driver for YCSB
This driver provides a YCSB workload binding for Google's Cloud Spanner database, the first relational database service that is both strongly consistent and horizontally scalable. This binding is implemented using the official Java client library for Cloud Spanner which uses GRPC for making calls.
For best results, we strongly recommend running the benchmark from a Google Compute Engine (GCE) VM.
## Running a Workload
We recommend reading the [general guidelines](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload) in the YCSB documentation, and following the Cloud Spanner specific steps below.
### 1. Set up Cloud Spanner with the Expected Schema
Follow the [Quickstart instructions](https://cloud.google.com/spanner/docs/quickstart-console) in the Cloud Spanner documentation to set up a Cloud Spanner instance, and create a database with the following schema:
```
CREATE TABLE usertable (
id STRING(MAX),
field0 STRING(MAX),
field1 STRING(MAX),
field2 STRING(MAX),
field3 STRING(MAX),
field4 STRING(MAX),
field5 STRING(MAX),
field6 STRING(MAX),
field7 STRING(MAX),
field8 STRING(MAX),
field9 STRING(MAX),
) PRIMARY KEY(id);
```
Make note of your project ID, instance ID, and database name.
### 2. Set Up Your Environment and Auth
Follow the [set up instructions](https://cloud.google.com/spanner/docs/getting-started/set-up) in the Cloud Spanner documentation to set up your environment and authentication. When not running on a GCE VM, make sure you run `gcloud auth application-default login`.
### 3. Edit Properties
In your YCSB root directory, edit `cloudspanner/conf/cloudspanner.properties` and specify your project ID, instance ID, and database name.
### 4. Run the YCSB Shell
Start the YCSB shell connected to Cloud Spanner using the following command:
```
./bin/ycsb shell cloudspanner -P cloudspanner/conf/cloudspanner.properties
```
You can use the `insert`, `read`, `update`, `scan`, and `delete` commands in the shell to experiment with your database and make sure the connection works. For example, try the following:
```
insert name field0=adam
read name field0
delete name
```
### 5. Load the Data
You can load, say, 10 GB of data into your YCSB database using the following command:
```
./bin/ycsb load cloudspanner -P cloudspanner/conf/cloudspanner.properties -P workloads/workloada -p recordcount=10000000 -p cloudspanner.batchinserts=1000 -threads 10 -s
```
We recommend batching insertions so as to reach ~1 MB of data per commit request; this is controlled via the `cloudspanner.batchinserts` parameter which we recommend setting to `1000` during data load.
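As a rough back-of-the-envelope check (assuming the core workload defaults of 10 fields of 100 bytes each; your record size may differ), a batch of 1000 records comes to about 1 MB per commit:

```shell
# Assumed defaults: fieldcount=10, fieldlength=100 bytes.
BYTES_PER_RECORD=$((10 * 100))
BATCH=1000
echo "$((BYTES_PER_RECORD * BATCH)) bytes per commit"   # roughly 1 MB
```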
If you wish to load a large database, you can run YCSB on multiple client VMs in parallel and use the `insertstart` and `insertcount` parameters to distribute the load as described [here](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload-in-Parallel). In this case, we recommend the following:
* Use ordered inserts via specifying the YCSB parameter `insertorder=ordered`;
* Use zero-padding so that ordered inserts are actually lexicographically ordered; the option `zeropadding = 12` is set in the default `cloudspanner.properties` file;
* Split the key range evenly between client VMs;
* Use few threads on each client VM, so that each individual commit request contains keys which are (close to) consecutive, and would thus likely address a single split; this also helps avoid overloading the servers.
The idea is that we have a number of 'write heads' which are all writing to different parts of the database (and thus talking to different servers), but each individual head is writing its own data (more or less) in order. See the [best practices page](https://cloud.google.com/spanner/docs/best-practices#loading_data) for further details.
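The even split above can be sketched as follows (the record total and VM count are example values; `insertstart`/`insertcount` are the standard YCSB parameters):

```shell
# Evenly divide the key range across client VMs.
TOTAL=10000000   # example recordcount
VMS=4            # example number of client VMs
PER=$((TOTAL / VMS))
for i in $(seq 0 $((VMS - 1))); do
  echo "VM $i: -p insertstart=$((i * PER)) -p insertcount=$PER"
done
```

Each VM then runs the load command shown earlier with its own pair of flags appended.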
### 6. Run a Workload
After the data load, you can run a workload, say workload B, using the following command:
```
./bin/ycsb run cloudspanner -P cloudspanner/conf/cloudspanner.properties -P workloads/workloadb -p recordcount=10000000 -p operationcount=1000000 -threads 10 -s
```
Make sure that you use the same `insertorder` (i.e. `ordered` or `hashed`) and `zeropadding` as specified during the data load. Further details about running workloads are given in the [YCSB wiki pages](https://github.com/brianfrankcooper/YCSB/wiki/Running-a-Workload).
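To see why the padding matters for ordered inserts: YCSB prefixes numeric keys with `user`, and with `zeropadding=12` the number is left-padded with zeros so that lexicographic order matches numeric order (a quick illustration, not YCSB's actual key-generation code):

```shell
printf 'user%012d\n' 9    # user000000000009
printf 'user%012d\n' 10   # user000000000010 -- sorts after 9, as desired
# Without padding, "user9" would sort after "user10" lexicographically.
```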
## Configuration Options
In addition to the standard YCSB parameters, the following Cloud Spanner specific options can be configured using the `-p` parameter or in `cloudspanner/conf/cloudspanner.properties`.
* `cloudspanner.database`: (Required) The name of the database created in the instance, e.g. `ycsb-database`.
* `cloudspanner.instance`: (Required) The ID of the Cloud Spanner instance, e.g. `ycsb-instance`.
* `cloudspanner.project`: The ID of the project containing the Cloud Spanner instance, e.g. `myproject`. This is not strictly required and can often be automatically inferred from the environment.
* `cloudspanner.readmode`: Allows choosing between the `read` and `query` interface of Cloud Spanner. The default is `query`.
* `cloudspanner.batchinserts`: The number of inserts to batch into a single commit request. The default value is 1 which means no batching is done. Recommended value during data load is 1000.
* `cloudspanner.boundedstaleness`: Number of seconds we allow reads to be stale for. Set to 0 for strong reads (default). For performance gains, this should be set to 10 seconds.

View file

@@ -0,0 +1,26 @@
# Copyright (c) 2017 YCSB contributors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you
# may not use this file except in compliance with the License. You
# may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License. See accompanying
# LICENSE file.
# Core YCSB properties.
table = usertable
zeropadding = 12
# Cloud Spanner properties
cloudspanner.instance = ycsb-instance
cloudspanner.database = ycsb-database
cloudspanner.readmode = query
cloudspanner.boundedstaleness = 0
cloudspanner.batchinserts = 1

cloudspanner/pom.xml (new file, 53 lines)
View file

@@ -0,0 +1,53 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2017 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License. See accompanying
LICENSE file.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>binding-parent</artifactId>
<version>0.13.0-SNAPSHOT</version>
<relativePath>../binding-parent/</relativePath>
</parent>
<artifactId>cloudspanner-binding</artifactId>
<name>Cloud Spanner DB Binding</name>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-spanner</artifactId>
<version>${cloudspanner.version}</version>
<exclusions>
<exclusion> <!-- exclude an old version of Guava -->
<groupId>com.google.guava</groupId>
<artifactId>guava-jdk5</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>core</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
</dependencies>
</project>

View file

@@ -0,0 +1,397 @@
/**
* Copyright (c) 2017 YCSB contributors. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
* may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License. See accompanying
* LICENSE file.
*/
package com.yahoo.ycsb.db.cloudspanner;
import com.google.common.base.Joiner;
import com.google.cloud.spanner.DatabaseId;
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.Key;
import com.google.cloud.spanner.KeySet;
import com.google.cloud.spanner.KeyRange;
import com.google.cloud.spanner.Mutation;
import com.google.cloud.spanner.Options;
import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.SessionPoolOptions;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;
import com.google.cloud.spanner.Statement;
import com.google.cloud.spanner.Struct;
import com.google.cloud.spanner.StructReader;
import com.google.cloud.spanner.TimestampBound;
import com.yahoo.ycsb.ByteIterator;
import com.yahoo.ycsb.Client;
import com.yahoo.ycsb.DB;
import com.yahoo.ycsb.DBException;
import com.yahoo.ycsb.Status;
import com.yahoo.ycsb.StringByteIterator;
import com.yahoo.ycsb.workloads.CoreWorkload;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.Vector;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.concurrent.TimeUnit;
/**
* YCSB Client for Google's Cloud Spanner.
*/
public class CloudSpannerClient extends DB {
/**
* The names of properties which can be specified in the config files and flags.
*/
public static final class CloudSpannerProperties {
private CloudSpannerProperties() {}
/**
* The Cloud Spanner database name to use when running the YCSB benchmark, e.g. 'ycsb-database'.
*/
static final String DATABASE = "cloudspanner.database";
/**
* The Cloud Spanner instance ID to use when running the YCSB benchmark, e.g. 'ycsb-instance'.
*/
static final String INSTANCE = "cloudspanner.instance";
/**
* Choose between 'read' and 'query'. Affects both read() and scan() operations.
*/
static final String READ_MODE = "cloudspanner.readmode";
/**
* The number of inserts to batch during the bulk loading phase. The default value is 1, which means no batching
* is done. Recommended value during data load is 1000.
*/
static final String BATCH_INSERTS = "cloudspanner.batchinserts";
/**
* Number of seconds we allow reads to be stale for. Set to 0 for strong reads (default).
* For performance gains, this should be set to 10 seconds.
*/
static final String BOUNDED_STALENESS = "cloudspanner.boundedstaleness";
// The properties below usually do not need to be set explicitly.
/**
* The Cloud Spanner project ID to use when running the YCSB benchmark, e.g. 'myproject'. This is not strictly
* necessary and can often be inferred from the environment.
*/
static final String PROJECT = "cloudspanner.project";
/**
* The Cloud Spanner host name to use in the YCSB run.
*/
static final String HOST = "cloudspanner.host";
/**
* Number of Cloud Spanner client channels to use. It's recommended to leave this to be the default value.
*/
static final String NUM_CHANNELS = "cloudspanner.channels";
}
private static int fieldCount;
private static boolean queriesForReads;
private static int batchInserts;
private static TimestampBound timestampBound;
private static String standardQuery;
private static String standardScan;
private static final ArrayList<String> STANDARD_FIELDS = new ArrayList<>();
private static final String PRIMARY_KEY_COLUMN = "id";
private static final Logger LOGGER = Logger.getLogger(CloudSpannerClient.class.getName());
// Static lock for the class.
private static final Object CLASS_LOCK = new Object();
// Single Spanner client per process.
private static Spanner spanner = null;
// Single database client per process.
private static DatabaseClient dbClient = null;
// Buffered mutations on a per object/thread basis for batch inserts.
// Note that we have a separate CloudSpannerClient object per thread.
private final ArrayList<Mutation> bufferedMutations = new ArrayList<>();
private static void constructStandardQueriesAndFields(Properties properties) {
String table = properties.getProperty(CoreWorkload.TABLENAME_PROPERTY, CoreWorkload.TABLENAME_PROPERTY_DEFAULT);
standardQuery = new StringBuilder()
.append("SELECT * FROM ").append(table).append(" WHERE id=@key").toString();
standardScan = new StringBuilder()
.append("SELECT * FROM ").append(table).append(" WHERE id>=@startKey LIMIT @count").toString();
for (int i = 0; i < fieldCount; i++) {
STANDARD_FIELDS.add("field" + i);
}
}
private static Spanner getSpanner(Properties properties, String host, String project) {
if (spanner != null) {
return spanner;
}
String numChannels = properties.getProperty(CloudSpannerProperties.NUM_CHANNELS);
int numThreads = Integer.parseInt(properties.getProperty(Client.THREAD_COUNT_PROPERTY, "1"));
SpannerOptions.Builder optionsBuilder = SpannerOptions.newBuilder()
.setSessionPoolOption(SessionPoolOptions.newBuilder()
.setMinSessions(numThreads)
// Since we have no read-write transactions, we can set the write session fraction to 0.
.setWriteSessionsFraction(0)
.build());
if (host != null) {
optionsBuilder.setHost(host);
}
if (project != null) {
optionsBuilder.setProjectId(project);
}
if (numChannels != null) {
optionsBuilder.setNumChannels(Integer.parseInt(numChannels));
}
spanner = optionsBuilder.build().getService();
Runtime.getRuntime().addShutdownHook(new Thread("spannerShutdown") {
@Override
public void run() {
spanner.close();
}
});
return spanner;
}
@Override
public void init() throws DBException {
synchronized (CLASS_LOCK) {
if (dbClient != null) {
return;
}
Properties properties = getProperties();
String host = properties.getProperty(CloudSpannerProperties.HOST);
String project = properties.getProperty(CloudSpannerProperties.PROJECT);
String instance = properties.getProperty(CloudSpannerProperties.INSTANCE, "ycsb-instance");
String database = properties.getProperty(CloudSpannerProperties.DATABASE, "ycsb-database");
fieldCount = Integer.parseInt(properties.getProperty(
CoreWorkload.FIELD_COUNT_PROPERTY, CoreWorkload.FIELD_COUNT_PROPERTY_DEFAULT));
queriesForReads = properties.getProperty(CloudSpannerProperties.READ_MODE, "query").equals("query");
batchInserts = Integer.parseInt(properties.getProperty(CloudSpannerProperties.BATCH_INSERTS, "1"));
constructStandardQueriesAndFields(properties);
int boundedStalenessSeconds = Integer.parseInt(properties.getProperty(
CloudSpannerProperties.BOUNDED_STALENESS, "0"));
timestampBound = (boundedStalenessSeconds <= 0) ?
TimestampBound.strong() : TimestampBound.ofMaxStaleness(boundedStalenessSeconds, TimeUnit.SECONDS);
try {
spanner = getSpanner(properties, host, project);
if (project == null) {
project = spanner.getOptions().getProjectId();
}
dbClient = spanner.getDatabaseClient(DatabaseId.of(project, instance, database));
} catch (Exception e) {
LOGGER.log(Level.SEVERE, "init()", e);
throw new DBException(e);
}
LOGGER.log(Level.INFO, new StringBuilder()
.append("\nHost: ").append(spanner.getOptions().getHost())
.append("\nProject: ").append(project)
.append("\nInstance: ").append(instance)
.append("\nDatabase: ").append(database)
.append("\nUsing queries for reads: ").append(queriesForReads)
.append("\nBatching inserts: ").append(batchInserts)
.append("\nBounded staleness seconds: ").append(boundedStalenessSeconds)
.toString());
}
}
private Status readUsingQuery(
String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
Statement query;
Iterable<String> columns = fields == null ? STANDARD_FIELDS : fields;
if (fields == null || fields.size() == fieldCount) {
query = Statement.newBuilder(standardQuery).bind("key").to(key).build();
} else {
Joiner joiner = Joiner.on(',');
query = Statement.newBuilder("SELECT ")
.append(joiner.join(fields))
.append(" FROM ")
.append(table)
.append(" WHERE id=@key")
.bind("key").to(key)
.build();
}
try (ResultSet resultSet = dbClient.singleUse(timestampBound).executeQuery(query)) {
resultSet.next();
decodeStruct(columns, resultSet, result);
if (resultSet.next()) {
throw new Exception("Expected exactly one row for each read.");
}
return Status.OK;
} catch (Exception e) {
LOGGER.log(Level.INFO, "readUsingQuery()", e);
return Status.ERROR;
}
}
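The branch above assembles a narrower SELECT when only a subset of fields is requested. A standalone sketch of that string assembly (class name hypothetical; `String.join` stands in for Guava's `Joiner`, and `usertable` is just an example table):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical demo class: shows how a per-field SELECT like the one in
// readUsingQuery() is assembled when fields is a proper subset.
public final class QueryBuildDemo {
  // String.join stands in for Guava's Joiner.on(',').join(fields).
  static String buildSelect(List<String> fields, String table) {
    return "SELECT " + String.join(",", fields) + " FROM " + table + " WHERE id=@key";
  }

  public static void main(String[] args) {
    System.out.println(buildSelect(Arrays.asList("field0", "field3"), "usertable"));
    // SELECT field0,field3 FROM usertable WHERE id=@key
  }
}
```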
@Override
public Status read(
String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
if (queriesForReads) {
return readUsingQuery(table, key, fields, result);
}
Iterable<String> columns = fields == null ? STANDARD_FIELDS : fields;
try {
Struct row = dbClient.singleUse(timestampBound).readRow(table, Key.of(key), columns);
decodeStruct(columns, row, result);
return Status.OK;
} catch (Exception e) {
LOGGER.log(Level.INFO, "read()", e);
return Status.ERROR;
}
}
private Status scanUsingQuery(
String table, String startKey, int recordCount, Set<String> fields,
Vector<HashMap<String, ByteIterator>> result) {
Iterable<String> columns = fields == null ? STANDARD_FIELDS : fields;
Statement query;
if (fields == null || fields.size() == fieldCount) {
query = Statement.newBuilder(standardScan).bind("startKey").to(startKey).bind("count").to(recordCount).build();
} else {
Joiner joiner = Joiner.on(',');
query = Statement.newBuilder("SELECT ")
.append(joiner.join(fields))
.append(" FROM ")
.append(table)
.append(" WHERE id>=@startKey LIMIT @count")
.bind("startKey").to(startKey)
.bind("count").to(recordCount)
.build();
}
try (ResultSet resultSet = dbClient.singleUse(timestampBound).executeQuery(query)) {
while (resultSet.next()) {
HashMap<String, ByteIterator> row = new HashMap<>();
decodeStruct(columns, resultSet, row);
result.add(row);
}
return Status.OK;
} catch (Exception e) {
LOGGER.log(Level.INFO, "scanUsingQuery()", e);
return Status.ERROR;
}
}
@Override
public Status scan(
String table, String startKey, int recordCount, Set<String> fields,
Vector<HashMap<String, ByteIterator>> result) {
if (queriesForReads) {
return scanUsingQuery(table, startKey, recordCount, fields, result);
}
Iterable<String> columns = fields == null ? STANDARD_FIELDS : fields;
KeySet keySet =
KeySet.newBuilder().addRange(KeyRange.closedClosed(Key.of(startKey), Key.of())).build();
try (ResultSet resultSet = dbClient.singleUse(timestampBound)
.read(table, keySet, columns, Options.limit(recordCount))) {
while (resultSet.next()) {
HashMap<String, ByteIterator> row = new HashMap<>();
decodeStruct(columns, resultSet, row);
result.add(row);
}
return Status.OK;
} catch (Exception e) {
LOGGER.log(Level.INFO, "scan()", e);
return Status.ERROR;
}
}
@Override
public Status update(String table, String key, Map<String, ByteIterator> values) {
Mutation.WriteBuilder m = Mutation.newInsertOrUpdateBuilder(table);
m.set(PRIMARY_KEY_COLUMN).to(key);
for (Map.Entry<String, ByteIterator> e : values.entrySet()) {
m.set(e.getKey()).to(e.getValue().toString());
}
try {
dbClient.writeAtLeastOnce(Arrays.asList(m.build()));
} catch (Exception e) {
LOGGER.log(Level.INFO, "update()", e);
return Status.ERROR;
}
return Status.OK;
}
@Override
public Status insert(String table, String key, Map<String, ByteIterator> values) {
if (bufferedMutations.size() < batchInserts) {
Mutation.WriteBuilder m = Mutation.newInsertOrUpdateBuilder(table);
m.set(PRIMARY_KEY_COLUMN).to(key);
for (Map.Entry<String, ByteIterator> e : values.entrySet()) {
m.set(e.getKey()).to(e.getValue().toString());
}
bufferedMutations.add(m.build());
} else {
LOGGER.log(Level.INFO, "Limit of cached mutations reached. The given mutation with key " + key +
" is ignored. Is this a retry?");
}
if (bufferedMutations.size() < batchInserts) {
return Status.BATCHED_OK;
}
try {
dbClient.writeAtLeastOnce(bufferedMutations);
bufferedMutations.clear();
} catch (Exception e) {
LOGGER.log(Level.INFO, "insert()", e);
return Status.ERROR;
}
return Status.OK;
}
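The insert() path buffers mutations per thread and only writes once `batchInserts` of them have accumulated. The same buffer-then-flush shape, reduced to plain collections (class name hypothetical; the `flushes` counter stands in for calls to `dbClient.writeAtLeastOnce`):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of insert()'s batching: add until the buffer reaches
// batchSize, then "flush" and clear, mirroring the bufferedMutations logic.
public final class BatchBuffer<T> {
  final List<T> buffer = new ArrayList<>();
  final int batchSize;
  int flushes = 0; // stand-in for calls to dbClient.writeAtLeastOnce(buffer)

  BatchBuffer(int batchSize) {
    this.batchSize = batchSize;
  }

  void add(T item) {
    buffer.add(item);
    if (buffer.size() >= batchSize) {
      flushes++;
      buffer.clear();
    }
  }

  public static void main(String[] args) {
    BatchBuffer<String> b = new BatchBuffer<>(3);
    for (int i = 0; i < 7; i++) {
      b.add("mutation-" + i);
    }
    // 7 adds with batch size 3: two flushes, one mutation still buffered.
    System.out.println(b.flushes + " flushes, " + b.buffer.size() + " buffered");
  }
}
```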
@Override
public void cleanup() {
try {
if (bufferedMutations.size() > 0) {
dbClient.writeAtLeastOnce(bufferedMutations);
bufferedMutations.clear();
}
} catch (Exception e) {
LOGGER.log(Level.INFO, "cleanup()", e);
}
}
@Override
public Status delete(String table, String key) {
try {
dbClient.writeAtLeastOnce(Arrays.asList(Mutation.delete(table, Key.of(key))));
} catch (Exception e) {
LOGGER.log(Level.INFO, "delete()", e);
return Status.ERROR;
}
return Status.OK;
}
private static void decodeStruct(
Iterable<String> columns, StructReader structReader, Map<String, ByteIterator> result) {
for (String col : columns) {
result.put(col, new StringByteIterator(structReader.getString(col)));
}
}
}

View file

@ -0,0 +1,22 @@
/*
* Copyright (c) 2017 YCSB contributors. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
* may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License. See accompanying
* LICENSE file.
*/
/**
* The YCSB binding for Google's <a href="https://cloud.google.com/spanner/">
* Cloud Spanner</a>.
*/
package com.yahoo.ycsb.db.cloudspanner;

View file

@ -18,6 +18,7 @@
package com.yahoo.ycsb;
import java.util.*;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;
@ -107,7 +108,7 @@ public class BasicDB extends DB {
* @param result A HashMap of field/value pairs for the result
* @return Zero on success, a non-zero error code on error
*/
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
delay();
if (verbose) {
@ -170,7 +171,7 @@ public class BasicDB extends DB {
* @param values A HashMap of field/value pairs to update in the record
* @return Zero on success, a non-zero error code on error
*/
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
delay();
if (verbose) {
@ -197,7 +198,7 @@ public class BasicDB extends DB {
* @param values A HashMap of field/value pairs to insert in the record
* @return Zero on success, a non-zero error code on error
*/
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
delay();
if (verbose) {

View file

@ -20,6 +20,7 @@ package com.yahoo.ycsb;
* A ByteIterator that iterates through a byte array.
*/
public class ByteArrayByteIterator extends ByteIterator {
private final int originalOffset;
private byte[] str;
private int off;
private final int len;
@ -28,12 +29,14 @@ public class ByteArrayByteIterator extends ByteIterator {
this.str = s;
this.off = 0;
this.len = s.length;
originalOffset = 0;
}
public ByteArrayByteIterator(byte[] s, int off, int len) {
this.str = s;
this.off = off;
this.len = off + len;
originalOffset = off;
}
@Override
@ -53,4 +56,9 @@ public class ByteArrayByteIterator extends ByteIterator {
return len - off;
}
@Override
public void reset() {
off = originalOffset;
}
}

View file

@ -73,6 +73,15 @@ public abstract class ByteIterator implements Iterator<Byte> {
throw new UnsupportedOperationException();
}
/** Resets the iterator so that it can be consumed again. Not all
* implementations support this call.
* @throws UnsupportedOperationException if the implementation hasn't implemented
* the method.
*/
public void reset() {
throw new UnsupportedOperationException();
}
/** Consumes remaining contents of this object, and returns them as a string. */
public String toString() {
Charset cset = Charset.forName("UTF-8");

View file

@ -405,6 +405,14 @@ class ClientThread implements Runnable {
this.completeLatch = completeLatch;
}
public void setThreadId(final int threadId) {
threadid = threadId;
}
public void setThreadCount(final int threadCount) {
threadcount = threadCount;
}
public int getOpsDone() {
return opsdone;
}
@ -877,7 +885,8 @@ public final class Client {
ClientThread t = new ClientThread(db, dotransactions, workload, props, threadopcount, targetperthreadperms,
completeLatch);
t.setThreadId(threadid);
t.setThreadCount(threadcount);
clients.add(t);
}

View file

@ -303,7 +303,7 @@ public final class CommandLine {
} else {
System.out.println("--------------------------------");
}
for (HashMap<String, ByteIterator> result : results) {
for (Map<String, ByteIterator> result : results) {
System.out.println("Record " + (record++));
for (Map.Entry<String, ByteIterator> ent : result.entrySet()) {
System.out.println(ent.getKey() + "=" + ent.getValue());

View file

@ -18,6 +18,7 @@
package com.yahoo.ycsb;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.Vector;
@ -85,7 +86,7 @@ public abstract class DB {
* @param result A HashMap of field/value pairs for the result
* @return The result of the operation.
*/
public abstract Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result);
public abstract Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result);
/**
* Perform a range scan for a set of records in the database. Each field/value pair from the result will be stored
@ -110,7 +111,7 @@ public abstract class DB {
* @param values A HashMap of field/value pairs to update in the record
* @return The result of the operation.
*/
public abstract Status update(String table, String key, HashMap<String, ByteIterator> values);
public abstract Status update(String table, String key, Map<String, ByteIterator> values);
/**
* Insert a record in the database. Any field/value pairs in the specified values HashMap will be written into the
@ -121,7 +122,7 @@ public abstract class DB {
* @param values A HashMap of field/value pairs to insert in the record
* @return The result of the operation.
*/
public abstract Status insert(String table, String key, HashMap<String, ByteIterator> values);
public abstract Status insert(String table, String key, Map<String, ByteIterator> values);
/**
* Delete a record from the database.

View file

@ -17,6 +17,7 @@
package com.yahoo.ycsb;
import java.util.Map;
import com.yahoo.ycsb.measurements.Measurements;
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;
@ -33,7 +34,7 @@ public class DBWrapper extends DB {
private final Tracer tracer;
private boolean reportLatencyForEachError = false;
private HashSet<String> latencyTrackedErrors = new HashSet<String>();
private Set<String> latencyTrackedErrors = new HashSet<String>();
private static final String REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY = "reportlatencyforeacherror";
private static final String REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY_DEFAULT = "false";
@ -127,7 +128,7 @@ public class DBWrapper extends DB {
* @return The result of the operation.
*/
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try (final TraceScope span = tracer.newScope(scopeStringRead)) {
long ist = measurements.getIntendedtartTimeNs();
long st = System.nanoTime();
@ -190,7 +191,7 @@ public class DBWrapper extends DB {
* @return The result of the operation.
*/
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try (final TraceScope span = tracer.newScope(scopeStringUpdate)) {
long ist = measurements.getIntendedtartTimeNs();
long st = System.nanoTime();
@ -213,7 +214,7 @@ public class DBWrapper extends DB {
* @return The result of the operation.
*/
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try (final TraceScope span = tracer.newScope(scopeStringInsert)) {
long ist = measurements.getIntendedtartTimeNs();
long st = System.nanoTime();

View file

@ -18,6 +18,7 @@
package com.yahoo.ycsb;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.Set;
import java.util.Vector;
@ -93,7 +94,7 @@ public class GoodBadUglyDB extends DB {
* @param result A HashMap of field/value pairs for the result
* @return Zero on success, a non-zero error code on error
*/
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
delay();
return Status.OK;
}
@ -125,7 +126,7 @@ public class GoodBadUglyDB extends DB {
* @param values A HashMap of field/value pairs to update in the record
* @return Zero on success, a non-zero error code on error
*/
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
delay();
return Status.OK;
@ -140,7 +141,7 @@ public class GoodBadUglyDB extends DB {
* @param values A HashMap of field/value pairs to insert in the record
* @return Zero on success, a non-zero error code on error
*/
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
delay();
return Status.OK;
}

View file

@ -16,6 +16,7 @@
*/
package com.yahoo.ycsb;
import java.io.IOException;
import java.io.InputStream;
/**
@ -25,11 +26,16 @@ public class InputStreamByteIterator extends ByteIterator {
private long len;
private InputStream ins;
private long off;
private final boolean resetable;
public InputStreamByteIterator(InputStream ins, long len) {
this.len = len;
this.ins = ins;
off = 0;
resetable = ins.markSupported();
if (resetable) {
ins.mark((int) len);
}
}
@Override
@ -57,4 +63,17 @@ public class InputStreamByteIterator extends ByteIterator {
return len - off;
}
@Override
public void reset() {
if (resetable) {
try {
ins.reset();
ins.mark((int) len);
off = 0;
} catch (IOException e) {
throw new IllegalStateException("Failed to reset the input stream", e);
}
} else {
throw new UnsupportedOperationException();
}
}
}
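reset() above leans on the standard mark()/reset() contract: mark() pins the current position and reset() rewinds to it, so a markSupported() stream can be re-read. A minimal standalone illustration (class name hypothetical; ByteArrayInputStream is used because it always supports marking):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical demo of the mark()/reset() contract the iterator relies on.
public final class MarkResetDemo {
  // Reads the first byte, rewinds via reset(), and reads it again.
  static int[] readTwice(byte[] data) {
    ByteArrayInputStream ins = new ByteArrayInputStream(data);
    // ins.markSupported() is true for ByteArrayInputStream.
    ins.mark(data.length);
    int first = ins.read();
    ins.reset(); // rewind to the marked position
    int again = ins.read();
    return new int[] {first, again};
  }

  public static void main(String[] args) {
    int[] r = readTwice("ycsb".getBytes(StandardCharsets.UTF_8));
    System.out.println(r[0] == r[1]); // true: the same byte is read twice
  }
}
```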

View file

@ -93,4 +93,10 @@ public class RandomByteIterator extends ByteIterator {
public long bytesLeft() {
return len - off - bufOff;
}
@Override
public void reset() {
off = 0;
}
}

View file

@ -51,7 +51,7 @@ public class StringByteIterator extends ByteIterator {
* Create a copy of a map, converting the values from Strings to
* StringByteIterators.
*/
public static HashMap<String, ByteIterator> getByteIteratorMap(Map<String, String> m) {
public static Map<String, ByteIterator> getByteIteratorMap(Map<String, String> m) {
HashMap<String, ByteIterator> ret =
new HashMap<String, ByteIterator>();
@ -65,7 +65,7 @@ public class StringByteIterator extends ByteIterator {
* Create a copy of a map, converting the values from
* StringByteIterators to Strings.
*/
public static HashMap<String, String> getStringMap(Map<String, ByteIterator> m) {
public static Map<String, String> getStringMap(Map<String, ByteIterator> m) {
HashMap<String, String> ret = new HashMap<String, String>();
for (Map.Entry<String, ByteIterator> entry : m.entrySet()) {
@ -96,6 +96,11 @@ public class StringByteIterator extends ByteIterator {
return str.length() - off;
}
@Override
public void reset() {
off = 0;
}
/**
* Specialization of general purpose toString() to avoid unnecessary
* copies.

View file

@ -226,4 +226,19 @@ public final class Utils {
}
return map;
}
/**
* Simple Fisher-Yates array shuffle to randomize discrete sets.
* @param array The array to randomly shuffle.
* @return The shuffled array.
*/
public static <T> T [] shuffleArray(final T[] array) {
for (int i = array.length - 1; i > 0; i--) {
final int idx = RAND.nextInt(i + 1);
final T temp = array[idx];
array[idx] = array[i];
array[i] = temp;
}
return array;
}
}
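The Fisher-Yates loop above swaps each element with a uniformly chosen earlier (or same) slot, which yields every permutation with equal probability. A self-contained version of the same loop (class name and fixed seed are illustrative):

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical standalone copy of Utils.shuffleArray for demonstration.
public final class ShuffleDemo {
  private static final Random RAND = new Random(42); // fixed seed, demo only

  static <T> T[] shuffleArray(final T[] array) {
    for (int i = array.length - 1; i > 0; i--) {
      final int idx = RAND.nextInt(i + 1); // uniform over [0, i]
      final T temp = array[idx];
      array[idx] = array[i];
      array[i] = temp;
    }
    return array;
  }

  public static void main(String[] args) {
    Integer[] keys = {0, 1, 2, 3, 4, 5, 6, 7};
    shuffleArray(keys); // in place: same elements, new order
    System.out.println(Arrays.toString(keys));
  }
}
```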

View file

@ -43,6 +43,15 @@ public abstract class Workload {
private volatile AtomicBoolean stopRequested = new AtomicBoolean(false);
/** Operations available for a database. */
public enum Operation {
READ,
UPDATE,
INSERT,
SCAN,
DELETE
}
/**
* Initialize the scenario. Create any generators and other shared objects here.
* Called once, in the main client thread, before any operations are started.

View file

@ -31,12 +31,12 @@ public class AcknowledgedCounterGenerator extends CounterGenerator {
private final ReentrantLock lock;
private final boolean[] window;
private volatile int limit;
private volatile long limit;
/**
* Create a counter that starts at countstart.
*/
public AcknowledgedCounterGenerator(int countstart) {
public AcknowledgedCounterGenerator(long countstart) {
super(countstart);
lock = new ReentrantLock();
window = new boolean[WINDOW_SIZE];
@ -48,15 +48,15 @@ public class AcknowledgedCounterGenerator extends CounterGenerator {
* (as opposed to the highest generated counter value).
*/
@Override
public Integer lastValue() {
public Long lastValue() {
return limit;
}
/**
* Make a generated counter value available via lastInt().
*/
public void acknowledge(int value) {
final int currentSlot = (value & WINDOW_MASK);
public void acknowledge(long value) {
final int currentSlot = (int)(value & WINDOW_MASK);
if (window[currentSlot]) {
throw new RuntimeException("Too many unacknowledged insertion keys.");
}
@ -68,10 +68,10 @@ public class AcknowledgedCounterGenerator extends CounterGenerator {
// over to the "limit" variable
try {
// Only loop through the entire window at most once.
int beforeFirstSlot = (limit & WINDOW_MASK);
int index;
long beforeFirstSlot = (limit & WINDOW_MASK);
long index;
for (index = limit + 1; index != beforeFirstSlot; ++index) {
int slot = (index & WINDOW_MASK);
int slot = (int)(index & WINDOW_MASK);
if (!window[slot]) {
break;
}
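The widened acknowledge() above narrows a long counter to an int slot with `value & WINDOW_MASK`, which is safe as long as the window size is a power of two: the masked result always fits in the window. A sketch (class name and window size hypothetical):

```java
// Hypothetical demo: mapping a long counter value into a bounded int slot,
// as AcknowledgedCounterGenerator does with value & WINDOW_MASK.
public final class WindowSlotDemo {
  static final int WINDOW_SIZE = 1 << 20;        // must be a power of two
  static final int WINDOW_MASK = WINDOW_SIZE - 1;

  static int slotFor(long value) {
    // Masking first keeps the result in [0, WINDOW_SIZE), so the narrowing
    // cast to int can never overflow, even for values past Integer.MAX_VALUE.
    return (int) (value & WINDOW_MASK);
  }

  public static void main(String[] args) {
    System.out.println(slotFor(5L));               // 5
    System.out.println(slotFor((1L << 31) + 4L));  // 4: wraps inside the window
  }
}
```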

View file

@ -1,5 +1,5 @@
/**
* Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.
* Copyright (c) 2010 Yahoo! Inc., Copyright (c) 2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -17,29 +17,29 @@
package com.yahoo.ycsb.generator;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
/**
* Generates a sequence of integers.
* (0, 1, ...)
*/
public class CounterGenerator extends NumberGenerator {
private final AtomicInteger counter;
private final AtomicLong counter;
/**
* Create a counter that starts at countstart.
*/
public CounterGenerator(int countstart) {
counter = new AtomicInteger(countstart);
public CounterGenerator(long countstart) {
counter = new AtomicLong(countstart);
}
@Override
public Integer nextValue() {
public Long nextValue() {
return counter.getAndIncrement();
}
@Override
public Integer lastValue() {
public Long lastValue() {
return counter.get() - 1;
}

View file

@ -1,5 +1,5 @@
/**
* Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.
* Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -31,10 +31,10 @@ import java.util.Random;
*/
public class HotspotIntegerGenerator extends NumberGenerator {
private final int lowerBound;
private final int upperBound;
private final int hotInterval;
private final int coldInterval;
private final long lowerBound;
private final long upperBound;
private final long hotInterval;
private final long coldInterval;
private final double hotsetFraction;
private final double hotOpnFraction;
@ -46,7 +46,7 @@ public class HotspotIntegerGenerator extends NumberGenerator {
* @param hotsetFraction percentage of data item
* @param hotOpnFraction percentage of operations accessing the hot set.
*/
public HotspotIntegerGenerator(int lowerBound, int upperBound,
public HotspotIntegerGenerator(long lowerBound, long upperBound,
double hotsetFraction, double hotOpnFraction) {
if (hotsetFraction < 0.0 || hotsetFraction > 1.0) {
System.err.println("Hotset fraction out of range. Setting to 0.0");
@ -59,29 +59,29 @@ public class HotspotIntegerGenerator extends NumberGenerator {
if (lowerBound > upperBound) {
System.err.println("Upper bound of Hotspot generator smaller than the lower bound. " +
"Swapping the values.");
int temp = lowerBound;
long temp = lowerBound;
lowerBound = upperBound;
upperBound = temp;
}
this.lowerBound = lowerBound;
this.upperBound = upperBound;
this.hotsetFraction = hotsetFraction;
int interval = upperBound - lowerBound + 1;
long interval = upperBound - lowerBound + 1;
this.hotInterval = (long) (interval * hotsetFraction);
this.coldInterval = interval - hotInterval;
this.hotOpnFraction = hotOpnFraction;
}
@Override
public Integer nextValue() {
int value = 0;
public Long nextValue() {
long value = 0;
Random random = Utils.random();
if (random.nextDouble() < hotOpnFraction) {
// Choose a value from the hot set.
value = lowerBound + random.nextInt(hotInterval);
value = lowerBound + Math.floorMod(Utils.random().nextLong(), hotInterval);
} else {
// Choose a value from the cold set.
value = lowerBound + hotInterval + Math.floorMod(Utils.random().nextLong(), coldInterval);
}
setLastValue(value);
return value;
@ -90,14 +90,14 @@ public class HotspotIntegerGenerator extends NumberGenerator {
/**
* @return the lowerBound
*/
public int getLowerBound() {
public long getLowerBound() {
return lowerBound;
}
/**
* @return the upperBound
*/
public int getUpperBound() {
public long getUpperBound() {
return upperBound;
}

View file

@ -17,38 +17,39 @@
package com.yahoo.ycsb.generator;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
/**
* Generates a sequence of integers 0, 1, ...
*/
public class SequentialGenerator extends NumberGenerator {
protected final AtomicInteger counter;
protected int interval, countstart;
private final AtomicLong counter;
private long interval;
private long countstart;
/**
* Create a counter that starts at countstart.
*/
public SequentialGenerator(int countstart, int countend) {
counter = new AtomicInteger();
public SequentialGenerator(long countstart, long countend) {
counter = new AtomicLong();
setLastValue(counter.get());
this.countstart = countstart;
interval = countend - countstart + 1;
}
/**
* If the generator returns numeric (integer) values, return the next value as an int.
* If the generator returns numeric (long) values, return the next value as a long.
* Default is to return -1, which is appropriate for generators that do not return numeric values.
*/
public int nextInt() {
int ret = countstart + counter.getAndIncrement() % interval;
public long nextLong() {
long ret = countstart + counter.getAndIncrement() % interval;
setLastValue(ret);
return ret;
}
@Override
public Number nextValue() {
int ret = countstart + counter.getAndIncrement() % interval;
long ret = countstart + counter.getAndIncrement() % interval;
setLastValue(ret);
return ret;
}
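The sequential generator cycles through [countstart, countend] by taking the running counter modulo the interval size. The arithmetic in isolation (class name hypothetical):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical demo of SequentialGenerator's wraparound arithmetic.
public final class SequentialDemo {
  // Returns the first n values of the sequence countstart + counter % interval.
  static long[] sequence(long countstart, long countend, int n) {
    long interval = countend - countstart + 1;
    AtomicLong counter = new AtomicLong();
    long[] out = new long[n];
    for (int i = 0; i < n; i++) {
      out[i] = countstart + counter.getAndIncrement() % interval;
    }
    return out;
  }

  public static void main(String[] args) {
    // Cycles 100, 101, 102, then wraps back to 100.
    System.out.println(java.util.Arrays.toString(sequence(100, 102, 5)));
    // [100, 101, 102, 100, 101]
  }
}
```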

View file

@ -1,5 +1,5 @@
/**
* Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.
* Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -27,7 +27,7 @@ import java.util.List;
public class UniformGenerator extends Generator<String> {
private final List<String> values;
private String laststring;
private final UniformIntegerGenerator gen;
private final UniformLongGenerator gen;
/**
* Creates a generator that will return strings from the specified set uniformly randomly.
@ -35,7 +35,7 @@ public class UniformGenerator extends Generator<String> {
public UniformGenerator(Collection<String> values) {
this.values = new ArrayList<>(values);
laststring = null;
gen = new UniformIntegerGenerator(0, values.size() - 1);
gen = new UniformLongGenerator(0, values.size() - 1);
}
/**
@ -43,7 +43,7 @@ public class UniformGenerator extends Generator<String> {
*/
@Override
public String nextValue() {
laststring = values.get(gen.nextValue());
laststring = values.get(gen.nextValue().intValue());
return laststring;
}

View file

@ -1,5 +1,5 @@
/**
* Copyright (c) 2010-2016 Yahoo! Inc., 2017 YCSB contributors. All rights reserved.
* Copyright (c) 2010 Yahoo! Inc. Copyright (c) 2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -20,27 +20,28 @@ package com.yahoo.ycsb.generator;
import com.yahoo.ycsb.Utils;
/**
* Generates integers randomly uniform from an interval.
* Generates longs randomly uniform from an interval.
*/
public class UniformIntegerGenerator extends NumberGenerator {
private final int lb, ub, interval;
public class UniformLongGenerator extends NumberGenerator {
private final long lb, ub, interval;
/**
* Creates a generator that will return integers uniformly randomly from the interval [lb,ub] inclusive.
* Creates a generator that will return longs uniformly randomly from the
* interval [lb,ub] inclusive (that is, lb and ub are possible values).
*
* @param lb the lower bound (inclusive) of generated values
* @param ub the upper bound (inclusive) of generated values
*/
public UniformIntegerGenerator(int lb, int ub) {
public UniformLongGenerator(long lb, long ub) {
this.lb = lb;
this.ub = ub;
interval = this.ub - this.lb + 1;
}
@Override
public Integer nextValue() {
int ret = Utils.random().nextInt(interval) + lb;
public Long nextValue() {
long ret = Math.floorMod(Utils.random().nextLong(), interval) + lb;
setLastValue(ret);
return ret;
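One caveat when mapping a random long into an interval: `Math.abs(Long.MIN_VALUE)` overflows back to a negative value, so `Math.abs(x) % n` can itself be negative on that one input. `Math.floorMod` avoids the edge case (class name hypothetical):

```java
// Hypothetical demo: why Math.floorMod is the safe way to reduce a random
// long into [0, n), while Math.abs(x) % n has a one-in-2^64 failure mode.
public final class FloorModDemo {
  static long safeIndex(long x, long n) {
    return Math.floorMod(x, n); // always in [0, n) for n > 0
  }

  public static void main(String[] args) {
    long n = 10;
    // abs(Long.MIN_VALUE) overflows and stays negative:
    System.out.println(Math.abs(Long.MIN_VALUE) % n); // -8
    System.out.println(safeIndex(Long.MIN_VALUE, n)); // 2
  }
}
```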

View file

@ -19,6 +19,7 @@ package com.yahoo.ycsb.measurements;
import com.yahoo.ycsb.measurements.exporter.MeasurementsExporter;
import org.HdrHistogram.Histogram;
import org.HdrHistogram.HistogramIterationValue;
import org.HdrHistogram.HistogramLogWriter;
import org.HdrHistogram.Recorder;
@ -112,6 +113,18 @@ public class OneMeasurementHdrHistogram extends OneMeasurement {
}
exportStatusCounts(exporter);
// also export totalHistogram
for (HistogramIterationValue v : totalHistogram.recordedValues()) {
int value;
if (v.getValueIteratedTo() > (long)Integer.MAX_VALUE) {
value = Integer.MAX_VALUE;
} else {
value = (int)v.getValueIteratedTo();
}
exporter.write(getName(), Integer.toString(value), (double)v.getCountAtValueIteratedTo());
}
}
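The export loop above caps each recorded histogram value at Integer.MAX_VALUE before narrowing to int. That saturating conversion on its own (class name hypothetical):

```java
// Hypothetical demo of the saturating long-to-int cast used when exporting
// totalHistogram values.
public final class SaturatingCastDemo {
  static int saturate(long v) {
    return v > (long) Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) v;
  }

  public static void main(String[] args) {
    System.out.println(saturate(42L));      // 42
    System.out.println(saturate(1L << 40)); // 2147483647 (capped)
  }
}
```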
/**

View file

@ -39,17 +39,17 @@ public class OneMeasurementHistogram extends OneMeasurement {
/**
* Groups operations in discrete blocks of 1ms width.
*/
private final int[] histogram;
private long[] histogram;
/**
* Counts all operations outside the histogram's range.
*/
private int histogramoverflow;
private long histogramoverflow;
/**
* The total number of reported operations.
*/
private int operations;
private long operations;
/**
* The sum of each latency measurement over all operations.
@ -65,7 +65,7 @@ public class OneMeasurementHistogram extends OneMeasurement {
private double totalsquaredlatency;
//keep a windowed version of these stats for printing status
private int windowoperations;
private long windowoperations;
private long windowtotallatency;
private int min;
@ -74,7 +74,7 @@ public class OneMeasurementHistogram extends OneMeasurement {
public OneMeasurementHistogram(String name, Properties props) {
super(name);
buckets = Integer.parseInt(props.getProperty(BUCKETS, BUCKETS_DEFAULT));
histogram = new int[buckets];
histogram = new long[buckets];
histogramoverflow = 0;
operations = 0;
totallatency = 0;
@ -120,7 +120,7 @@ public class OneMeasurementHistogram extends OneMeasurement {
exporter.write(getName(), "MinLatency(us)", min);
exporter.write(getName(), "MaxLatency(us)", max);
int opcounter = 0;
long opcounter = 0;
boolean done95th = false;
for (int i = 0; i < buckets; i++) {
opcounter += histogram[i];


@ -54,9 +54,9 @@ public class OneMeasurementTimeSeries extends OneMeasurement {
private long start = -1;
private long currentunit = -1;
private int count = 0;
private int sum = 0;
private int operations = 0;
private long count = 0;
private long sum = 0;
private long operations = 0;
private long totallatency = 0;
//keep a windowed version of these stats for printing status


@ -47,6 +47,14 @@ public class JSONArrayMeasurementsExporter implements MeasurementsExporter {
g.writeEndObject();
}
public void write(String metric, String measurement, long i) throws IOException {
g.writeStartObject();
g.writeStringField("metric", metric);
g.writeStringField("measurement", measurement);
g.writeNumberField("value", i);
g.writeEndObject();
}
public void write(String metric, String measurement, double d) throws IOException {
g.writeStartObject();
g.writeStringField("metric", metric);


@ -48,6 +48,14 @@ public class JSONMeasurementsExporter implements MeasurementsExporter {
g.writeEndObject();
}
public void write(String metric, String measurement, long i) throws IOException {
g.writeStartObject();
g.writeStringField("metric", metric);
g.writeStringField("measurement", measurement);
g.writeNumberField("value", i);
g.writeEndObject();
}
public void write(String metric, String measurement, double d) throws IOException {
g.writeStartObject();
g.writeStringField("metric", metric);


@ -39,6 +39,16 @@ public interface MeasurementsExporter extends Closeable {
*
* @param metric Metric name, for example "READ LATENCY".
* @param measurement Measurement name, for example "Average latency".
* @param i Measurement to write.
* @throws IOException if writing failed
*/
void write(String metric, String measurement, long i) throws IOException;
/**
* Write a measurement to the exported format.
*
* @param metric Metric name, for example "READ LATENCY".
* @param measurement Measurement name, for example "Average latency".
* @param d Measurement to write.
* @throws IOException if writing failed
*/
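With the new overload, every exporter must handle `long` measurements alongside `double`. A minimal sketch against a reduced copy of the interface; the output layout mirrors the text exporter, but the class names here are illustrative:

```java
import java.io.IOException;

public final class LongOverloadDemo {
  // Reduced copy of the new MeasurementsExporter long overload.
  interface MeasurementsExporter {
    void write(String metric, String measurement, long i) throws IOException;
  }

  // Same "[METRIC], measurement, value" layout the text exporter emits.
  static String format(String metric, String measurement, long i) {
    return "[" + metric + "], " + measurement + ", " + i;
  }

  public static void main(String[] args) throws IOException {
    MeasurementsExporter e = (m, s, i) -> System.out.println(format(m, s, i));
    e.write("READ", "Operations", 5_000_000_000L); // value exceeds Integer.MAX_VALUE
  }
}
```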


@ -36,6 +36,11 @@ public class TextMeasurementsExporter implements MeasurementsExporter {
bw.newLine();
}
public void write(String metric, String measurement, long i) throws IOException {
bw.write("[" + metric + "], " + measurement + ", " + i);
bw.newLine();
}
public void write(String metric, String measurement, double d) throws IOException {
bw.write("[" + metric + "], " + measurement + ", " + d);
bw.newLine();


@ -1,5 +1,5 @@
/**
* Copyright (c) 2010 Yahoo! Inc., Copyright (c) 2016 YCSB contributors. All rights reserved.
* Copyright (c) 2010 Yahoo! Inc., Copyright (c) 2016-2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -19,6 +19,7 @@ package com.yahoo.ycsb.workloads;
import com.yahoo.ycsb.*;
import com.yahoo.ycsb.generator.*;
import com.yahoo.ycsb.generator.UniformLongGenerator;
import com.yahoo.ycsb.measurements.Measurements;
import java.io.IOException;
@ -82,9 +83,7 @@ public class CoreWorkload extends Workload {
* Default number of fields in a record.
*/
public static final String FIELD_COUNT_PROPERTY_DEFAULT = "10";
protected int fieldcount;
private List<String> fieldnames;
/**
@ -315,7 +314,8 @@ public class CoreWorkload extends Workload {
protected AcknowledgedCounterGenerator transactioninsertkeysequence;
protected NumberGenerator scanlength;
protected boolean orderedinserts;
protected int recordcount;
protected long fieldcount;
protected long recordcount;
protected int zeropadding;
protected int insertionRetryLimit;
protected int insertionRetryInterval;
@ -333,7 +333,7 @@ public class CoreWorkload extends Workload {
if (fieldlengthdistribution.compareTo("constant") == 0) {
fieldlengthgenerator = new ConstantIntegerGenerator(fieldlength);
} else if (fieldlengthdistribution.compareTo("uniform") == 0) {
fieldlengthgenerator = new UniformIntegerGenerator(1, fieldlength);
fieldlengthgenerator = new UniformLongGenerator(1, fieldlength);
} else if (fieldlengthdistribution.compareTo("zipfian") == 0) {
fieldlengthgenerator = new ZipfianGenerator(1, fieldlength);
} else if (fieldlengthdistribution.compareTo("histogram") == 0) {
@ -359,7 +359,7 @@ public class CoreWorkload extends Workload {
table = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);
fieldcount =
Integer.parseInt(p.getProperty(FIELD_COUNT_PROPERTY, FIELD_COUNT_PROPERTY_DEFAULT));
Long.parseLong(p.getProperty(FIELD_COUNT_PROPERTY, FIELD_COUNT_PROPERTY_DEFAULT));
fieldnames = new ArrayList<>();
for (int i = 0; i < fieldcount; i++) {
fieldnames.add("field" + i);
@ -367,7 +367,7 @@ public class CoreWorkload extends Workload {
fieldlengthgenerator = CoreWorkload.getFieldLengthGenerator(p);
recordcount =
Integer.parseInt(p.getProperty(Client.RECORD_COUNT_PROPERTY, Client.DEFAULT_RECORD_COUNT));
Long.parseLong(p.getProperty(Client.RECORD_COUNT_PROPERTY, Client.DEFAULT_RECORD_COUNT));
if (recordcount == 0) {
recordcount = Integer.MAX_VALUE;
}
@ -378,9 +378,9 @@ public class CoreWorkload extends Workload {
String scanlengthdistrib =
p.getProperty(SCAN_LENGTH_DISTRIBUTION_PROPERTY, SCAN_LENGTH_DISTRIBUTION_PROPERTY_DEFAULT);
int insertstart =
Integer.parseInt(p.getProperty(INSERT_START_PROPERTY, INSERT_START_PROPERTY_DEFAULT));
int insertcount =
long insertstart =
Long.parseLong(p.getProperty(INSERT_START_PROPERTY, INSERT_START_PROPERTY_DEFAULT));
long insertcount =
Integer.parseInt(p.getProperty(INSERT_COUNT_PROPERTY, String.valueOf(recordcount - insertstart)));
// Confirm valid values for insertstart and insertcount in relation to recordcount
if (recordcount < (insertstart + insertcount)) {
@ -426,7 +426,7 @@ public class CoreWorkload extends Workload {
transactioninsertkeysequence = new AcknowledgedCounterGenerator(recordcount);
if (requestdistrib.compareTo("uniform") == 0) {
keychooser = new UniformIntegerGenerator(insertstart, insertstart + insertcount - 1);
keychooser = new UniformLongGenerator(insertstart, insertstart + insertcount - 1);
} else if (requestdistrib.compareTo("sequential") == 0) {
keychooser = new SequentialGenerator(insertstart, insertstart + insertcount - 1);
} else if (requestdistrib.compareTo("zipfian") == 0) {
@ -458,10 +458,10 @@ public class CoreWorkload extends Workload {
throw new WorkloadException("Unknown request distribution \"" + requestdistrib + "\"");
}
fieldchooser = new UniformIntegerGenerator(0, fieldcount - 1);
fieldchooser = new UniformLongGenerator(0, fieldcount - 1);
if (scanlengthdistrib.compareTo("uniform") == 0) {
scanlength = new UniformIntegerGenerator(1, maxscanlength);
scanlength = new UniformLongGenerator(1, maxscanlength);
} else if (scanlengthdistrib.compareTo("zipfian") == 0) {
scanlength = new ZipfianGenerator(1, maxscanlength);
} else {
@ -646,8 +646,8 @@ public class CoreWorkload extends Workload {
measurements.reportStatus("VERIFY", verifyStatus);
}
protected int nextKeynum() {
int keynum;
long nextKeynum() {
long keynum;
if (keychooser instanceof ExponentialGenerator) {
do {
keynum = transactioninsertkeysequence.lastValue() - keychooser.nextValue().intValue();
@ -662,7 +662,7 @@ public class CoreWorkload extends Workload {
public void doTransactionRead(DB db) {
// choose a random key
int keynum = nextKeynum();
long keynum = nextKeynum();
String keyname = buildKeyName(keynum);
@ -689,7 +689,7 @@ public class CoreWorkload extends Workload {
public void doTransactionReadModifyWrite(DB db) {
// choose a random key
int keynum = nextKeynum();
long keynum = nextKeynum();
String keyname = buildKeyName(keynum);
@ -736,7 +736,7 @@ public class CoreWorkload extends Workload {
public void doTransactionScan(DB db) {
// choose a random key
int keynum = nextKeynum();
long keynum = nextKeynum();
String startkeyname = buildKeyName(keynum);
@ -758,7 +758,7 @@ public class CoreWorkload extends Workload {
public void doTransactionUpdate(DB db) {
// choose a random key
int keynum = nextKeynum();
long keynum = nextKeynum();
String keyname = buildKeyName(keynum);
@ -777,7 +777,7 @@ public class CoreWorkload extends Workload {
public void doTransactionInsert(DB db) {
// choose the next key
int keynum = transactioninsertkeysequence.nextValue();
long keynum = transactioninsertkeysequence.nextValue();
try {
String dbkey = buildKeyName(keynum);
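The int-to-long migration follows one pattern throughout `CoreWorkload`: parse sizing properties with `Long.parseLong` and draw keys from a long-range generator. A minimal sketch of that pattern, where `UniformLong` is a stand-in for YCSB's `UniformLongGenerator`, not the real class:

```java
import java.util.Properties;
import java.util.concurrent.ThreadLocalRandom;

public final class LongKeyspaceDemo {
  // Stand-in for YCSB's UniformLongGenerator: uniform values in [lb, ub].
  static final class UniformLong {
    private final long lb, ub;
    UniformLong(long lb, long ub) { this.lb = lb; this.ub = ub; }
    long nextValue() { return ThreadLocalRandom.current().nextLong(lb, ub + 1); }
  }

  public static void main(String[] args) {
    Properties p = new Properties();
    p.setProperty("recordcount", "5000000000"); // larger than Integer.MAX_VALUE
    long recordcount = Long.parseLong(p.getProperty("recordcount", "0"));
    UniformLong keychooser = new UniformLong(0, recordcount - 1);
    long key = keychooser.nextValue();
    System.out.println(key >= 0 && key < recordcount); // prints true
  }
}
```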


@ -1,5 +1,5 @@
/**
* Copyright (c) 2016 YCSB contributors. All rights reserved.
* Copyright (c) 2016-2017 YCSB contributors. All rights reserved.
* <p>
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -30,6 +30,7 @@ import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import com.yahoo.ycsb.generator.UniformLongGenerator;
/**
* Typical RESTFul services benchmarking scenario. Represents a set of client
* calling REST operations like HTTP DELETE, GET, POST, PUT on a web service.
@ -171,7 +172,7 @@ public class RestWorkload extends CoreWorkload {
keychooser = new ExponentialGenerator(percentile, recordCount * frac);
break;
case "uniform":
keychooser = new UniformIntegerGenerator(0, recordCount - 1);
keychooser = new UniformLongGenerator(0, recordCount - 1);
break;
case "zipfian":
keychooser = new ZipfianGenerator(recordCount, zipfContant);


@ -1,5 +1,5 @@
/**
* Copyright (c) 2015 YCSB contributors. All rights reserved.
* Copyright (c) 2015-2017 YCSB contributors. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
@ -38,19 +38,19 @@ public class AcknowledgedCounterGeneratorTest {
new AcknowledgedCounterGenerator(Integer.MAX_VALUE - 1000);
Random rand = new Random(System.currentTimeMillis());
BlockingQueue<Integer> pending = new ArrayBlockingQueue<Integer>(1000);
BlockingQueue<Long> pending = new ArrayBlockingQueue<Long>(1000);
for (long i = 0; i < toTry; ++i) {
int value = generator.nextValue();
long value = generator.nextValue();
while (!pending.offer(value)) {
Integer first = pending.poll();
Long first = pending.poll();
// Don't always advance by one.
if (rand.nextBoolean()) {
generator.acknowledge(first);
} else {
Integer second = pending.poll();
Long second = pending.poll();
pending.add(first);
generator.acknowledge(second);
}
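The test above tracks unacknowledged values in a bounded `BlockingQueue<Long>`: `offer` fails once the queue is full, which is the signal to drain and acknowledge the oldest pending value. A simplified standalone sketch of that bookkeeping (it drops the test's retry loop; the class and method names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public final class PendingQueueDemo {
  // Offer a value; if the bounded queue is full, drain and return the oldest
  // pending value so the caller can acknowledge it. Returns null if accepted.
  static Long offerOrDrain(BlockingQueue<Long> pending, long value) {
    if (pending.offer(value)) {
      return null; // accepted, nothing to acknowledge yet
    }
    return pending.poll(); // queue full: hand back the oldest pending value
  }

  public static void main(String[] args) {
    BlockingQueue<Long> pending = new ArrayBlockingQueue<>(2);
    System.out.println(offerOrDrain(pending, 1L)); // null (accepted)
    System.out.println(offerOrDrain(pending, 2L)); // null (accepted)
    System.out.println(offerOrDrain(pending, 3L)); // 1 (full, oldest drained)
  }
}
```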


@ -180,7 +180,7 @@ public class CouchbaseClient extends DB {
@Override
public Status read(final String table, final String key, final Set<String> fields,
final HashMap<String, ByteIterator> result) {
final Map<String, ByteIterator> result) {
String formattedKey = formatKey(table, key);
try {
@ -225,7 +225,7 @@ public class CouchbaseClient extends DB {
}
@Override
public Status update(final String table, final String key, final HashMap<String, ByteIterator> values) {
public Status update(final String table, final String key, final Map<String, ByteIterator> values) {
String formattedKey = formatKey(table, key);
try {
@ -240,7 +240,7 @@ public class CouchbaseClient extends DB {
}
@Override
public Status insert(final String table, final String key, final HashMap<String, ByteIterator> values) {
public Status insert(final String table, final String key, final Map<String, ByteIterator> values) {
String formattedKey = formatKey(table, key);
try {
@ -301,7 +301,7 @@ public class CouchbaseClient extends DB {
* @param fields the fields to check.
* @param dest the result passed back to the ycsb core.
*/
private void decode(final Object source, final Set<String> fields, final HashMap<String, ByteIterator> dest) {
private void decode(final Object source, final Set<String> fields, final Map<String, ByteIterator> dest) {
if (useJson) {
try {
JsonNode json = JSON_MAPPER.readTree((String) source);
@ -321,7 +321,7 @@ public class CouchbaseClient extends DB {
throw new RuntimeException("Could not decode JSON");
}
} else {
HashMap<String, String> converted = (HashMap<String, String>) source;
Map<String, String> converted = (HashMap<String, String>) source;
for (Map.Entry<String, String> entry : converted.entrySet()) {
dest.put(entry.getKey(), new StringByteIterator(entry.getValue()));
}
@ -334,8 +334,8 @@ public class CouchbaseClient extends DB {
* @param source the source value.
* @return the storable object.
*/
private Object encode(final HashMap<String, ByteIterator> source) {
HashMap<String, String> stringMap = StringByteIterator.getStringMap(source);
private Object encode(final Map<String, ByteIterator> source) {
Map<String, String> stringMap = StringByteIterator.getStringMap(source);
if (!useJson) {
return stringMap;
}
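These signature changes apply the squid:S1319 rule from this merge: declare parameters and locals as the `Map` interface rather than `HashMap`, so callers may pass any implementation. A minimal sketch of the difference (the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public final class MapInterfaceDemo {
  // Declaring the parameter as Map (not HashMap) accepts any implementation.
  static int count(Map<String, String> values) {
    return values.size();
  }

  public static void main(String[] args) {
    Map<String, String> hash = new HashMap<>();
    hash.put("field0", "a");
    Map<String, String> tree = new TreeMap<>(hash);
    // Both implementations satisfy the interface-typed parameter.
    System.out.println(count(hash) + count(tree)); // prints 2
  }
}
```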


@ -241,7 +241,7 @@ public class Couchbase2Client extends DB {
@Override
public Status read(final String table, final String key, Set<String> fields,
final HashMap<String, ByteIterator> result) {
final Map<String, ByteIterator> result) {
try {
String docId = formatId(table, key);
if (kv) {
@ -256,14 +256,14 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #read(String, String, Set, HashMap)} operation via Key/Value ("get").
* Performs the {@link #read(String, String, Set, Map)} operation via Key/Value ("get").
*
* @param docId the document ID
* @param fields the fields to be loaded
* @param result the result map where the doc needs to be converted into
* @return The result of the operation.
*/
private Status readKv(final String docId, final Set<String> fields, final HashMap<String, ByteIterator> result)
private Status readKv(final String docId, final Set<String> fields, final Map<String, ByteIterator> result)
throws Exception {
RawJsonDocument loaded = bucket.get(docId, RawJsonDocument.class);
if (loaded == null) {
@ -274,7 +274,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #read(String, String, Set, HashMap)} operation via N1QL ("SELECT").
* Performs the {@link #read(String, String, Set, Map)} operation via N1QL ("SELECT").
*
* If this option should be used, the "-p couchbase.kv=false" property must be set.
*
@ -283,7 +283,7 @@ public class Couchbase2Client extends DB {
* @param result the result map where the doc needs to be converted into
* @return The result of the operation.
*/
private Status readN1ql(final String docId, Set<String> fields, final HashMap<String, ByteIterator> result)
private Status readN1ql(final String docId, Set<String> fields, final Map<String, ByteIterator> result)
throws Exception {
String readQuery = "SELECT " + joinFields(fields) + " FROM `" + bucketName + "` USE KEYS [$1]";
N1qlQueryResult queryResult = bucket.query(N1qlQuery.parameterized(
@ -319,7 +319,7 @@ public class Couchbase2Client extends DB {
}
@Override
public Status update(final String table, final String key, final HashMap<String, ByteIterator> values) {
public Status update(final String table, final String key, final Map<String, ByteIterator> values) {
if (upsert) {
return upsert(table, key, values);
}
@ -338,13 +338,13 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #update(String, String, HashMap)} operation via Key/Value ("replace").
* Performs the {@link #update(String, String, Map)} operation via Key/Value ("replace").
*
* @param docId the document ID
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status updateKv(final String docId, final HashMap<String, ByteIterator> values) {
private Status updateKv(final String docId, final Map<String, ByteIterator> values) {
waitForMutationResponse(bucket.async().replace(
RawJsonDocument.create(docId, documentExpiry, encode(values)),
persistTo,
@ -354,7 +354,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #update(String, String, HashMap)} operation via N1QL ("UPDATE").
* Performs the {@link #update(String, String, Map)} operation via N1QL ("UPDATE").
*
* If this option should be used, the "-p couchbase.kv=false" property must be set.
*
@ -362,7 +362,7 @@ public class Couchbase2Client extends DB {
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status updateN1ql(final String docId, final HashMap<String, ByteIterator> values)
private Status updateN1ql(final String docId, final Map<String, ByteIterator> values)
throws Exception {
String fields = encodeN1qlFields(values);
String updateQuery = "UPDATE `" + bucketName + "` USE KEYS [$1] SET " + fields;
@ -381,7 +381,7 @@ public class Couchbase2Client extends DB {
}
@Override
public Status insert(final String table, final String key, final HashMap<String, ByteIterator> values) {
public Status insert(final String table, final String key, final Map<String, ByteIterator> values) {
if (upsert) {
return upsert(table, key, values);
}
@ -400,7 +400,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #insert(String, String, HashMap)} operation via Key/Value ("INSERT").
* Performs the {@link #insert(String, String, Map)} operation via Key/Value ("INSERT").
*
* Note that during the "load" phase it makes sense to retry TMPFAILS (so that even if the server is
* overloaded temporarily the ops will succeed eventually). The current code will retry TMPFAILs
@ -410,7 +410,7 @@ public class Couchbase2Client extends DB {
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status insertKv(final String docId, final HashMap<String, ByteIterator> values) {
private Status insertKv(final String docId, final Map<String, ByteIterator> values) {
int tries = 60; // roughly 60 seconds with the 1 second sleep, not 100% accurate.
for(int i = 0; i < tries; i++) {
@ -435,7 +435,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #insert(String, String, HashMap)} operation via N1QL ("INSERT").
* Performs the {@link #insert(String, String, Map)} operation via N1QL ("INSERT").
*
* If this option should be used, the "-p couchbase.kv=false" property must be set.
*
@ -443,7 +443,7 @@ public class Couchbase2Client extends DB {
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status insertN1ql(final String docId, final HashMap<String, ByteIterator> values)
private Status insertN1ql(final String docId, final Map<String, ByteIterator> values)
throws Exception {
String insertQuery = "INSERT INTO `" + bucketName + "`(KEY,VALUE) VALUES ($1,$2)";
@ -470,7 +470,7 @@ public class Couchbase2Client extends DB {
* @param values A HashMap of field/value pairs to insert in the record
* @return The result of the operation.
*/
private Status upsert(final String table, final String key, final HashMap<String, ByteIterator> values) {
private Status upsert(final String table, final String key, final Map<String, ByteIterator> values) {
try {
String docId = formatId(table, key);
if (kv) {
@ -485,7 +485,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #upsert(String, String, HashMap)} operation via Key/Value ("upsert").
* Performs the {@link #upsert(String, String, Map)} operation via Key/Value ("upsert").
*
* If this option should be used, the "-p couchbase.upsert=true" property must be set.
*
@ -493,7 +493,7 @@ public class Couchbase2Client extends DB {
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status upsertKv(final String docId, final HashMap<String, ByteIterator> values) {
private Status upsertKv(final String docId, final Map<String, ByteIterator> values) {
waitForMutationResponse(bucket.async().upsert(
RawJsonDocument.create(docId, documentExpiry, encode(values)),
persistTo,
@ -503,7 +503,7 @@ public class Couchbase2Client extends DB {
}
/**
* Performs the {@link #upsert(String, String, HashMap)} operation via N1QL ("UPSERT").
* Performs the {@link #upsert(String, String, Map)} operation via N1QL ("UPSERT").
*
* If this option should be used, the "-p couchbase.upsert=true -p couchbase.kv=false" properties must be set.
*
@ -511,7 +511,7 @@ public class Couchbase2Client extends DB {
* @param values the values to update the document with.
* @return The result of the operation.
*/
private Status upsertN1ql(final String docId, final HashMap<String, ByteIterator> values)
private Status upsertN1ql(final String docId, final Map<String, ByteIterator> values)
throws Exception {
String upsertQuery = "UPSERT INTO `" + bucketName + "`(KEY,VALUE) VALUES ($1,$2)";
@ -734,12 +734,12 @@ public class Couchbase2Client extends DB {
}
/**
* Helper method to turn the values into a String, used with {@link #upsertN1ql(String, HashMap)}.
* Helper method to turn the values into a String, used with {@link #upsertN1ql(String, Map)}.
*
* @param values the values to encode.
* @return the encoded string.
*/
private static String encodeN1qlFields(final HashMap<String, ByteIterator> values) {
private static String encodeN1qlFields(final Map<String, ByteIterator> values) {
if (values.isEmpty()) {
return "";
}
@ -760,7 +760,7 @@ public class Couchbase2Client extends DB {
* @param values the values to transform.
* @return the created json object.
*/
private static JsonObject valuesToJsonObject(final HashMap<String, ByteIterator> values) {
private static JsonObject valuesToJsonObject(final Map<String, ByteIterator> values) {
JsonObject result = JsonObject.create();
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
result.put(entry.getKey(), entry.getValue().toString());
@ -853,7 +853,7 @@ public class Couchbase2Client extends DB {
* @param dest the result passed back to YCSB.
*/
private void decode(final String source, final Set<String> fields,
final HashMap<String, ByteIterator> dest) {
final Map<String, ByteIterator> dest) {
try {
JsonNode json = JacksonTransformers.MAPPER.readTree(source);
boolean checkFields = fields != null && !fields.isEmpty();
@ -879,8 +879,8 @@ public class Couchbase2Client extends DB {
* @param source the source value.
* @return the encoded string.
*/
private String encode(final HashMap<String, ByteIterator> source) {
HashMap<String, String> stringMap = StringByteIterator.getStringMap(source);
private String encode(final Map<String, ByteIterator> source) {
Map<String, String> stringMap = StringByteIterator.getStringMap(source);
ObjectNode node = JacksonTransformers.MAPPER.createObjectNode();
for (Map.Entry<String, String> pair : stringMap.entrySet()) {
node.put(pair.getKey(), pair.getValue());


@ -69,6 +69,11 @@ LICENSE file.
<artifactId>cassandra-binding</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>cloudspanner-binding</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>couchbase-binding</artifactId>
@ -129,6 +134,11 @@ LICENSE file.
<artifactId>hbase10-binding</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>hbase12-binding</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>hypertable-binding</artifactId>


@ -137,7 +137,7 @@ public class DynamoDBClient extends DB {
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("readkey: " + key + " from table: " + table);
}
@ -228,7 +228,7 @@ public class DynamoDBClient extends DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("updatekey: " + key + " from table: " + table);
}
@ -254,7 +254,7 @@ public class DynamoDBClient extends DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("insertkey: " + primaryKeyName + "-" + key + " from table: " + table);
}
@ -302,8 +302,7 @@ public class DynamoDBClient extends DB {
return Status.OK;
}
private static Map<String, AttributeValue> createAttributes(HashMap<String, ByteIterator> values) {
//leave space for the PrimaryKey
private static Map<String, AttributeValue> createAttributes(Map<String, ByteIterator> values) {
Map<String, AttributeValue> attributes = new HashMap<>(values.size() + 1);
for (Entry<String, ByteIterator> val : values.entrySet()) {
attributes.put(val.getKey(), new AttributeValue(val.getValue().toString()));


@ -46,6 +46,7 @@ import org.elasticsearch.search.SearchHit;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
@ -195,7 +196,7 @@ public class ElasticsearchClient extends DB {
* description for a discussion of error codes.
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
final XContentBuilder doc = jsonBuilder().startObject();
@ -254,7 +255,7 @@ public class ElasticsearchClient extends DB {
* @return Zero on success, a non-zero error code on error or "not found".
*/
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();
@ -295,7 +296,7 @@ public class ElasticsearchClient extends DB {
* description for a discussion of error codes.
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();


@ -39,6 +39,7 @@ import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
@ -179,7 +180,7 @@ public class ElasticsearchClient extends DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
final XContentBuilder doc = jsonBuilder().startObject();
@ -214,7 +215,7 @@ public class ElasticsearchClient extends DB {
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();
@ -241,7 +242,7 @@ public class ElasticsearchClient extends DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();


@ -110,7 +110,7 @@ public class ElasticsearchRestClient extends DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
Map<String, String> data = StringByteIterator.getStringMap(values);
@ -142,7 +142,7 @@ public class ElasticsearchRestClient extends DB {
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
Response response = restClient.performRequest(HttpGet.METHOD_NAME, "/");
@ -178,7 +178,7 @@ public class ElasticsearchRestClient extends DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
// try {
// final GetResponse response = client.prepareGet(indexKey, table, key).execute().actionGet();
//


@ -40,9 +40,9 @@ Start a locator and two servers:
```
gfsh> start locator --name=locator1
gfsh> configure pdx --read-serialized=true
gfsh> start server --name=server1 --server-port=40404
gfsh> start server --name=server2 --server-port=40405
gfsh> configure pdx --read-serialized=true
```
Create the "usertable" region required by YCSB driver:


@ -17,16 +17,16 @@
package com.yahoo.ycsb.db;
import com.gemstone.gemfire.cache.*;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.internal.admin.remote.DistributionLocatorId;
import com.gemstone.gemfire.internal.cache.GemFireCacheImpl;
import com.gemstone.gemfire.pdx.JSONFormatter;
import com.gemstone.gemfire.pdx.PdxInstance;
import com.gemstone.gemfire.pdx.PdxInstanceFactory;
import org.apache.geode.cache.*;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.internal.admin.remote.DistributionLocatorId;
import org.apache.geode.internal.cache.GemFireCacheImpl;
import org.apache.geode.pdx.JSONFormatter;
import org.apache.geode.pdx.PdxInstance;
import org.apache.geode.pdx.PdxInstanceFactory;
import com.yahoo.ycsb.*;
import java.util.*;
@ -125,6 +125,7 @@ public class GeodeClient extends DB {
locator = new DistributionLocatorId(locatorStr);
}
ClientCacheFactory ccf = new ClientCacheFactory();
ccf.setPdxReadSerialized(true);
if (serverPort != 0) {
ccf.addPoolServer(serverHost, serverPort);
} else if (locator != null) {
@ -135,7 +136,7 @@ public class GeodeClient extends DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
Region<String, PdxInstance> r = getRegion(table);
PdxInstance val = r.get(key);
if (val != null) {
@ -161,13 +162,13 @@ public class GeodeClient extends DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
getRegion(table).put(key, convertToBytearrayMap(values));
return Status.OK;
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
getRegion(table).put(key, convertToBytearrayMap(values));
return Status.OK;
}
@ -207,4 +208,4 @@ public class GeodeClient extends DB {
}
return r;
}
}
}


@ -21,9 +21,9 @@ This driver provides a YCSB workload binding for Google's hosted Bigtable, the i
## Quickstart
### 1. Setup a Bigtable Cluster
### 1. Setup a Bigtable Instance
Login to the Google Cloud Console and follow the [Creating Cluster](https://cloud.google.com/bigtable/docs/creating-cluster) steps. Make a note of your cluster name, zone and project ID.
Login to the Google Cloud Console and follow the [Creating Instance](https://cloud.google.com/bigtable/docs/creating-instance) steps. Make a note of your instance ID and project ID.
### 2. Launch the Bigtable Shell
@ -40,29 +40,25 @@ hbase(main):002:0> create 'usertable', 'cf', {SPLITS => (1..n_splits).map {|i| "
Make a note of the column family; in this example it's `cf`.
### 4. Fetch the Proper ALPN Boot Jar
The Bigtable protocol uses HTTP/2 which requires an ALPN protocol negotiation implementation. On JVM instantiation the implementation must be loaded before attempting to connect to the cluster. If you're using Java 7 or 8, use this [Jetty Version Table](http://www.eclipse.org/jetty/documentation/current/alpn-chapter.html#alpn-versions) to determine the version appropriate for your JVM. (ALPN is included in JDK 9+). Download the proper jar from [Maven](http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.mortbay.jetty.alpn%22%20AND%20a%3A%22alpn-boot%22) somewhere on your system.
### 5. Download JSON Credentials
### 4. Download JSON Credentials
Follow these instructions for [Generating a JSON key](https://cloud.google.com/bigtable/docs/installing-hbase-shell#service-account) and save it to your host.
### 6. Load a Workload
### 5. Load a Workload
Switch to the root of the YCSB repo and choose the workload you want to run and `load` it first. With the CLI you must provide the column family, cluster properties and the ALPN jar to load.
Switch to the root of the YCSB repo and choose the workload you want to run and `load` it first. With the CLI you must provide the column family and instance properties to load.
```
bin/ycsb load googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.cluster.name=<CLUSTER> -p google.bigtable.zone.name=<ZONE> -p google.bigtable.auth.service.account.enable=true -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -jvm-args='-Xbootclasspath/p:<PATH_TO_ALPN_JAR>' -P workloads/workloada
bin/ycsb load googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada
```
Make sure to replace the variables in the angle brackets above with the proper value from your cluster. Additional configuration parameters are available below.
Make sure to replace the variables in the angle brackets above with the proper value from your instance. Additional configuration parameters are available below.
The `load` step only executes inserts into the datastore. After loading data, run the same workload to mix reads with writes.
```
bin/ycsb run googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.cluster.name=<CLUSTER> -p google.bigtable.zone.name=<ZONE> -p google.bigtable.auth.service.account.enable=true -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -jvm-args='-Xbootclasspath/p:<PATH_TO_ALPN_JAR>' -P workloads/workloada
bin/ycsb run googlebigtable -p columnfamily=cf -p google.bigtable.project.id=<PROJECT_ID> -p google.bigtable.instance.id=<INSTANCE> -p google.bigtable.auth.json.keyfile=<PATH_TO_JSON_KEY> -P workloads/workloada
```
@ -72,8 +68,7 @@ The following options can be configured using CLI (using the `-p` parameter) or
* `columnfamily`: (Required) The Bigtable column family to target.
* `google.bigtable.project.id`: (Required) The ID of a Bigtable project.
* `google.bigtable.cluster.name`: (Required) The name of a Bigtable cluster.
* `google.bigtable.zone.name`: (Required) Zone where the Bigtable cluster is running.
* `google.bigtable.instance.id`: (Required) The ID of a Bigtable instance.
* `google.bigtable.auth.service.account.enable`: Whether or not to authenticate with a service account. The default is true.
* `google.bigtable.auth.json.keyfile`: (Required) A service account key for authentication.
* `debug`: If true, prints debug information to standard out. The default is false.


@ -36,6 +36,12 @@ LICENSE file.
<version>${googlebigtable.version}</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-tcnative-boringssl-static</artifactId>
<version>1.1.33.Fork26</version>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>core</artifactId>
@ -44,4 +50,4 @@ LICENSE file.
</dependency>
</dependencies>
</project>
</project>


@ -20,6 +20,7 @@ import java.io.IOException;
import java.nio.charset.Charset;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Iterator;
import java.util.List;
import java.util.Map.Entry;
@ -34,23 +35,24 @@ import java.util.Vector;
import java.util.concurrent.ExecutionException;
import com.google.bigtable.repackaged.com.google.protobuf.ByteString;
import com.google.bigtable.repackaged.com.google.protobuf.ServiceException;
import com.google.bigtable.v1.Column;
import com.google.bigtable.v1.Family;
import com.google.bigtable.v1.MutateRowRequest;
import com.google.bigtable.v1.Mutation;
import com.google.bigtable.v1.ReadRowsRequest;
import com.google.bigtable.v1.Row;
import com.google.bigtable.v1.RowFilter;
import com.google.bigtable.v1.RowRange;
import com.google.bigtable.v1.Mutation.DeleteFromRow;
import com.google.bigtable.v1.Mutation.SetCell;
import com.google.bigtable.v1.RowFilter.Chain.Builder;
import com.google.bigtable.v2.Column;
import com.google.bigtable.v2.Family;
import com.google.bigtable.v2.MutateRowRequest;
import com.google.bigtable.v2.Mutation;
import com.google.bigtable.v2.ReadRowsRequest;
import com.google.bigtable.v2.Row;
import com.google.bigtable.v2.RowFilter;
import com.google.bigtable.v2.RowRange;
import com.google.bigtable.v2.RowSet;
import com.google.bigtable.v2.Mutation.DeleteFromRow;
import com.google.bigtable.v2.Mutation.SetCell;
import com.google.bigtable.v2.RowFilter.Chain.Builder;
import com.google.cloud.bigtable.config.BigtableOptions;
import com.google.cloud.bigtable.grpc.BigtableDataClient;
import com.google.cloud.bigtable.grpc.BigtableSession;
import com.google.cloud.bigtable.grpc.BigtableTableName;
import com.google.cloud.bigtable.grpc.async.AsyncExecutor;
import com.google.cloud.bigtable.grpc.async.HeapSizeManager;
import com.google.cloud.bigtable.grpc.async.BulkMutation;
import com.google.cloud.bigtable.hbase.BigtableOptionsFactory;
import com.google.cloud.bigtable.util.ByteStringer;
import com.yahoo.ycsb.ByteArrayByteIterator;
@ -89,7 +91,6 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
/** Thread-local Bigtable native API objects. */
private BigtableDataClient client;
private HeapSizeManager heapSizeManager;
private AsyncExecutor asyncExecutor;
/** The column family use for the workload. */
@ -105,13 +106,21 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
*/
private boolean clientSideBuffering = false;
private BulkMutation bulkMutation;
@Override
public void init() throws DBException {
Properties props = getProperties();
// Defaults the user can override if needed
CONFIG.set("google.bigtable.auth.service.account.enable", "true");
if (getProperties().containsKey(ASYNC_MUTATOR_MAX_MEMORY)) {
CONFIG.set(BigtableOptionsFactory.BIGTABLE_BUFFERED_MUTATOR_MAX_MEMORY_KEY,
getProperties().getProperty(ASYNC_MUTATOR_MAX_MEMORY));
}
if (getProperties().containsKey(ASYNC_MAX_INFLIGHT_RPCS)) {
CONFIG.set(BigtableOptionsFactory.BIGTABLE_BULK_MAX_ROW_KEY_COUNT,
getProperties().getProperty(ASYNC_MAX_INFLIGHT_RPCS));
}
// make it easy on ourselves by copying all CLI properties into the config object.
final Iterator<Entry<Object, Object>> it = props.entrySet().iterator();
while (it.hasNext()) {
@ -143,14 +152,7 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
}
if (clientSideBuffering) {
heapSizeManager = new HeapSizeManager(
Long.parseLong(
getProperties().getProperty(ASYNC_MUTATOR_MAX_MEMORY,
Long.toString(AsyncExecutor.ASYNC_MUTATOR_MAX_MEMORY_DEFAULT))),
Integer.parseInt(
getProperties().getProperty(ASYNC_MAX_INFLIGHT_RPCS,
Integer.toString(AsyncExecutor.MAX_INFLIGHT_RPCS_DEFAULT))));
asyncExecutor = new AsyncExecutor(client, heapSizeManager);
asyncExecutor = session.createAsyncExecutor();
}
}
@ -169,6 +171,13 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
@Override
public void cleanup() throws DBException {
if (bulkMutation != null) {
try {
bulkMutation.flush();
} catch(RuntimeException e){
throw new DBException(e);
}
}
if (asyncExecutor != null) {
try {
asyncExecutor.flush();
@ -190,7 +199,7 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
if (debug) {
System.out.println("Doing read from Bigtable columnfamily "
+ new String(columnFamilyBytes));
@ -226,7 +235,8 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
final ReadRowsRequest.Builder rrr = ReadRowsRequest.newBuilder()
.setTableNameBytes(ByteStringer.wrap(lastTableBytes))
.setFilter(filter)
.setRowKey(ByteStringer.wrap(key.getBytes()));
.setRows(RowSet.newBuilder()
.addRowKeys(ByteStringer.wrap(key.getBytes())));
List<Row> rows;
try {
@ -292,13 +302,17 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
}
final RowRange range = RowRange.newBuilder()
.setStartKey(ByteStringer.wrap(startkey.getBytes()))
.setStartKeyClosed(ByteStringer.wrap(startkey.getBytes()))
.build();
final RowSet rowSet = RowSet.newBuilder()
.addRowRanges(range)
.build();
final ReadRowsRequest.Builder rrr = ReadRowsRequest.newBuilder()
.setTableNameBytes(ByteStringer.wrap(lastTableBytes))
.setFilter(filter)
.setRowRange(range);
.setRows(rowSet);
List<Row> rows;
try {
@ -347,7 +361,7 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
if (debug) {
System.out.println("Setting up put for key: " + key);
}
@ -372,25 +386,20 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
try {
if (clientSideBuffering) {
asyncExecutor.mutateRowAsync(rowMutation.build());
bulkMutation.add(rowMutation.build());
} else {
client.mutateRow(rowMutation.build());
}
return Status.OK;
} catch (ServiceException e) {
} catch (RuntimeException e) {
System.err.println("Failed to insert key: " + key + " " + e.getMessage());
return Status.ERROR;
} catch (InterruptedException e) {
System.err.println("Interrupted while inserting key: " + key + " "
+ e.getMessage());
Thread.currentThread().interrupt();
return Status.ERROR; // never get here, but lets make the compiler happy
}
}
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return update(table, key, values);
}
@ -410,19 +419,14 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
try {
if (clientSideBuffering) {
asyncExecutor.mutateRowAsync(rowMutation.build());
bulkMutation.add(rowMutation.build());
} else {
client.mutateRow(rowMutation.build());
}
return Status.OK;
} catch (ServiceException e) {
} catch (RuntimeException e) {
System.err.println("Failed to delete key: " + key + " " + e.getMessage());
return Status.ERROR;
} catch (InterruptedException e) {
System.err.println("Interrupted while delete key: " + key + " "
+ e.getMessage());
Thread.currentThread().interrupt();
return Status.ERROR; // never get here, but lets make the compiler happy
}
}
@ -434,11 +438,18 @@ public class GoogleBigtableClient extends com.yahoo.ycsb.DB {
private void setTable(final String table) {
if (!lastTable.equals(table)) {
lastTable = table;
lastTableBytes = options
.getClusterName()
.toTableName(table)
BigtableTableName tableName = options
.getInstanceName()
.toTableName(table);
lastTableBytes = tableName
.toString()
.getBytes();
synchronized(this) {
if (bulkMutation != null) {
bulkMutation.flush();
}
bulkMutation = session.createBulkMutation(tableName, asyncExecutor);
}
}
}


@ -181,7 +181,7 @@ public class GoogleDatastoreClient extends DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
LookupRequest.Builder lookupRequest = LookupRequest.newBuilder();
lookupRequest.addKeys(buildPrimaryKey(table, key));
lookupRequest.getReadOptionsBuilder().setReadConsistency(
@ -241,14 +241,14 @@ public class GoogleDatastoreClient extends DB {
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return doSingleItemMutation(table, key, values, MutationType.UPDATE);
}
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
// Use Upsert to allow overwrite of existing key instead of failing the
// load (or run) just because the DB already has the key.
// This is the same behavior as what other DB does here (such as
@ -275,7 +275,7 @@ public class GoogleDatastoreClient extends DB {
}
private Status doSingleItemMutation(String table, String key,
@Nullable HashMap<String, ByteIterator> values,
@Nullable Map<String, ByteIterator> values,
MutationType mutationType) {
// First build the key.
Key.Builder datastoreKey = buildPrimaryKey(table, key);


@ -168,7 +168,7 @@ public class HBaseClient extends com.yahoo.ycsb.DB {
* @param result A HashMap of field/value pairs for the result
* @return Zero on success, a non-zero error code on error
*/
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
//if this is a "new" tableName, init HTable object. Else, use existing one
if (!this.tableName.equals(table)) {
hTable = null;
@ -307,7 +307,7 @@ public class HBaseClient extends com.yahoo.ycsb.DB {
* @param values A HashMap of field/value pairs to update in the record
* @return Zero on success, a non-zero error code on error
*/
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
//if this is a "new" tableName, init HTable object. Else, use existing one
if (!this.tableName.equals(table)) {
hTable = null;
@ -358,7 +358,7 @@ public class HBaseClient extends com.yahoo.ycsb.DB {
* @param values A HashMap of field/value pairs to insert in the record
* @return Zero on success, a non-zero error code on error
*/
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
return update(table, key, values);
}
@ -439,6 +439,7 @@ public class HBaseClient extends com.yahoo.ycsb.DB {
long st = System.currentTimeMillis();
Status result;
Vector<HashMap<String, ByteIterator>> scanResults = new Vector<>();
Set<String> scanFields = new HashSet<String>();
result = cli.scan("table1", "user2", 20, null, scanResults);
long en = System.currentTimeMillis();


@ -250,7 +250,7 @@ public class HBaseClient10 extends com.yahoo.ycsb.DB {
* @return Zero on success, a non-zero error code on error
*/
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
// if this is a "new" table, init HTable object. Else, use existing one
if (!tableName.equals(table)) {
currentTable = null;
@ -418,7 +418,7 @@ public class HBaseClient10 extends com.yahoo.ycsb.DB {
*/
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
// if this is a "new" table, init HTable object. Else, use existing one
if (!tableName.equals(table)) {
currentTable = null;
@ -480,7 +480,7 @@ public class HBaseClient10 extends com.yahoo.ycsb.DB {
*/
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return update(table, key, values);
}


@ -47,6 +47,7 @@ import org.junit.Test;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.List;
import java.util.Properties;
import java.util.Vector;
@ -173,7 +174,7 @@ public class HBaseClient10Test {
assertEquals(5, result.size());
for(int i = 0; i < 5; i++) {
final HashMap<String, ByteIterator> row = result.get(i);
final Map<String, ByteIterator> row = result.get(i);
assertEquals(1, row.size());
assertTrue(row.containsKey(colStr));
final byte[] bytes = row.get(colStr).toArray();
@ -186,7 +187,7 @@ public class HBaseClient10Test {
@Test
public void testUpdate() throws Exception{
final String key = "key";
final HashMap<String, String> input = new HashMap<String, String>();
final Map<String, String> input = new HashMap<String, String>();
input.put("column1", "value1");
input.put("column2", "value2");
final Status status = client.insert(tableName, key, StringByteIterator.getByteIteratorMap(input));

hbase12/README.md (new file)

@ -0,0 +1,27 @@
<!--
Copyright (c) 2015-2017 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License. See accompanying
LICENSE file.
-->
# HBase (1.2+) Driver for YCSB
This driver is a binding for the YCSB facilities to operate against an HBase 1.2+ server cluster, using a shaded client that tries to avoid leaking third party libraries.
See `hbase098/README.md` for a quickstart to setup HBase for load testing and common configuration details.
## Configuration Options
In addition to those options available for the `hbase098` binding, the following options are available for the `hbase12` binding:
* `durability`: Whether or not writes should be appended to the WAL. Bypassing the WAL can improve throughput but data cannot be recovered in the event of a crash. The default is true.
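As a sketch of how that option might be passed on the command line (a hypothetical invocation following the CLI conventions of the other bindings' READMEs; the workload path, column family, and the boolean form of `durability` described above are assumptions):

```shell
# Hypothetical run that skips the WAL for throughput; per the option
# described above, writes are then not recoverable after a crash.
bin/ycsb load hbase12 -P workloads/workloada \
  -p columnfamily=cf \
  -p durability=false
```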

hbase12/pom.xml (new file)

@ -0,0 +1,85 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License. See accompanying
LICENSE file.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>binding-parent</artifactId>
<version>0.13.0-SNAPSHOT</version>
<relativePath>../binding-parent/</relativePath>
</parent>
<artifactId>hbase12-binding</artifactId>
<name>HBase 1.2 DB Binding</name>
<properties>
<!-- Tests do not run on jdk9 -->
<skipJDK9Tests>true</skipJDK9Tests>
<!-- Tests can't run without a shaded hbase testing util.
See HBASE-15666, which blocks us.
For now, we rely on the HBase 1.0 binding and manual testing.
-->
<maven.test.skip>true</maven.test.skip>
</properties>
<dependencies>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>hbase10-binding</artifactId>
<version>${project.version}</version>
<!-- Should match all compile scoped dependencies -->
<exclusions>
<exclusion>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.yahoo.ycsb</groupId>
<artifactId>core</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-shaded-client</artifactId>
<version>${hbase12.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<!-- blocked on HBASE-15666
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-testing-util</artifactId>
<version>${hbase12.version}</version>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
</exclusion>
</exclusions>
</dependency>
-->
</dependencies>
</project>


@ -0,0 +1,28 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
* may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License. See accompanying
* LICENSE file.
*/
package com.yahoo.ycsb.db.hbase12;
/**
* HBase 1.2 client for YCSB framework.
*
* A modified version of HBaseClient (which targets HBase v1.2) utilizing the
* shaded client.
*
* It should run equivalently when following the hbase098 binding README.
*
*/
public class HBaseClient12 extends com.yahoo.ycsb.db.HBaseClient10 {
}


@ -0,0 +1,23 @@
/*
* Copyright (c) 2014, Yahoo!, Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
* may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License. See accompanying
* LICENSE file.
*/
/**
* The YCSB binding for <a href="https://hbase.apache.org/">HBase</a>
* using the HBase 1.2+ shaded API.
*/
package com.yahoo.ycsb.db.hbase12;


@ -0,0 +1,213 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License"); you
* may not use this file except in compliance with the License. You
* may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied. See the License for the specific language governing
* permissions and limitations under the License. See accompanying
* LICENSE file.
*/
package com.yahoo.ycsb.db.hbase12;
import static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY;
import static com.yahoo.ycsb.workloads.CoreWorkload.TABLENAME_PROPERTY_DEFAULT;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.junit.Assume.assumeTrue;
import com.yahoo.ycsb.ByteIterator;
import com.yahoo.ycsb.Status;
import com.yahoo.ycsb.StringByteIterator;
import com.yahoo.ycsb.measurements.Measurements;
import com.yahoo.ycsb.workloads.CoreWorkload;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Properties;
import java.util.Vector;
/**
* Integration tests for the YCSB HBase client 1.2, using an HBase minicluster.
*/
public class HBaseClient12Test {
private final static String COLUMN_FAMILY = "cf";
private static HBaseTestingUtility testingUtil;
private HBaseClient12 client;
private Table table = null;
private String tableName;
private static boolean isWindows() {
final String os = System.getProperty("os.name");
return os.startsWith("Windows");
}
/**
* Creates a mini-cluster for use in these tests.
*
* This is a heavy-weight operation, so invoked only once for the test class.
*/
@BeforeClass
public static void setUpClass() throws Exception {
// Minicluster setup fails on Windows with an UnsatisfiedLinkError.
// Skip if windows.
assumeTrue(!isWindows());
testingUtil = HBaseTestingUtility.createLocalHTU();
testingUtil.startMiniCluster();
}
/**
* Tears down mini-cluster.
*/
@AfterClass
public static void tearDownClass() throws Exception {
if (testingUtil != null) {
testingUtil.shutdownMiniCluster();
}
}
/**
* Sets up the mini-cluster for testing.
*
* We re-create the table for each test.
*/
@Before
public void setUp() throws Exception {
client = new HBaseClient12();
client.setConfiguration(new Configuration(testingUtil.getConfiguration()));
Properties p = new Properties();
p.setProperty("columnfamily", COLUMN_FAMILY);
Measurements.setProperties(p);
final CoreWorkload workload = new CoreWorkload();
workload.init(p);
tableName = p.getProperty(TABLENAME_PROPERTY, TABLENAME_PROPERTY_DEFAULT);
table = testingUtil.createTable(TableName.valueOf(tableName), Bytes.toBytes(COLUMN_FAMILY));
client.setProperties(p);
client.init();
}
@After
public void tearDown() throws Exception {
table.close();
testingUtil.deleteTable(tableName);
}
@Test
public void testRead() throws Exception {
final String rowKey = "row1";
final Put p = new Put(Bytes.toBytes(rowKey));
p.addColumn(Bytes.toBytes(COLUMN_FAMILY),
Bytes.toBytes("column1"), Bytes.toBytes("value1"));
p.addColumn(Bytes.toBytes(COLUMN_FAMILY),
Bytes.toBytes("column2"), Bytes.toBytes("value2"));
table.put(p);
final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();
final Status status = client.read(tableName, rowKey, null, result);
assertEquals(Status.OK, status);
assertEquals(2, result.size());
assertEquals("value1", result.get("column1").toString());
assertEquals("value2", result.get("column2").toString());
}
@Test
public void testReadMissingRow() throws Exception {
final HashMap<String, ByteIterator> result = new HashMap<String, ByteIterator>();
final Status status = client.read(tableName, "Missing row", null, result);
assertEquals(Status.NOT_FOUND, status);
assertEquals(0, result.size());
}
@Test
public void testScan() throws Exception {
// Fill with data
final String colStr = "row_number";
final byte[] col = Bytes.toBytes(colStr);
final int n = 10;
final List<Put> puts = new ArrayList<Put>(n);
for(int i = 0; i < n; i++) {
final byte[] key = Bytes.toBytes(String.format("%05d", i));
final byte[] value = java.nio.ByteBuffer.allocate(4).putInt(i).array();
final Put p = new Put(key);
p.addColumn(Bytes.toBytes(COLUMN_FAMILY), col, value);
puts.add(p);
}
table.put(puts);
// Test
final Vector<HashMap<String, ByteIterator>> result =
new Vector<HashMap<String, ByteIterator>>();
// Scan 5 records, skipping the first
client.scan(tableName, "00001", 5, null, result);
assertEquals(5, result.size());
for(int i = 0; i < 5; i++) {
final HashMap<String, ByteIterator> row = result.get(i);
assertEquals(1, row.size());
assertTrue(row.containsKey(colStr));
final byte[] bytes = row.get(colStr).toArray();
final ByteBuffer buf = ByteBuffer.wrap(bytes);
final int rowNum = buf.getInt();
assertEquals(i + 1, rowNum);
}
}
@Test
public void testUpdate() throws Exception{
final String key = "key";
final HashMap<String, String> input = new HashMap<String, String>();
input.put("column1", "value1");
input.put("column2", "value2");
final Status status = client.insert(tableName, key, StringByteIterator.getByteIteratorMap(input));
assertEquals(Status.OK, status);
// Verify result
final Get get = new Get(Bytes.toBytes(key));
final Result result = this.table.get(get);
assertFalse(result.isEmpty());
assertEquals(2, result.size());
for(final java.util.Map.Entry<String, String> entry : input.entrySet()) {
assertEquals(entry.getValue(),
new String(result.getValue(Bytes.toBytes(COLUMN_FAMILY),
Bytes.toBytes(entry.getKey()))));
}
}
@Test
@Ignore("Not yet implemented")
public void testDelete() {
fail("Not yet implemented");
}
}


@ -0,0 +1,34 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2016 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License. See accompanying
LICENSE file.
-->
<configuration>
<property>
<name>hbase.master.info.port</name>
<value>-1</value>
<description>The port for the hbase master web UI
Set to -1 if you do not want the info server to run.
</description>
</property>
<property>
<name>hbase.regionserver.info.port</name>
<value>-1</value>
<description>The port for the hbase regionserver web UI
Set to -1 if you do not want the info server to run.
</description>
</property>
</configuration>


@ -0,0 +1,28 @@
#
# Copyright (c) 2015 YCSB contributors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you
# may not use this file except in compliance with the License. You
# may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License. See accompanying
# LICENSE file.
#
# Root logger option
log4j.rootLogger=WARN, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.target=System.err
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.conversionPattern=%d{yyyy/MM/dd HH:mm:ss} %-5p %c %x - %m%n
# Suppress messages from ZKTableStateManager: Creates a large number of table
# state change messages.
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKTableStateManager=ERROR


@ -117,7 +117,7 @@ public class HypertableClient extends com.yahoo.ycsb.DB {
*/
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
// SELECT _column_family:field[i]
// FROM table WHERE ROW=key MAX_VERSIONS 1;
@ -252,7 +252,7 @@ public class HypertableClient extends com.yahoo.ycsb.DB {
*/
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return insert(table, key, values);
}
@ -271,7 +271,7 @@ public class HypertableClient extends com.yahoo.ycsb.DB {
*/
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
// INSERT INTO table VALUES
// (key, _column_family:entry,getKey(), entry.getValue()), (...);


@@ -65,8 +65,7 @@ public class InfinispanClient extends DB {
infinispanManager = null;
}
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
Map<String, String> row;
if (clustered) {
@@ -98,8 +97,7 @@ public class InfinispanClient extends DB {
return Status.OK;
}
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
if (clustered) {
AtomicMap<String, String> row = AtomicMapLookup.getAtomicMap(infinispanManager.getCache(table), key);
@@ -122,8 +120,7 @@
}
}
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
if (clustered) {
AtomicMap<String, String> row = AtomicMapLookup.getAtomicMap(infinispanManager.getCache(table), key);


@@ -51,7 +51,7 @@ public class InfinispanRemoteClient extends DB {
}
@Override
public Status insert(String table, String recordKey, HashMap<String, ByteIterator> values) {
public Status insert(String table, String recordKey, Map<String, ByteIterator> values) {
String compositKey = createKey(table, recordKey);
Map<String, String> stringValues = new HashMap<>();
StringByteIterator.putAllAsStrings(stringValues, values);
@@ -65,7 +65,7 @@ public class InfinispanRemoteClient extends DB {
}
@Override
public Status read(String table, String recordKey, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String recordKey, Set<String> fields, Map<String, ByteIterator> result) {
String compositKey = createKey(table, recordKey);
try {
Map<String, String> values = cache().get(compositKey);
@@ -100,7 +100,7 @@ public class InfinispanRemoteClient extends DB {
}
@Override
public Status update(String table, String recordKey, HashMap<String, ByteIterator> values) {
public Status update(String table, String recordKey, Map<String, ByteIterator> values) {
String compositKey = createKey(table, recordKey);
try {
Map<String, String> stringValues = new HashMap<>();


@@ -82,7 +82,7 @@ public class JdbcDBClient extends DB {
/** The field name prefix in the table. */
public static final String COLUMN_PREFIX = "FIELD";
private ArrayList<Connection> conns;
private List<Connection> conns;
private boolean initialized = false;
private Properties props;
private int jdbcFetchSize;
@@ -312,7 +312,7 @@ public class JdbcDBClient extends DB {
}
@Override
public Status read(String tableName, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String tableName, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
StatementType type = new StatementType(StatementType.Type.READ, tableName, 1, "", getShardIndexByKey(key));
PreparedStatement readStatement = cachedStatements.get(type);
@@ -370,7 +370,7 @@ public class JdbcDBClient extends DB {
}
@Override
public Status update(String tableName, String key, HashMap<String, ByteIterator> values) {
public Status update(String tableName, String key, Map<String, ByteIterator> values) {
try {
int numFields = values.size();
OrderedFieldInfo fieldInfo = getFieldInfo(values);
@@ -397,7 +397,7 @@ public class JdbcDBClient extends DB {
}
@Override
public Status insert(String tableName, String key, HashMap<String, ByteIterator> values) {
public Status insert(String tableName, String key, Map<String, ByteIterator> values) {
try {
int numFields = values.size();
OrderedFieldInfo fieldInfo = getFieldInfo(values);
@@ -483,7 +483,7 @@ public class JdbcDBClient extends DB {
}
}
private OrderedFieldInfo getFieldInfo(HashMap<String, ByteIterator> values) {
private OrderedFieldInfo getFieldInfo(Map<String, ByteIterator> values) {
String fieldKeys = "";
List<String> fieldValues = new ArrayList<>();
int count = 0;


@@ -26,7 +26,9 @@ import org.junit.*;
import java.sql.*;
import java.util.HashMap;
import java.util.Map;
import java.util.HashSet;
import java.util.Set;
import java.util.Properties;
import java.util.Vector;
@@ -246,7 +248,7 @@ public class JdbcDBClientTest {
public void readTest() {
String insertKey = "user0";
HashMap<String, ByteIterator> insertMap = insertRow(insertKey);
HashSet<String> readFields = new HashSet<String>();
Set<String> readFields = new HashSet<String>();
HashMap<String, ByteIterator> readResultMap = new HashMap<String, ByteIterator>();
// Test reading a single field
@@ -300,12 +302,12 @@
@Test
public void scanTest() throws SQLException {
HashMap<String, HashMap<String, ByteIterator>> keyMap = new HashMap<String, HashMap<String, ByteIterator>>();
Map<String, HashMap<String, ByteIterator>> keyMap = new HashMap<String, HashMap<String, ByteIterator>>();
for (int i = 0; i < 5; i++) {
String insertKey = KEY_PREFIX + i;
keyMap.put(insertKey, insertRow(insertKey));
}
HashSet<String> fieldSet = new HashSet<String>();
Set<String> fieldSet = new HashSet<String>();
fieldSet.add("FIELD0");
fieldSet.add("FIELD1");
int startIndex = 1;
@@ -318,7 +320,7 @@
assertEquals("Assert the correct number of results rows were returned", resultRows, resultVector.size());
// Check each vector row to make sure we have the correct fields
int testIndex = startIndex;
for (HashMap<String, ByteIterator> result: resultVector) {
for (Map<String, ByteIterator> result: resultVector) {
assertEquals("Assert that this row has the correct number of fields", fieldSet.size(), result.size());
for (String field: fieldSet) {
assertEquals("Assert this field is correct in this row", keyMap.get(KEY_PREFIX + testIndex).get(field).toString(), result.get(field).toString());


@@ -32,6 +32,7 @@ import org.apache.kudu.client.*;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.List;
import java.util.Properties;
import java.util.Set;
@@ -188,10 +189,8 @@ public class KuduYCSBClient extends com.yahoo.ycsb.DB {
}
@Override
public Status read(String table,
String key,
Set<String> fields,
HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields,
Map<String, ByteIterator> result) {
Vector<HashMap<String, ByteIterator>> results = new Vector<>();
final Status status = scan(table, key, 1, fields, results);
if (!status.equals(Status.OK)) {
@@ -272,7 +271,7 @@ public class KuduYCSBClient extends com.yahoo.ycsb.DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
Update update = this.kuduTable.newUpdate();
PartialRow row = update.getRow();
row.addString(KEY, key);
@@ -288,7 +287,7 @@ public class KuduYCSBClient extends com.yahoo.ycsb.DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
Insert insert = this.kuduTable.newInsert();
PartialRow row = insert.getRow();
row.addString(KEY, key);


@@ -129,7 +129,7 @@ public class MapKeeperClient extends DB {
@Override
public int read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
ByteBuffer buf = bufStr(key);
@@ -177,7 +177,7 @@
@Override
public int update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try {
if(!writeallfields) {
HashMap<String, ByteIterator> oldval = new HashMap<String, ByteIterator>();
@@ -197,7 +197,7 @@
@Override
public int insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try {
int ret = ycsbThriftRet(c.insert(table, bufStr(key), encode(values)), ResponseCode.Success, ResponseCode.RecordExists);
return ret;


@@ -91,6 +91,10 @@ A sample configuration is provided in
What to do with failures; this is one of `net.spy.memcached.FailureMode` enum
values, which are currently: `Redistribute`, `Retry`, or `Cancel`.
- `memcached.protocol`
Set to 'binary' to use the memcached binary protocol. Set to 'text' or omit this field
to use the memcached text protocol.
You can set properties on the command line via `-p`, e.g.:
./bin/ycsb load memcached -s -P workloads/workloada \


@@ -98,6 +98,10 @@ public class MemcachedClient extends DB {
public static final FailureMode FAILURE_MODE_PROPERTY_DEFAULT =
FailureMode.Redistribute;
public static final String PROTOCOL_PROPERTY = "memcached.protocol";
public static final ConnectionFactoryBuilder.Protocol DEFAULT_PROTOCOL =
ConnectionFactoryBuilder.Protocol.TEXT;
/**
* The MemcachedClient implementation that will be used to communicate
* with the memcached server.
@@ -142,6 +146,11 @@ public class MemcachedClient extends DB {
connectionFactoryBuilder.setOpTimeout(Integer.parseInt(
getProperties().getProperty(OP_TIMEOUT_PROPERTY, DEFAULT_OP_TIMEOUT)));
String protocolString = getProperties().getProperty(PROTOCOL_PROPERTY);
connectionFactoryBuilder.setProtocol(
protocolString == null ? DEFAULT_PROTOCOL
: ConnectionFactoryBuilder.Protocol.valueOf(protocolString.toUpperCase()));
String failureString = getProperties().getProperty(FAILURE_MODE_PROPERTY);
connectionFactoryBuilder.setFailureMode(
failureString == null ? FAILURE_MODE_PROPERTY_DEFAULT
@@ -171,7 +180,7 @@ public class MemcachedClient extends DB {
@Override
public Status read(
String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
key = createQualifiedKey(table, key);
try {
GetFuture<Object> future = memcachedClient().asyncGet(key);
@@ -195,7 +204,7 @@
@Override
public Status update(
String table, String key, HashMap<String, ByteIterator> values) {
String table, String key, Map<String, ByteIterator> values) {
key = createQualifiedKey(table, key);
try {
OperationFuture<Boolean> future =
@@ -209,7 +218,7 @@
@Override
public Status insert(
String table, String key, HashMap<String, ByteIterator> values) {
String table, String key, Map<String, ByteIterator> values) {
key = createQualifiedKey(table, key);
try {
OperationFuture<Boolean> future =
@@ -281,7 +290,7 @@ public class MemcachedClient extends DB {
protected static String toJson(Map<String, ByteIterator> values)
throws IOException {
ObjectNode node = MAPPER.createObjectNode();
HashMap<String, String> stringMap = StringByteIterator.getStringMap(values);
Map<String, String> stringMap = StringByteIterator.getStringMap(values);
for (Map.Entry<String, String> pair : stringMap.entrySet()) {
node.put(pair.getKey(), pair.getValue());
}
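The `memcached.protocol` handling added in the hunks above can be sketched in isolation. This is a hedged sketch, not YCSB code: `Protocol` stands in for `net.spy.memcached.ConnectionFactoryBuilder.Protocol`, and `resolve` is a hypothetical helper extracted from the null-coalescing expression passed to `setProtocol`:

```java
// Sketch of the new protocol selection: an unset property falls back to
// the TEXT default; any other value is matched case-insensitively against
// the enum constants, so "binary" selects the binary protocol.
public class ProtocolSelection {
  public enum Protocol { TEXT, BINARY }

  public static final Protocol DEFAULT_PROTOCOL = Protocol.TEXT;

  public static Protocol resolve(String protocolString) {
    return protocolString == null
        ? DEFAULT_PROTOCOL
        : Protocol.valueOf(protocolString.toUpperCase());
  }

  public static void main(String[] args) {
    System.out.println(resolve(null));     // TEXT
    System.out.println(resolve("binary")); // BINARY
  }
}
```

Note that `valueOf` throws `IllegalArgumentException` for any value other than `text`/`binary`, so a mistyped property fails fast instead of silently falling back.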


@@ -251,7 +251,7 @@ public class AsyncMongoDbClient extends DB {
*/
@Override
public final Status insert(final String table, final String key,
final HashMap<String, ByteIterator> values) {
final Map<String, ByteIterator> values) {
try {
final MongoCollection collection = database.getCollection(table);
final DocumentBuilder toInsert =
@@ -329,7 +329,7 @@
*/
@Override
public final Status read(final String table, final String key,
final Set<String> fields, final HashMap<String, ByteIterator> result) {
final Set<String> fields, final Map<String, ByteIterator> result) {
try {
final MongoCollection collection = database.getCollection(table);
final DocumentBuilder query =
@@ -450,7 +450,7 @@
*/
@Override
public final Status update(final String table, final String key,
final HashMap<String, ByteIterator> values) {
final Map<String, ByteIterator> values) {
try {
final MongoCollection collection = database.getCollection(table);
final DocumentBuilder query = BuilderFactory.start().add("_id", key);
@@ -477,7 +477,7 @@
* @param queryResult
* The document to fill from.
*/
protected final void fillMap(final HashMap<String, ByteIterator> result,
protected final void fillMap(final Map<String, ByteIterator> result,
final Document queryResult) {
for (final Element be : queryResult) {
if (be.getType() == ElementType.BINARY) {


@@ -251,7 +251,7 @@ public class MongoDbClient extends DB {
*/
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try {
MongoCollection<Document> collection = database.getCollection(table);
Document toInsert = new Document("_id", key);
@@ -315,7 +315,7 @@
*/
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
MongoCollection<Document> collection = database.getCollection(table);
Document query = new Document("_id", key);
@@ -428,7 +428,7 @@
*/
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
try {
MongoCollection<Document> collection = database.getCollection(table);


@@ -37,6 +37,7 @@ import java.net.InetAddress;
import java.net.Socket;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.Vector;
@@ -281,7 +282,7 @@ public abstract class AbstractDBTestCases {
assertThat("Read did not return success (0).", result, is(Status.OK));
assertThat(results.size(), is(5));
for (int i = 0; i < 5; ++i) {
HashMap<String, ByteIterator> read = results.get(i);
Map<String, ByteIterator> read = results.get(i);
for (String key : keys) {
ByteIterator iter = read.get(key);


@@ -180,8 +180,7 @@ public class NoSqlDbClient extends DB {
}
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
Key kvKey = createKey(table, key);
SortedMap<Key, ValueVersion> kvResult;
try {
@@ -212,8 +211,7 @@
}
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
for (Map.Entry<String, ByteIterator> entry : values.entrySet()) {
Key kvKey = createKey(table, key, entry.getKey());
Value kvValue = Value.createValue(entry.getValue().toArray());
@@ -229,8 +227,7 @@
}
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
return update(table, key, values);
}


@@ -31,7 +31,11 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.File;
import java.util.*;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.Vector;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
@@ -192,7 +196,7 @@ public class OrientDBClient extends DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try (ODatabaseDocumentTx db = databasePool.acquire()) {
final ODocument document = new ODocument(CLASS);
@@ -228,7 +232,7 @@
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try (ODatabaseDocumentTx db = databasePool.acquire()) {
final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();
final ODocument document = dictionary.get(key);
@@ -251,7 +255,7 @@
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
while (true) {
try (ODatabaseDocumentTx db = databasePool.acquire()) {
final ODictionary<ORecord> dictionary = db.getMetadata().getIndexManager().getDictionary();

pom.xml

@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Copyright (c) 2012 - 2017 YCSB contributors. All rights reserved.
Copyright (c) 2012 - 2016 YCSB contributors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You
@@ -66,43 +66,44 @@ LICENSE file.
<!-- Properties Management -->
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.assembly.version>2.5.5</maven.assembly.version>
<maven.dependency.version>2.10</maven.dependency.version>
<!-- Binding Versions -->
<accumulo.version>1.6.0</accumulo.version>
<aerospike.version>3.1.2</aerospike.version>
<arangodb.version>2.7.3</arangodb.version>
<arangodb3.version>4.1.7</arangodb3.version>
<asynchbase.version>1.7.1</asynchbase.version>
<azuredocumentdb.version>1.8.1</azuredocumentdb.version>
<azurestorage.version>4.0.0</azurestorage.version>
<cassandra.cql.version>3.0.0</cassandra.cql.version>
<couchbase.version>1.4.10</couchbase.version>
<couchbase2.version>2.3.1</couchbase2.version>
<hbase094.version>0.94.27</hbase094.version>
<hbase098.version>0.98.14-hadoop2</hbase098.version>
<hbase10.version>1.0.2</hbase10.version>
<hypertable.version>0.9.5.6</hypertable.version>
<elasticsearch-version>2.4.4</elasticsearch-version>
<elasticsearch5-version>5.2.0</elasticsearch5-version>
<geode.version>1.0.0-incubating.M3</geode.version>
<googlebigtable.version>0.2.3</googlebigtable.version>
<hbase12.version>1.2.5</hbase12.version>
<accumulo.version>1.6.0</accumulo.version>
<cassandra.cql.version>3.0.0</cassandra.cql.version>
<geode.version>1.2.0</geode.version>
<azuredocumentdb.version>1.8.1</azuredocumentdb.version>
<googlebigtable.version>0.9.7</googlebigtable.version>
<infinispan.version>7.2.2.Final</infinispan.version>
<kudu.version>1.1.0</kudu.version>
<openjpa.jdbc.version>2.1.1</openjpa.jdbc.version>
<!--<mapkeeper.version>1.0</mapkeeper.version>-->
<mongodb.version>3.0.3</mongodb.version>
<mongodb.async.version>2.0.1</mongodb.async.version>
<orientdb.version>2.2.10</orientdb.version>
<redis.version>2.0.0</redis.version>
<riak.version>2.0.5</riak.version>
<s3.version>1.10.20</s3.version>
<voldemort.version>0.81</voldemort.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<thrift.version>0.8.0</thrift.version>
<hypertable.version>0.9.5.6</hypertable.version>
<elasticsearch-version>2.4.4</elasticsearch-version>
<elasticsearch5-version>5.2.0</elasticsearch5-version>
<couchbase.version>1.4.10</couchbase.version>
<couchbase2.version>2.3.1</couchbase2.version>
<tarantool.version>1.6.5</tarantool.version>
<riak.version>2.0.5</riak.version>
<aerospike.version>3.1.2</aerospike.version>
<solr.version>5.5.3</solr.version>
<solr6.version>6.4.1</solr6.version>
<tarantool.version>1.6.5</tarantool.version>
<voldemort.version>0.81</voldemort.version>
<arangodb.version>2.7.3</arangodb.version>
<arangodb3.version>4.1.7</arangodb3.version>
<azurestorage.version>4.0.0</azurestorage.version>
<cloudspanner.version>0.20.3-beta</cloudspanner.version>
</properties>
<modules>
@@ -117,6 +118,7 @@ LICENSE file.
<module>asynchbase</module>
<module>azuretablestorage</module>
<module>cassandra</module>
<module>cloudspanner</module>
<module>couchbase</module>
<module>couchbase2</module>
<module>distribution</module>
@@ -130,6 +132,7 @@ LICENSE file.
<module>hbase094</module>
<module>hbase098</module>
<module>hbase10</module>
<module>hbase12</module>
<module>hypertable</module>
<module>infinispan</module>
<module>jdbc</module>


@@ -32,6 +32,7 @@ import com.yahoo.ycsb.StringByteIterator;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import java.util.Set;
@@ -102,7 +103,7 @@ public class RadosClient extends DB {
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
byte[] buffer;
try {
@@ -137,7 +138,7 @@
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
JSONObject json = new JSONObject();
for (final Entry<String, ByteIterator> e : values.entrySet()) {
json.put(e.getKey(), e.getValue().toString());
@@ -162,7 +163,7 @@
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
Status rtn = delete(table, key);
if (rtn.equals(Status.OK)) {
return insert(table, key, values);


@@ -34,6 +34,7 @@ import redis.clients.jedis.Jedis;
import redis.clients.jedis.Protocol;
import java.util.HashMap;
import java.util.Map;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
@@ -94,7 +95,7 @@ public class RedisClient extends DB {
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
if (fields == null) {
StringByteIterator.putAllAsByteIterators(result, jedis.hgetAll(key));
} else {
@@ -116,7 +117,7 @@
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
if (jedis.hmset(key, StringByteIterator.getStringMap(values))
.equals("OK")) {
jedis.zadd(INDEX_KEY, hash(key), key);
@@ -133,7 +134,7 @@
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return jedis.hmset(key, StringByteIterator.getStringMap(values))
.equals("OK") ? Status.OK : Status.ERROR;
}


@@ -23,6 +23,7 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.Vector;
@@ -101,7 +102,7 @@ public class RestClient extends DB {
}
@Override
public Status read(String table, String endpoint, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String endpoint, Set<String> fields, Map<String, ByteIterator> result) {
int responseCode;
try {
responseCode = httpGet(urlPrefix + endpoint, result);
@@ -116,7 +117,7 @@
}
@Override
public Status insert(String table, String endpoint, HashMap<String, ByteIterator> values) {
public Status insert(String table, String endpoint, Map<String, ByteIterator> values) {
int responseCode;
try {
responseCode = httpExecute(new HttpPost(urlPrefix + endpoint), values.get("data").toString());
@@ -146,7 +147,7 @@
}
@Override
public Status update(String table, String endpoint, HashMap<String, ByteIterator> values) {
public Status update(String table, String endpoint, Map<String, ByteIterator> values) {
int responseCode;
try {
responseCode = httpExecute(new HttpPut(urlPrefix + endpoint), values.get("data").toString());
@@ -199,7 +200,7 @@
}
// Connection is automatically released back in case of an exception.
private int httpGet(String endpoint, HashMap<String, ByteIterator> result) throws IOException {
private int httpGet(String endpoint, Map<String, ByteIterator> result) throws IOException {
requestTimedout.setIsSatisfied(false);
Thread timer = new Thread(new Timer(execTimeout, requestTimedout));
timer.start();


@@ -228,7 +228,7 @@ public class RiakKVClient extends DB {
* @return Zero on success, a non-zero error code on error
*/
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
Location location = new Location(new Namespace(bucketType, table), key);
FetchValue fv = new FetchValue.Builder(location).withOption(FetchValue.Option.R, rvalue).build();
FetchValue.Response response;
@@ -258,8 +258,9 @@
}
// Create the result HashMap.
createResultHashMap(fields, response, result);
HashMap<String, ByteIterator> partialResult = new HashMap<>();
createResultHashMap(fields, response, partialResult);
result.putAll(partialResult);
return Status.OK;
}
@@ -403,7 +404,7 @@
* @return Zero on success, a non-zero error code on error
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
Location location = new Location(new Namespace(bucketType, table), key);
RiakObject object = new RiakObject();
@@ -492,7 +493,7 @@
* @return Zero on success, a non-zero error code on error
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
// If eventual consistency model is in use, then an update operation is practically equivalent to an insert one.
if (!strongConsistency) {
return insert(table, key, values);
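The `read()` change earlier in this file illustrates a recurring pattern in the HashMap-to-Map refactor: a helper such as `createResultHashMap()` still requires a concrete `HashMap`, so the caller-supplied `Map` is bridged through a temporary map and `putAll()`. A minimal standalone sketch (all names here are hypothetical, not the Riak binding's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of bridging a HashMap-typed helper to a Map-typed caller:
// fill a temporary HashMap, then copy it into the interface-typed result.
public class PartialResultBridge {
  // Stand-in for a helper (like createResultHashMap) whose signature
  // still demands a concrete HashMap.
  static void fillHashMap(HashMap<String, String> out) {
    out.put("field0", "value0");
    out.put("field1", "value1");
  }

  // Caller-facing method now accepts the Map interface.
  static void read(Map<String, String> result) {
    HashMap<String, String> partialResult = new HashMap<>();
    fillHashMap(partialResult);   // helper keeps its HashMap parameter
    result.putAll(partialResult); // bridge back to the supplied Map
  }

  public static void main(String[] args) {
    Map<String, String> result = new HashMap<>();
    read(result);
    System.out.println(result.size()); // 2
  }
}
```

This keeps the public API programmed against the `Map` interface (the point of the squid:S1319 cleanup) without having to change every internal helper at once.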


@@ -258,7 +258,7 @@ public class S3Client extends DB {
*/
@Override
public Status insert(String bucket, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return writeToStorage(bucket, key, values, true, sse, ssecKey);
}
/**
@@ -278,7 +278,7 @@
*/
@Override
public Status read(String bucket, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
return readFromStorage(bucket, key, result, ssecKey);
}
/**
@@ -296,7 +296,7 @@
*/
@Override
public Status update(String bucket, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
return writeToStorage(bucket, key, values, false, sse, ssecKey);
}
/**
@@ -336,8 +336,8 @@
*
*/
protected Status writeToStorage(String bucket, String key,
HashMap<String, ByteIterator> values, Boolean updateMarker,
String sseLocal, SSECustomerKey ssecLocal) {
Map<String, ByteIterator> values, Boolean updateMarker,
String sseLocal, SSECustomerKey ssecLocal) {
int totalSize = 0;
int fieldCount = values.size(); //number of fields to concatenate
// getting the first field in the values
@@ -422,7 +422,7 @@
*
*/
protected Status readFromStorage(String bucket, String key,
HashMap<String, ByteIterator> result, SSECustomerKey ssecLocal) {
Map<String, ByteIterator> result, SSECustomerKey ssecLocal) {
try {
Map.Entry<S3Object, ObjectMetadata> objectAndMetadata = getS3ObjectAndMetadata(bucket, key, ssecLocal);
InputStream objectData = objectAndMetadata.getKey().getObjectContent(); //consuming the stream


@@ -116,7 +116,7 @@ public class SolrClient extends DB {
* discussion of error codes.
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
SolrInputDocument doc = new SolrInputDocument();
@@ -182,7 +182,7 @@
*/
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
Boolean returnFields = false;
String[] fieldList = null;
@@ -225,7 +225,7 @@
* discussion of error codes.
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
SolrInputDocument updatedDoc = new SolrInputDocument();
updatedDoc.addField("id", key);


@@ -115,7 +115,7 @@ public class SolrClient extends DB {
* discussion of error codes.
*/
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
try {
SolrInputDocument doc = new SolrInputDocument();
@@ -181,7 +181,7 @@
*/
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
try {
Boolean returnFields = false;
String[] fieldList = null;
@@ -224,7 +224,7 @@
* discussion of error codes.
*/
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
try {
SolrInputDocument updatedDoc = new SolrInputDocument();
updatedDoc.addField("id", key);


@@ -60,7 +60,7 @@ public class TarantoolClient extends DB {
}
@Override
public Status insert(String table, String key, HashMap<String, ByteIterator> values) {
public Status insert(String table, String key, Map<String, ByteIterator> values) {
return replace(key, values, "Can't insert element");
}
@@ -78,7 +78,7 @@
}
@Override
public Status read(String table, String key, Set<String> fields, HashMap<String, ByteIterator> result) {
public Status read(String table, String key, Set<String> fields, Map<String, ByteIterator> result) {
try {
List<String> response = this.connection.select(this.spaceNo, 0, Arrays.asList(key), 0, 1, 0);
result = tupleConvertFilter(response, fields);
@@ -127,11 +127,11 @@ public class TarantoolClient extends DB {
}
@Override
public Status update(String table, String key, HashMap<String, ByteIterator> values) {
public Status update(String table, String key, Map<String, ByteIterator> values) {
return replace(key, values, "Can't replace element");
}
private Status replace(String key, HashMap<String, ByteIterator> values, String exceptionDescription) {
private Status replace(String key, Map<String, ByteIterator> values, String exceptionDescription) {
int j = 0;
String[] tuple = new String[1 + 2 * values.size()];
tuple[0] = key;


@@ -85,7 +85,7 @@ public class VoldemortClient extends DB {
@Override
public Status insert(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
if (checkStore(table) == Status.ERROR) {
return Status.ERROR;
}
@@ -96,7 +96,7 @@
@Override
public Status read(String table, String key, Set<String> fields,
HashMap<String, ByteIterator> result) {
Map<String, ByteIterator> result) {
if (checkStore(table) == Status.ERROR) {
return Status.ERROR;
}
@@ -130,7 +130,7 @@
@Override
public Status update(String table, String key,
HashMap<String, ByteIterator> values) {
Map<String, ByteIterator> values) {
if (checkStore(table) == Status.ERROR) {
return Status.ERROR;
}