How to Run Tests

This repository uses pytest:

# create a venv
python3.8 -m venv venv/

# install requirements
venv/bin/pip install -r requirements.txt

# run pytest with all linters and 4 workers in parallel
venv/bin/pytest --black --docstyle --flake8 --mypy-ignore-missing-imports -n 4

# use -k to selectively run a set of tests that matches the expression `udf`
venv/bin/pytest -k udf

# run integration tests with 4 workers in parallel
gcloud auth application-default login # or set GOOGLE_APPLICATION_CREDENTIALS
export GOOGLE_PROJECT_ID="bigquery-etl-integration-test"
venv/bin/pytest -m integration -n 4

To provide authentication credentials for the Google Cloud API, set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the file path of the JSON file that contains the service account key. See the Mozilla BigQuery API Access instructions to request credentials if you don't already have them.
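For example (the key file path below is a placeholder):

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_key.json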

How to Configure a UDF Test

In the SQL file where a UDF is defined, include a comment like -- Tests followed by one or more query statements. Each statement in that file that does not itself define a temporary function is collected as a test and executed independently of the other tests in the file.

Each test must use the UDF and throw an error to fail. Assert functions defined in tests/assert/ may be used to evaluate outputs. Tests must not use any query parameters and should not reference any tables. Each test that is expected to fail must be preceded by a comment like #xfail, similar to a SQL dialect prefix in the BigQuery Cloud Console.

For example:

CREATE TEMP FUNCTION udf_example(option INT64) AS (
  CASE
  WHEN option > 0 THEN TRUE
  WHEN option = 0 THEN FALSE
  ELSE ERROR("invalid option")
  END
);
-- Tests
SELECT
  assert_true(udf_example(1)),
  assert_false(udf_example(0));
#xfail
SELECT
  udf_example(-1);
#xfail
SELECT
  udf_example(NULL);

How to Configure a Generated Test

  1. Make a directory for test resources named tests/{dataset}/{table}/{test_name}/, e.g. tests/telemetry_derived/clients_last_seen_raw_v1/test_single_day
    • table must match a directory named like sql/{dataset}/{table}, e.g. sql/telemetry_derived/clients_last_seen_v1
    • test_name should start with test_, e.g. test_single_day
    • If test_name is test_init or test_script, then the query will run init.sql or script.sql respectively; otherwise, the test will run query.sql
  2. Add .yaml files for input tables, e.g. clients_daily_v6.yaml
    • Include the dataset prefix if it's set in the tested query, e.g. analysis.clients_last_seen_v1.yaml
      • This will result in the dataset prefix being removed from the query, e.g. query = query.replace("analysis.clients_last_seen_v1", "clients_last_seen_v1")
  3. Add .sql files for input view queries, e.g. main_summary_v4.sql
    • Don't include a CREATE ... AS clause
    • Fully qualify table names as `{project}.{dataset}.table`
    • Include the dataset prefix if it's set in the tested query, e.g. telemetry.main_summary_v4.sql
      • This will result in the dataset prefix being removed from the query, e.g. query = query.replace("telemetry.main_summary_v4", "main_summary_v4")
  4. Add expect.yaml to validate the result
    • DATE and DATETIME type columns in the result are coerced to strings using .isoformat()
    • Columns named generated_time are removed from the result before comparing to expect because they should not be static
  5. Optionally add .schema.json files for input table schemas, e.g. clients_daily_v6.schema.json
  6. Optionally add query_params.yaml to define query parameters
    • query_params must be a list
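As a sketch of how these pieces fit together, a generated test for clients_last_seen_v1 might be laid out as follows (the row contents here are hypothetical and purely illustrative):

tests/telemetry_derived/clients_last_seen_v1/test_single_day/
  clients_daily_v6.yaml   # rows for the input table
  query_params.yaml       # optional query parameters
  expect.yaml             # expected rows in the query result

# clients_daily_v6.yaml -- an array of input rows
- client_id: "client-a"
  submission_date: "2020-01-01"

# expect.yaml -- DATE columns are compared as .isoformat() strings
- client_id: "client-a"
  submission_date: "2020-01-01"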

Init Tests

Tests of init.sql statements are supported, similar to other generated tests. Simply name the test test_init. The other guidelines still apply.

Note: Init SQL statements must contain a CREATE statement with the dataset and table name, like so:

CREATE OR REPLACE TABLE
  dataset.table_v1
AS
...

Additional Guidelines and Options

  • If the destination table is also an input table then generated_time should be a required DATETIME field to ensure minimal validation
  • Input table files
    • All of the formats supported by bq load are supported
    • yaml and json formats are supported and must contain an array of rows, which are converted in memory to ndjson before loading
    • Preferred formats are yaml for readability or ndjson for compatibility with bq load
  • expect.yaml
    • File extensions yaml, json and ndjson are supported
    • Preferred formats are yaml for readability or ndjson for compatibility with bq load
  • Schema files
    • Setting the description of a top-level field to time_partitioning_field will cause the table to use it for time partitioning
    • File extensions yaml, json and ndjson are supported
    • Preferred formats are yaml for readability or json for compatibility with bq load
  • Query parameters
    • Scalar query params should be defined as a dict with keys name, type or type_, and value
    • query_parameters.yaml may be used instead of query_params.yaml, but they are mutually exclusive
    • File extensions yaml, json and ndjson are supported
    • Preferred format is yaml for readability
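As an illustrative sketch (the field and parameter names are hypothetical), a query_params.yaml for a query that takes a @submission_date parameter, and a schema file that marks that field for time partitioning, might look like:

# query_params.yaml -- must be a list of parameter dicts
- name: submission_date
  type: DATE
  value: "2020-01-01"

# clients_daily_v6.schema.json -- the description marks the partitioning field
[
  {"name": "submission_date", "type": "DATE", "mode": "REQUIRED",
   "description": "time_partitioning_field"},
  {"name": "client_id", "type": "STRING", "mode": "NULLABLE"}
]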

How to Run CircleCI Locally

  • Install the CircleCI Local CLI
  • Download GCP service account keys
    • Integration tests will only successfully run with service account keys that belong to the circleci service account in the bigquery-etl-integration-test project
  • Run circleci build and set required environment variables GOOGLE_PROJECT_ID and GCLOUD_SERVICE_KEY:
gcloud_service_key=`cat /path/to/key_file.json`

# to run a specific job, e.g. integration:
circleci build --job integration \
  --env GOOGLE_PROJECT_ID=bigquery-etl-integration-test \
  --env GCLOUD_SERVICE_KEY=$gcloud_service_key

# to run all jobs
circleci build \
  --env GOOGLE_PROJECT_ID=bigquery-etl-integration-test \
  --env GCLOUD_SERVICE_KEY=$gcloud_service_key