
BigQuery ETL

BigQuery UDFs and SQL queries for building derived datasets.

Formatting SQL

We enforce consistent SQL formatting as part of CI. After adding or changing a query, use script/format_sql to apply formatting rules.

Directories and files passed as arguments to script/format_sql will be formatted in place, with directories recursively searched for files with a .sql extension, e.g.:

$ echo 'SELECT 1,2,3' > test.sql
$ script/format_sql test.sql
modified test.sql
1 file(s) modified
$ cat test.sql
SELECT
  1,
  2,
  3

If no arguments are specified, the script reads from stdin and writes to stdout, e.g.:

$ echo 'SELECT 1,2,3' | script/format_sql
SELECT
  1,
  2,
  3

To turn off SQL formatting for a block of SQL, wrap it in format:off and format:on comments, like this:

SELECT
  -- format:off
  submission_date, sample_id, client_id
  -- format:on

Queries

  • Should be defined in files named as templates/<dataset>/<table>_<version>/query.sql e.g. templates/telemetry_derived/clients_daily_v7/query.sql
  • May be generated using a python script that prints the query to stdout
    • Should save output as templates/<dataset>/<table>_<version>/query.sql as above
    • Should be named as templates/query_type.sql.py e.g. templates/clients_daily.sql.py
    • May use options to generate queries for different destination tables e.g. using --source telemetry_core_parquet_v3 to generate templates/telemetry/core_clients_daily_v1/query.sql and using --source main_summary_v4 to generate templates/telemetry/clients_daily_v7/query.sql
    • Should output a header indicating options used e.g.
      -- Query generated by: templates/clients_daily.sql.py --source telemetry_core_parquet
      
  • Should not specify a project or dataset in table names to simplify testing
  • Should be incremental
  • Should filter input tables on partition and clustering columns
  • Should use _ prefix in generated column names not meant for output
  • Should use _bits suffix for any integer column that represents a bit pattern
  • Should not use DATETIME type, due to incompatibility with spark-bigquery-connector
  • Should read from *_stable tables instead of including custom deduplication
    • Should use the earliest row for each document_id by submission_timestamp where filtering duplicates is necessary
  • Should escape identifiers that match keywords, even if they aren't reserved keywords
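
As a sketch only (every table and column name below is hypothetical), a query.sql following these conventions could look like:

SELECT
  -- output a submission_date column matching the query parameter
  @submission_date AS submission_date,
  client_id,
  COUNT(*) AS n_pings,
  -- _bits suffix for an integer column that represents a bit pattern
  1 AS days_seen_bits,
  -- escape identifiers that match keywords, even non-reserved ones
  ANY_VALUE(`rows`) AS row_count
FROM
  -- a *_stable table; no project or dataset specified, to simplify testing
  example_v1
WHERE
  -- filter on the partition column
  DATE(submission_timestamp) = @submission_date
GROUP BY
  client_id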

Views

  • Should be defined in files named as templates/<dataset>/<table>_<version>/view.sql e.g. templates/telemetry/telemetry_core_parquet_v3/view.sql
  • Must specify project and dataset in all table names
    • Should default to using the moz-fx-data-shared-prod project
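
For example, a view.sql following these conventions might contain the following (the view and source table names are hypothetical):

CREATE OR REPLACE VIEW
  `moz-fx-data-shared-prod.telemetry.example`
AS
SELECT
  *
FROM
  `moz-fx-data-shared-prod.telemetry_derived.example_v1`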

UDFs

  • Should limit the number of expression subqueries to avoid: BigQuery error in query operation: Resources exceeded during query execution: Not enough resources for query planning - too many subqueries or query is too complex.
  • Should be used to avoid code duplication
  • Must be named in files with lower snake case names ending in .sql e.g. mode_last.sql
    • Each file must only define effectively private helper functions and one public function which must be defined last
      • Helper functions must not conflict with function names in other files
    • SQL UDFs must be defined in the udf/ directory and JS UDFs must be defined in the udf_js/ directory
      • The udf_legacy/ directory is an exception which must only contain compatibility functions for queries migrated from Athena/Presto.
    • Functions must be named with a prefix of <dir_name>_ so all functions in udf/*.sql must start with udf_
      • The final function in a file must be named as <dir_name>_<file_name_without_suffix> so udf/mode_last.sql must define a function udf_mode_last
  • Functions must be defined as temporary using CREATE TEMP FUNCTION syntax
    • We provide tooling in script/publish_persistent_udfs for converting these definitions to persistent UDFs (temporary UDF udf_mode_last is published as persistent UDF udf.mode_last)
  • Should prefer SQL over JS for performance
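
A minimal sketch of a hypothetical udf/active_in_week_0.sql following these rules:

-- Helper; effectively private, so the name must not conflict with other files
CREATE TEMP FUNCTION udf_active_in_week_0_helper(bits INT64) AS (
  bits & 0x7F  -- keep only the lowest 7 bits
);

-- Public function; named udf_<file_name_without_suffix> and defined last
CREATE TEMP FUNCTION udf_active_in_week_0(bits INT64) AS (
  udf_active_in_week_0_helper(bits) > 0
);

Under the publishing convention above, the final function would become the persistent UDF udf.active_in_week_0.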

Backfills

  • Should be avoided on large tables
    • Backfills may double storage cost for a table for 90 days by moving data from long-term storage to short-term storage
      • For example regenerating clients_last_seen_v1 from scratch would cost about $1600 for the query and about $6800 for data moved to short-term storage
    • Should combine multiple backfills happening around the same time
    • Should delay column deletes until the next backfill that will happen anyway
      • Should use NULL for new data and EXCEPT to exclude the column from views until it is dropped
  • Should use copy operations in append mode to change column order
    • Copy operations do not allow changing partitioning, changing clustering, or column deletes
  • Should split backfilling into queries that finish in minutes not hours
  • May use script/generate_incremental_table to automate backfilling incremental queries
  • May be performed in a single query for smaller tables that do not depend on history
    • A useful pattern is to have the only reference to @submission_date be a clause WHERE (@submission_date IS NULL OR @submission_date = submission_date) which allows recreating all dates by passing --parameter=submission_date:DATE:NULL
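
A minimal sketch of that pattern, with a hypothetical table name:

SELECT
  submission_date,
  client_id,
  COUNT(*) AS n_pings
FROM
  example_v1
WHERE
  -- the only reference to @submission_date in the query
  (@submission_date IS NULL OR @submission_date = submission_date)
GROUP BY
  submission_date,
  client_id

Passing --parameter=submission_date:DATE:NULL then recreates all dates in one run, while passing a concrete date limits the query to a single partition.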

Incremental Queries

Properties

  • Must accept a date via @submission_date query parameter
    • Must output a column named submission_date matching the query parameter
  • Must produce similar results when run multiple times
    • Should produce identical results when run multiple times
  • May depend on the previous partition
    • If using the previous partition, must include an init.sql query to initialize the table, e.g. templates/telemetry_derived/clients_last_seen_v1/init.sql
    • Should be impacted by values from a finite number of preceding partitions
      • This allows for backfilling in chunks instead of serially for all time and limiting backfills to a certain number of days following updated data
      • For example templates/telemetry_derived/clients_last_seen_v1/query.sql can be run serially on any 28 day period, and the last day will be the same whether or not the partition preceding the first day was missing, because values are only impacted by 27 preceding days
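
A sketch of that shape, with hypothetical table and column names, combining the current day's data with the table's own previous partition:

WITH _previous AS (
  SELECT
    client_id,
    days_seen_bits
  FROM
    -- this table's own previous partition
    example_last_seen_v1
  WHERE
    submission_date = DATE_SUB(@submission_date, INTERVAL 1 DAY)
),
_current AS (
  SELECT DISTINCT
    client_id
  FROM
    example_v1
  WHERE
    DATE(submission_timestamp) = @submission_date
)
SELECT
  @submission_date AS submission_date,
  client_id,
  -- shift yesterday's bits left and set today's bit, keeping only 28 bits,
  -- so results depend on at most 27 preceding partitions
  ((COALESCE(_previous.days_seen_bits, 0) << 1)
    | IF(_current.client_id IS NOT NULL, 1, 0)) & 0x0FFFFFFF AS days_seen_bits
FROM
  _current
FULL JOIN
  _previous
USING
  (client_id)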

Scheduling Queries in Airflow

Instructions for scheduling queries in Airflow can be found in this cookbook.

Contributing

When adding or modifying a query in this repository, make your changes in the templates/ directory. Each time you run tests locally (see Tests below), the sql/ directory will be regenerated, inserting definitions of any UDFs referenced by the query. To force recreation of the sql/ directory without running tests, invoke:

./script/generate_sql

You are expected to commit the generated content in sql/ along with your changes to the source in templates/, otherwise CI will fail. This matches the strategy used by mozilla-pipeline-schemas and ensures that the final queries being run by Airflow are directly available to reference via URL and to view via the GitHub UI.

Tests

See the documentation in tests/