# jsonschema-transpiler


A tool for transpiling JSON Schema into schemas for Avro and BigQuery.

JSON Schema is primarily used to validate incoming data, but contains enough information to describe the structure of the data. The transpiler encodes the schema for use with data serialization and processing frameworks. The main use-case is to enable ingestion of JSON documents into BigQuery through an Avro intermediary.

This tool can handle many of the composite types seen in modern data processing tools that support a SQL interface, such as lists, structures, key-value maps, and type variants.
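
As an illustration of the list case, a JSON Schema array maps to a `REPEATED` column in BigQuery. The session below is a sketch (the schema is made up for demonstration, and the exact output is illustrative, following the style of the usage examples further down):

```bash
# A list of integers becomes a REPEATED INT64 column (illustrative output)
$ echo '{"type": "object", "properties": {"counts": {"type": "array", "items": {"type": "integer"}}}}' \
    | jsonschema-transpiler --type bigquery
[
  {
    "mode": "REPEATED",
    "name": "counts",
    "type": "INT64"
  }
]
```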

This tool is designed for generating new schemas from mozilla-pipeline-schemas, the canonical source of truth for JSON schemas in the Firefox Data Platform.

## Installation

```bash
cargo install jsonschema-transpiler
```

## Usage

```
A tool to transpile JSON Schema into schemas for data processing

USAGE:
    jsonschema-transpiler [FLAGS] [OPTIONS] [file]

FLAGS:
    -w, --allow-maps-without-value    Produces maps without a value field for incompatible or under-specified value
                                      schema
    -n, --force-nullable              Treats all columns as NULLABLE, ignoring the required section in the JSON Schema
                                      object
    -h, --help                        Prints help information
    -c, --normalize-case              snake_case column-names for consistent behavior between SQL engines
        --tuple-struct                Treats tuple validation as an anonymous struct
    -V, --version                     Prints version information

OPTIONS:
    -r, --resolve <resolve>    The resolution strategy for incompatible or under-specified schema [default: cast]
                               [possible values: cast, panic, drop]
    -t, --type <type>          The output schema format [default: avro]  [possible values: avro, bigquery]

ARGS:
    <file>    Sets the input file to use
```

JSON Schemas can be read from stdin or from a file.
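
For example, both of the following invocations are equivalent (`schema.json` here is a placeholder for any schema document):

```bash
# read from a file argument
$ jsonschema-transpiler --type bigquery schema.json

# or pipe the document over stdin
$ cat schema.json | jsonschema-transpiler --type bigquery
```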

## Example usage

```bash
# An object with a single, optional boolean field
$ schema='{"type": "object", "properties": {"foo": {"type": "boolean"}}}'

$ echo $schema | jq
{
  "type": "object",
  "properties": {
    "foo": {
      "type": "boolean"
    }
  }
}

$ echo $schema | jsonschema-transpiler --type avro
{
  "fields": [
    {
      "default": null,
      "name": "foo",
      "type": [
        {
          "type": "null"
        },
        {
          "type": "boolean"
        }
      ]
    }
  ],
  "name": "root",
  "type": "record"
}

$ echo $schema | jsonschema-transpiler --type bigquery
[
  {
    "mode": "NULLABLE",
    "name": "foo",
    "type": "BOOL"
  }
]
```
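
Flags compose with the output type. As a sketch of `--normalize-case`, which snake_cases column names, the session below uses a made-up schema, and the exact output is illustrative:

```bash
$ echo '{"type": "object", "properties": {"fooBar": {"type": "string"}}}' \
    | jsonschema-transpiler --type bigquery --normalize-case
[
  {
    "mode": "NULLABLE",
    "name": "foo_bar",
    "type": "STRING"
  }
]
```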

## Building

To build and test the package:

```bash
cargo build
cargo test
```

Older versions of the package (<= 1.9) relied on oniguruma for snake-casing logic. To enable the use of this module, add a feature flag:

```bash
cargo test --features oniguruma
```

## Contributing

Contributions are welcome. The API may change significantly, but the transformation between the various source formats should remain consistent. To aid in the development of the transpiler, test cases are generated from a language-agnostic format under `tests/resources`.

```json
{
    "name": "test-suite",
    "tests": [
        {
            "name": "test-case",
            "description": [
                "A short description of the test case."
            ],
            "tests": {
                "avro": {...},
                "bigquery": {...},
                "json": {...}
            }
        },
        ...
    ]
}
```
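
A filled-in test case for the optional boolean example above might look like the following. The name and description here are hypothetical, and the exact payload shapes may differ from the repository's fixtures; the `avro` and `bigquery` bodies are the outputs shown in the usage examples:

```json
{
    "name": "optional-boolean",
    "description": [
        "An object with a single, optional boolean field."
    ],
    "tests": {
        "avro": {
            "fields": [
                {
                    "default": null,
                    "name": "foo",
                    "type": [
                        {"type": "null"},
                        {"type": "boolean"}
                    ]
                }
            ],
            "name": "root",
            "type": "record"
        },
        "bigquery": [
            {"mode": "NULLABLE", "name": "foo", "type": "BOOL"}
        ],
        "json": {
            "type": "object",
            "properties": {
                "foo": {"type": "boolean"}
            }
        }
    }
}
```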

Schemas provide a type system for data structures. Most schema languages support a similar set of primitives. There are atomic data types such as booleans, integers, and floats. These atomic data types can form compound units of structure, such as objects, arrays, and maps. The absence of a value is usually denoted by a null type. Finally, there are type modifiers, such as the union of two types.
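
For example, a union that includes the null type typically surfaces as nullability in the output schema rather than as a distinct type. The session below is a sketch with a made-up schema, and the exact output is illustrative:

```bash
$ echo '{"type": "object", "properties": {"count": {"type": ["integer", "null"]}}}' \
    | jsonschema-transpiler --type bigquery
[
  {
    "mode": "NULLABLE",
    "name": "count",
    "type": "INT64"
  }
]
```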

The following schemas are currently supported:

- JSON Schema
- Avro
- BigQuery

In the future, it may be possible to support schemas from similar systems such as Parquet and Spark, or to target interface description languages (IDLs) such as Avro IDL.

## Publishing

The jsonschema-transpiler is distributed as a crate via Cargo. Follow this checklist for deploying to crates.io.

  1. Bump the version number in `Cargo.toml`, as per Semantic Versioning.
  2. Double-check that `cargo test` and CI succeed.
  3. Run `cargo publish`. It must be run with the `--no-verify` flag due to issue #59 (see the command below).
  4. Draft a new release in GitHub corresponding with the version bump.
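
Concretely, step 3 is:

```bash
cargo publish --no-verify
```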