The Kubernetes tests are now run using the Helm Chart
rather than the custom templates we used to have.
The Helm Chart uses the locally built production image,
so the tests exercise not only Airflow but also the
Helm Chart and the production image - all at the
same time. Later on we will add more tests covering
more functionality of both the Helm Chart and the
production image. This is the first step to get all
of those bundled together and testable.
This change also introduces a 'shell' sub-command
for Breeze's kind-cluster command and an
EMBEDDED_DAGS build arg for the production image -
both of them make it easier to run the Kubernetes
tests: there is no need to build two images, and the
shell command makes it easy to iterate over the tests
without any other development environment.
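As a rough sketch of how the two pieces might be used together
(the DAGs path and image tag are placeholders, and the exact
spelling of the sub-command is assumed from the description above):

    # Bake local DAGs into the production image via the EMBEDDED_DAGS
    # build arg (the path is just an example).
    docker build . --build-arg EMBEDDED_DAGS="dags/" -t airflow-prod:dev

    # Open an iteration-friendly shell for the Kubernetes tests.
    ./breeze kind-cluster shell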
Co-authored-by: Jarek Potiuk <jarek@potiuk.com>
Co-authored-by: Daniel Imberman <daniel@astronomer.io>
For a long time the way the entrypoint worked in CI scripts
was wrong. It was convoluted and little short of black
magic: it did not allow passing multiple test targets and
required separate execute-command scripts in Breeze.
This is all now straightened out: both the production and
CI images use the right entrypoint by default,
and we can simply pass parameters to the image as usual,
without escaping strings.
This also allowed removing some Breeze commands and
renaming several Breeze flags to make them more
meaningful.
Both the CI and PROD images now have embedded scripts for log
cleaning.
A history of image releases is added for the 1.10.10-*
alpha-quality images.
Tests requiring a Kubernetes cluster are moved out of
the regular CI tests and into the "kubernetes_tests" folder,
so that they can be run entirely on the host without
building the CI image at all. They use the production
image to run the tests on a KinD cluster, and we add
tooling to start/stop/deploy the application to the KinD
cluster automatically - for both CI testing and local
development.
This is a prerequisite for converting the tests to use
the official Helm Chart and Docker images of
Apache Airflow.
It closes #8782
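A minimal sketch of the intended host-side workflow (the start and
deploy sub-commands follow the tooling described above; their exact
spellings are assumptions):

    # Start a KinD cluster and deploy Airflow to it with the new tooling.
    ./breeze kind-cluster start
    ./breeze kind-cluster deploy

    # Run the Kubernetes tests straight from the host - no CI image needed.
    pytest kubernetes_tests/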
We now have a mechanism to keep release notes for the backport
operators updated in an automated way.
It nicely generates all the necessary information:
* a summary of requirements for each backport package
* a list of dependencies (including the extras that install them)
  when a package depends on other provider packages
* a table of new hooks/operators/sensors/protocols/secrets
* a table of moved hooks/operators/sensors/protocols/secrets with
  information about where they were moved from
* a changelog of all the changes to the provider package (this will be
  updated automatically with an incremental changelog whenever we
  decide to release separate packages)
The system is fully automated - we will be able to produce release notes
automatically (per package) whenever we decide to release a new version
of a package in the future.
The CRON job did not have everything working after the
emergency migration to GitHub Actions.
This change brings back the following improvements:
* rebuilding images from scratch in the CRON job
* automatically upgrading all requirements to test the newest versions
* pushing production images to GitHub Packages as a cache
* pushing the nightly tag to GitHub
Originally Breeze was used to run unit and integration tests, then
system tests, and now we make it a bit friendlier for testing your
DAGs there. You can install any older Airflow version in Breeze via
the --install-airflow-version switch, and the "files/dags" folder is
mounted to "/files/dags" inside the container, which is where the
DAGs are read from.
This change introduces sub-commands in the Breeze tool.
This is much needed, as we have many commands now
and it was difficult to separate commands from flags.
The --help output was also very long and unreadable.
With this change it is much easier to discover
what Breeze can do for you and to navigate it.
Co-authored-by: Jarek Potiuk <jarek@potiuk.com>
Co-authored-by: Kamil Breguła <mik-laj@users.noreply.github.com>
We will run system tests on back-ported operators for the 1.10.* series
of Airflow, and for that we need support for running system tests via
pytest markers and for reading environment variables passed from the
HOST machine (to pass credentials).
This is the first step towards automating system test execution.
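A sketch of what running such a test from the host might look like
(the marker name follows the description above; the credentials
variable is purely hypothetical):

    # Pass credentials from the HOST environment - the variable name
    # here is only an example.
    export MY_SERVICE_CREDENTIALS=/files/keys/service-account.json

    # Select the system tests with a standard pytest marker expression.
    pytest -m system tests/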
* Add DAG return type
* Add the airflow dags test CLI command for running a complete DAG
  from the command line (see the sketch below)
* Add a note about the new command to updating.md
* Add an airflow dags test example to testing.rst
* Fix an rst inline-code mistake
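For example, with one of the example DAGs shipped with Airflow (the
DAG id and execution date below are placeholders):

    # Run a complete DAG for a single execution date, entirely from the CLI.
    airflow dags test example_bash_operator 2020-05-01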
You can now choose which integrations you want to start when you run
Breeze, as well as when CI tests are run. By default Breeze
and CI run without integrations, but you can add them via
Breeze flags or via environment variables when CI is executed.
We now have pytest markers that mark tests to be run for the
selected integration, backend and runtime.
We also have more test jobs - a separate test run
for all non-integration tests (using less memory because the
integrations are not started) and separate jobs that run integration
tests only (more memory used for the integrations but far fewer
tests to run).
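A sketch of how enabling an integration and selecting its tests might
look (the flag name and the redis integration are assumptions; the
marker selection itself is standard pytest):

    # Start Breeze with one integration enabled.
    ./breeze --integration redis

    # Inside the container, run only the tests marked as integration tests.
    pytest -m integration tests/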
We have far too much Bash code around that is not automatically tested.
This is the first step towards changing that (simplifications and more
tests are coming soon).
Adds a new executor that is meant to be used mainly
for debugging and DAG development purposes. This
executor executes a single task instance at a time and
is able to work with SQLite and sensors.
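Assuming the executor is exposed as DebugExecutor (the class name is
not stated here), switching to it for local development could look
roughly like this:

    # Use the single-task-at-a-time executor; a SQLite metadata DB is
    # enough because tasks never run in parallel (paths are examples).
    export AIRFLOW__CORE__EXECUTOR=DebugExecutor
    export AIRFLOW__CORE__SQL_ALCHEMY_CONN=sqlite:////tmp/airflow.db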