* Better content of backport packages CHANGELOG and INSTALL files
The content of Backport Packages CHANGELOG.txt and INSTALL files
has been updated to reflect that those are not full Airflow
releases.
1) Source package:
- INSTALL contains only references to preparing backport packages
- CHANGELOG.txt contains combined change log of all the packages
2) Binary packages:
- No INSTALL
- CHANGELOG.txt contains changelog for this package only
3) Whl packages:
- No change
* Update backport_packages/INSTALL
* Move setup order check back to pre-commit
The order check used to run from pre-commit, but it was later moved
to a regular test case. That was a mistake.
The check is super-fast, and wrapping it in assertEquals was not very
useful - you only found out about an ordering problem late, when the
tests ran.
I changed it back to a plain python script, which makes it work from
pre-commit again (it did not work as a test because pre-commit does
not run tests - it runs python scripts).
The messages printed now are much more informative as well.
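For reference, a minimal sketch of what such a pre-commit script can look
like (the EXTRAS_REQUIREMENTS dict and the messages below are illustrative
stand-ins, not the real setup.py contents):

```python
#!/usr/bin/env python
# Minimal sketch of an "is this list alphabetically sorted?" pre-commit check.
# EXTRAS_REQUIREMENTS is a stand-in for the dict imported from setup.py.
import sys

EXTRAS_REQUIREMENTS = {"async": [], "aws": [], "celery": []}


def check_sorted(name, items):
    """Print an informative message and return False if items are not sorted."""
    expected = sorted(items)
    if items == expected:
        return True
    print(f"ERROR: {name} is not in alphabetical order.")
    for got, want in zip(items, expected):
        if got != want:
            print(f"  first mismatch: got {got!r}, expected {want!r}")
            break
    return False


if __name__ == "__main__":
    ok = check_sorted("EXTRAS_REQUIREMENTS keys", list(EXTRAS_REQUIREMENTS))
    sys.exit(0 if ok else 1)
```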
* All classes in backport providers are now importable in Airflow 1.10
* Push CI images to Docker package cache for v1-10 branches
This is done as a commit to master so that we can keep the two branches
in sync
Co-Authored-By: Ash Berlin-Taylor <ash_github@firemirror.com>
* Run Github Actions against v1-10-stable too
Co-authored-by: Ash Berlin-Taylor <ash_github@firemirror.com>
The scheduler_dag_execution_timing script wants to run _n_ dag runs to
completion. However, since the start date of those dags is dynamic (`now
- delta`), we can't pre-compute the execution_dates as we did before.
(This is because the execution_date of the very first dag run would be
`now()` of the parser process, but if we tried to pre-compute that in
the benchmark process it would see a different value of `now()`.)
This PR changes the script to instead watch for the first _n_ dag runs to
be completed. This should make it work with more dags, with fewer changes
to them.
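Roughly, the "watch for completion" loop can look like the sketch below.
This is an illustration using Airflow's DagRun model and a session from
airflow.settings, not the benchmark script itself; wait_for_n_runs and
poll_interval are made-up names:

```python
# Illustrative only: poll until the first `num_runs` dag runs of `dag_id`
# reach a terminal state, instead of pre-computing their execution_dates.
import time

from airflow import settings
from airflow.models import DagRun
from airflow.utils.state import State


def wait_for_n_runs(dag_id, num_runs, poll_interval=1.0):
    session = settings.Session()
    try:
        while True:
            finished = (
                session.query(DagRun)
                .filter(
                    DagRun.dag_id == dag_id,
                    DagRun.state.in_([State.SUCCESS, State.FAILED]),
                )
                .count()
            )
            if finished >= num_runs:
                return finished
            time.sleep(poll_interval)
    finally:
        session.close()
```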
All PRs will use the cached "latest good" version of the python
base images from our GitHub registry. The python versions in
the GitHub registry will only get updated after a master
build (which pulls the latest Python image from DockerHub) builds
and passes tests correctly.
This is to avoid the problems we had recently with Python
patchlevel releases breaking our Docker builds.
Currently there is no way to determine the state of a TaskInstance in the
graph view or tree view for people with colour blindness.
Approximately 4.5% of people experience some form of colour vision
deficiency.
Debian Buster only ships with a JDK 11, and Hive/Hadoop fails in odd,
hard-to-debug ways (it complains about the metastore not being initialized,
possibly related to the class loader issues).
Until we rip Hive out of the CI (replacing it with Hadoop in a separate
integration, only enabled for some builds) we'll have to stick with JRE 8.
Our previous approach of installing openjdk-8 from Sid/Unstable started
failing because Debian Sid now has a new (and conflicting) version of GCC/libc.
The adoptopenjdk package archive is designed for Buster, so it should be
more resilient.
Installing the JDK (not even the JRE) from Sid is starting to break on
Buster as the versions of packages conflict:
> The following packages have unmet dependencies:
> libgcc-8-dev : Depends: gcc-8-base (= 8.4.0-4) but 8.3.0-6 is to be installed
> Depends: libmpx2 (>= 8.4.0-4) but 8.3.0-6 is to be installed
This changes our CI docker images to:
1. Not install anything from Sid (unstable; packages change/get
updated) when we are using Buster (stable; only security fixes).
2. Install the JRE, not the JDK. We don't need to compile Java code.
* [AIRFLOW-6586] Improvements to gcs sensor
refactor GoogleCloudStorageUploadSessionCompleteSensor to use a set instead of the number of objects
add poke_mode_only decorator (see the sketch after this commit list)
assert that poke_mode_only is applied to a child of BaseSensorOperator
refactor tests
remove assert
fix static checks
add back inadvertently removed requirements
pre-commit
fix typo
* fix gcs sensor unit test
* move poke_mode_only to base_sensor_operator module
* add sensor / poke_mode_only docs
* fix ci check add sensor how-to docs
* Update airflow/providers/google/cloud/sensors/gcs.py
Co-authored-by: Tomek Urbaszek <turbaszek@gmail.com>
* Update airflow/sensors/base_sensor_operator.py
Co-authored-by: Tomek Urbaszek <turbaszek@gmail.com>
* Update airflow/sensors/base_sensor_operator.py
Co-authored-by: Kamil Breguła <mik-laj@users.noreply.github.com>
* simplify class decorator
* remove type hint
* add note to UPDATING.md
* remove unnecessary declaration of class member
* Fix to kwargs in UPDATING.md
Co-authored-by: Tomek Urbaszek <turbaszek@gmail.com>
Co-authored-by: Kamil Breguła <mik-laj@users.noreply.github.com>
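For context, the idea behind the poke_mode_only class decorator mentioned
above is roughly the following (a simplified sketch, not the exact code
that landed in base_sensor_operator.py):

```python
# Simplified sketch: a class decorator that pins a sensor's mode to 'poke'
# and refuses to be applied to anything that is not a BaseSensorOperator.
from airflow.sensors.base_sensor_operator import BaseSensorOperator


def poke_mode_only(cls):
    if not issubclass(cls, BaseSensorOperator):
        raise ValueError(
            f"poke_mode_only should only decorate subclasses of "
            f"BaseSensorOperator, got: {cls}."
        )

    def _get_mode(self):
        return "poke"

    def _set_mode(self, value):
        if value != "poke":
            raise ValueError(f"Cannot set mode of {self} to anything but 'poke'.")

    # Expose `mode` as a property so 'reschedule' can never be configured.
    cls.mode = property(_get_mode, _set_mode)
    return cls
```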
We noticed our Celery tests failing sometimes with
> (psycopg2.errors.UniqueViolation) duplicate key value violates unique
> constraint "pg_type_typname_nsp_index"
> DETAIL: Key (typname, typnamespace)=(celery_tasksetmeta, 2200) already exists
It appears this is a race condition in SQLAlchemy's `create_all()`
function, where it first checks which tables exist, builds up a list of
`CREATE TABLE` statements, and then issues them. Thus if two celery worker
processes start at the same time, they will both find that the table doesn't
yet exist, and both try to create it.
This is _probably_ a bug in SQLAlchemy, but it is easy enough to work around
here by ensuring that the table exists before launching any Celery tasks.
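A minimal sketch of the mitigation, using a stand-in table rather than
Celery's real result-backend models (the model and connection string are
illustrative assumptions, not what the fix actually touches):

```python
# Illustration of the workaround: create the result tables once, before any
# worker process is launched, so no two workers race inside create_all().
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class CeleryTaskSetMeta(Base):
    # Stand-in for the celery_tasksetmeta table named in the error above.
    __tablename__ = "celery_tasksetmeta"
    id = Column(Integer, primary_key=True)
    taskset_id = Column(String(155), unique=True)


engine = create_engine("postgresql+psycopg2://airflow:airflow@localhost/airflow")

# Doing this in the parent process closes the "check exists -> CREATE TABLE"
# window in which two concurrently starting workers can collide.
Base.metadata.create_all(engine)
```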
d.dag_id is not a valid attribute. In order to use the dag_id variable
in a closure callback, it needs to be passed in through a function so the
right value is captured for each iteration of the for loop.
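The underlying Python pitfall, illustrated with made-up names (this is not
the patched Airflow code):

```python
# Callbacks created in a loop all see the *last* value of dag_id unless the
# current value is bound explicitly, e.g. through a factory function.
callbacks = {}

for dag_id in ["dag_a", "dag_b"]:
    # Wrong: dag_id is looked up when the lambda runs, not when it is defined,
    # so every callback would report "dag_b".
    # callbacks[dag_id] = lambda: print(dag_id)

    # Right: pass dag_id into a function so each callback captures its own copy.
    def make_callback(captured_dag_id):
        return lambda: print(captured_dag_id)

    callbacks[dag_id] = make_callback(dag_id)

callbacks["dag_a"]()  # prints "dag_a"
```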
When you build from scratch and some transient requirements are broken,
the initial installation step might fail.
We now use the latest valid constraints from the DEFAULT_BRANCH
branch to avoid that.
After preparing the 2020.5.19 release candidate and
reviewing the packages, some changes turned out to be necessary.
Therefore the date was changed to 2020.5.20 with the following
fixes:
* cncf.kubernetes.example_dags were hard-coded and added to all
packages; they have been removed
* Version suffix is only used to rename the binary packages, not for
the version itself
* Release process description is updated with the release process
* Package version is consistent - leading 0s are skipped in month
and day