Relative and absolute imports are functionally equivalent; the only
practical difference is that relative imports are shorter.
However, they also make it less obvious what exactly is imported, and
harder to find such imports with simple tools (such as grep).
We have therefore decided that the Airflow house style is to use
absolute imports only.
Until pre-commit implements an export of all configured
checks, we need to keep the list updated manually.
We check both the pre-commit list in breeze-complete and the
descriptions in STATIC_CODE_CHECKS.rst
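Such a manual consistency check can be automated with a small script. The sketch below is illustrative: the helper names and the exact formats of breeze-complete and STATIC_CODE_CHECKS.rst are assumptions, not the repository's actual layout.

```python
# Sketch of a consistency check between the two manually maintained lists.
# The parsing rules below are assumptions for illustration; in the real
# repository the lists live in breeze-complete and STATIC_CODE_CHECKS.rst.
import re


def checks_from_breeze_complete(text: str) -> set:
    """Extract check ids from a plain listing with one id per line."""
    return {line.strip() for line in text.splitlines() if line.strip()}


def checks_from_rst(text: str) -> set:
    """Extract check ids that appear as ``check-id`` literals in the RST."""
    return set(re.findall(r"``\s*([a-z0-9-]+)\s*``", text))


def missing_descriptions(breeze_text: str, rst_text: str) -> set:
    """Return checks listed in breeze-complete but undocumented in the RST."""
    return checks_from_breeze_complete(breeze_text) - checks_from_rst(rst_text)
```

A pre-commit hook running such a script could fail the build whenever the two lists drift apart.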
After #10368, we've changed the way we build the images
on CI. We override the CI scripts that we use
to build the image with the scripts taken from master,
so that rogue PR authors do not get the possibility to run
something with the write credentials.
We should not override the in_container scripts, however,
because they become part of the image, so we use
the ones that came with the PR. That's why we had to move
the "in_container" scripts out of the "ci" folder and
only override the "ci" folder with the one from
master. We've made sure that the scripts in "ci"
are self-contained and do not need to reach outside of
that folder.
Also, the static checks are run with local files mounted
on CI, because we want to check all the files - not only
those that are embedded in the container.
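The override step boils down to swapping one directory while leaving its sibling alone. The sketch below illustrates this with hypothetical paths (`scripts/ci` and `scripts/in_container`); the actual CI implementation performs the equivalent with git/shell commands.

```python
# A minimal sketch of the override logic, assuming a layout with separate
# "ci" and "in_container" script folders (the paths are illustrative).
import shutil
from pathlib import Path


def override_ci_scripts(pr_checkout: Path, master_checkout: Path) -> None:
    """Replace the PR's "ci" scripts with the ones from master, while
    leaving the "in_container" scripts (which become part of the image)
    exactly as they came with the PR."""
    pr_ci = pr_checkout / "scripts" / "ci"
    shutil.rmtree(pr_ci)
    shutil.copytree(master_checkout / "scripts" / "ci", pr_ci)
    # "scripts/in_container" is intentionally left untouched: those files
    # are baked into the image, so the PR's versions must be used.
```

The key design point is the direction of trust: build-time scripts come from master (trusted), while image content comes from the PR (tested, but never run with write credentials).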
* CI images are now pre-built and stored in a registry
With this change we utilise the latest pull_request_target
event type from GitHub Actions, and we build the
CI image only once (per version) for the entire run.
This saves from 2 to 10 minutes per job (!) depending on
how much of the Docker image needs to be rebuilt.
It works in such a way that the image is built only in the
build-or-wait step. In the case of direct push or
scheduled runs, the build-or-wait step builds the CI image
and pushes it to the GitHub registry. In the case of
pull_request runs, the build-or-wait step waits until the
separate build-ci-image.yml workflow builds and pushes
the image, and it only moves forward once the image
is ready.
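The "wait" half of that step is essentially a polling loop against the registry. The sketch below is an assumption about how such a loop could look; `image_exists` is a hypothetical stand-in for a real registry query, and the timings are illustrative.

```python
# A sketch of the "wait" half of the build-or-wait step: poll the registry
# until the image for this commit appears, then proceed. `image_exists` is
# a stand-in for a real registry query; names and timings are illustrative.
import time


def wait_for_image(image_exists, commit_sha: str,
                   poll_seconds: float = 60.0, max_polls: int = 120) -> bool:
    """Return True once the CI image tagged with commit_sha is available,
    or False if it does not appear within the allotted polls."""
    for _ in range(max_polls):
        if image_exists(commit_sha):
            return True
        time.sleep(poll_seconds)
    return False
```

With defaults like these, a pull_request job would wait up to two hours for the image before giving up, checking once a minute.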
This has numerous advantages:
1) Each job that requires the CI image is much faster, because
instead of pulling + rebuilding the image it only pulls
the image that was built once. This saves around 2 minutes
per job in regular builds, but in the case of Python patch-level
updates or new requirements it can save up to 10
minutes per job (!)
2) While the images are being rebuilt we only block one job waiting
for all the images. The tests start running in parallel
only when all images are ready, so we are not blocking
other runs from running.
3) The whole run uses THE SAME image. Previously we could have some
variation, because the images were built at different times,
and releases of dependencies in between several
jobs could make different jobs in the same run use slightly
different images. This does not happen any more.
4) When we push an image to GitHub or DockerHub, we push the
very same image that was built and tested. Previously the
pushed image could be slightly different from the
one that was used for testing (for the same reason).
5) The same applies to the production images. We now build
and push consistently the same images across the board.
6) Documentation building is split into two parallel jobs - docs
building and spell checking - which decreases the elapsed time
of the docs build.
7) Last but not least - we keep the history of all the images
- those images contain the SHA of the commit. This means
that we can simply download and run an image locally to reproduce
any problem that anyone had in their PR (!). This is super useful
when helping others to debug their problems.
* Pylint checks should be way faster now
Instead of running separate pylint checks for tests and the main
sources, we now run a single check. This is possible thanks to a
nice hack - we have a pylint plugin that injects the right
"# pylint: disable=" comment for all test files while the file
content is read by astroid (just before tokenization).
Thanks to that we can also separate the pylint checks
into a dedicated CI job - this way all pylint checks
run in parallel with all other checks, effectively halving
the time needed to get static-check feedback and potentially
cancelling other jobs much faster.
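The content-injection trick can be reduced to a pure function over the file text. The sketch below is an assumption about what that transformation could look like; the exact set of disabled messages and the test-file detection rule are hypothetical, and the real plugin wires such a function into astroid's file-reading step.

```python
# A sketch of the "inject a disable comment before tokenization" trick.
# The disabled message list and the test-path detection below are
# assumptions for illustration; the real plugin hooks this kind of
# transformation into astroid's file reading.
TESTS_DISABLES = "# pylint: disable=missing-docstring\n"


def transform_file_content(path: str, content: str) -> str:
    """Prepend a file-level disable comment to test files only, so a
    single pylint run can cover tests and main sources while applying
    relaxed rules to the tests."""
    if path.startswith("tests/") or "/tests/" in path:
        return TESTS_DISABLES + content
    return content
```

Because the comment is injected before pylint tokenizes the file, the test files on disk stay clean - no per-file boilerplate needs to be committed.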
It's fairly common to use the terms 'whitelisting' and 'blacklisting'
to describe desirable and undesirable things in cyber security.
However, just because it is common doesn't mean it's right.
There's an issue with the terminology: it only makes sense if
you equate white with 'good, permitted, safe' and black with 'bad,
dangerous, forbidden'. There are some obvious problems with this.
You may not see why this matters. If you're not adversely affected by
racial stereotyping yourself, then please count yourself lucky. For some
of your friends and colleagues (and potential future colleagues), this
really is a change worth making.
From now on, we will use 'allow list' and 'deny list' in place of
'whitelist' and 'blacklist' wherever possible, which is in fact
clearer and less ambiguous. So as well as being more inclusive of all,
this is a net benefit to our understandability.
(Words mostly borrowed from
<https://www.ncsc.gov.uk/blog-post/terminology-its-not-black-and-white>)
Co-authored-by: Jarek Potiuk <jarek@potiuk.com>
This change introduces sub-commands in the breeze tool.
They are much needed, as we now have many commands
and it was difficult to separate commands from flags.
Also, the --help output was very long and unreadable.
With this change it is much easier to discover
what breeze can do for you, as well as navigate it.
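The benefit of sub-commands over a flat flag list can be illustrated with argparse sub-parsers. This is only a sketch of the design idea, not breeze's actual implementation (breeze is a shell script), and the sub-command names and flags below are hypothetical.

```python
# Illustrative sketch of the sub-command design: each sub-command owns
# its flags and gets its own focused --help, instead of one long,
# unreadable flag list. Names and options here are hypothetical.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="breeze")
    subparsers = parser.add_subparsers(dest="command")
    # "breeze shell --help" now only documents shell-related flags.
    shell = subparsers.add_parser("shell", help="enter the container shell")
    shell.add_argument("--python", default="3.6", help="Python version to use")
    # "breeze build-image --help" only documents image-building flags.
    build = subparsers.add_parser("build-image", help="build the CI image")
    build.add_argument("--force-pull", action="store_true")
    return parser
```

Grouping flags under the sub-command they belong to is what makes the tool discoverable: the top-level --help lists only the commands, and each command's --help stays short.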
Co-authored-by: Jarek Potiuk <jarek@potiuk.com>
Co-authored-by: Kamil Breguła <mik-laj@users.noreply.github.com>