The password stays a `None` value (NoneType) instead of the string "None" when no password is set through the webadmin interface.
This fixes connections to Redis servers that do not expect authorization from clients.
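A minimal sketch of the idea; the helper name and the exact sentinels handled are assumptions for illustration, not the actual hook code:

```python
def normalize_password(raw):
    # Hypothetical helper: keep an unset password as the NoneType value
    # None, never the string "None", so Redis servers that expect no
    # AUTH command from clients accept the connection.
    if raw is None or raw == "" or raw == "None":
        return None
    return raw
```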
To launch an instance of a Dataflow template in the configured region,
the API `service.projects().locations().templates()` has to be used instead of
`service.projects().templates()`. Otherwise, all jobs are always started
in us-central1.
If no region is configured, the default region `us-central1` is picked up.
To make things worse, the polling for the job status already honors the
region parameter and, in the current implementation, searches for the job
in the wrong region. Because the job's status is never found, the
corresponding Airflow task hangs.
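The fix can be sketched as below. `launch_request` is a hypothetical helper, but the `projects().locations().templates().launch(...)` chain matches the regional Dataflow templates endpoint described above:

```python
DEFAULT_DATAFLOW_LOCATION = "us-central1"

def launch_request(service, project_id, gcs_path, body, location=None):
    # Build the launch call against the regional endpoint so the job is
    # started in the configured region; fall back to the documented
    # default region when none is configured.
    location = location or DEFAULT_DATAFLOW_LOCATION
    return (
        service.projects()
        .locations()
        .templates()
        .launch(projectId=project_id, location=location,
                gcsPath=gcs_path, body=body)
    )
```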
Sometimes when you run tasks from the command line you get exit code 1 due
to a race condition: the job runner tries to get the process group of a
process that has already been terminated in the meantime.
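A sketch of tolerating that race (the helper name is hypothetical); the point is catching `ProcessLookupError` instead of letting it crash the runner:

```python
import os
import subprocess

def get_process_group(pid):
    # Tolerate the race where the child terminates (and is reaped)
    # between spawning it and asking for its process group.
    try:
        return os.getpgid(pid)
    except ProcessLookupError:
        return None  # process already gone; caller can skip cleanup

proc = subprocess.Popen(["sleep", "60"])
proc.kill()
proc.wait()  # reap the child so its pid no longer exists
print(get_process_group(proc.pid))  # None, instead of an exception
```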
* Better instructions for airflow flower
It is not clear in the documentation that you need to have flower installed to successfully run `airflow flower`. If you don't have flower installed, running `airflow flower` shows the following error, which is not of much help:
airflow flower
[2018-11-20 17:01:14,836] {__init__.py:51} INFO - Using executor SequentialExecutor
Traceback (most recent call last):
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/bin/airflow", line 32, in <module>
    args.func(args)
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/lib/python3.6/site-packages/airflow/utils/cli.py", line 74, in wrapper
    return f(*args, **kwargs)
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/lib/python3.6/site-packages/airflow/bin/cli.py", line 1221, in flower
    broka, address, port, api, flower_conf, url_prefix])
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/lib/python3.6/os.py", line 559, in execvp
    _execvpe(file, args)
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/lib/python3.6/os.py", line 604, in _execvpe
    raise last_exc.with_traceback(tb)
  File "/mnt/secondary/workspace/f4/typo-backend/pipelines/model-pipeline/airflow/lib/python3.6/os.py", line 594, in _execvpe
    exec_func(fullname, *argrest)
FileNotFoundError: [Errno 2] No such file or directory
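Installing flower into the same environment fixes this; a plain `pip install` is one way to do it (the exact installation method may vary per environment):

```shell
# Install the flower package alongside Airflow, then start the UI.
pip install flower
airflow flower
```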
* Update use-celery.rst
Creating a pod that exceeds a namespace's resource quota throws an
ApiException. This change catches the exception and the task is
re-queued inside the Executor instead of killing the scheduler.
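The shape of the change can be sketched as follows. The `ApiException` class here is a stand-in for the Kubernetes client's exception (to keep the sketch self-contained), and the function names are hypothetical:

```python
from queue import Queue

class ApiException(Exception):
    # Stand-in for kubernetes.client.rest.ApiException; the real fix
    # catches the client's exception with its HTTP status attribute.
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def create_pod(task):
    # Simulate the API server rejecting a pod that exceeds the
    # namespace's resource quota (HTTP 403).
    raise ApiException(status=403)

def run_next(task, task_queue):
    # Try to launch the pod; on a quota rejection, re-queue the task
    # instead of letting the exception kill the scheduler loop.
    try:
        create_pod(task)
        return "launched"
    except ApiException as e:
        if e.status == 403:
            task_queue.put(task)  # retry later
            return "requeued"
        raise

q = Queue()
print(run_next("example_task", q))  # requeued, scheduler keeps running
```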
The API of `wait()` changed to return a dict instead of just a number, so
this operator wasn't actually working, but the tests were passing because
the return value was mocked incorrectly.
I also removed `shm_size` from the kwargs passed to `BaseOperator` to avoid
the deprecation warning about unknown args.
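Normalizing the new return shape can be sketched as below; the helper name is hypothetical, but newer docker-py versions of `wait()` do return a dict with a `StatusCode` key, where older versions returned the integer directly:

```python
def exit_code_from_wait(result):
    # Accept both return shapes of the Docker client's wait():
    # newer docker-py returns {'StatusCode': 0, 'Error': None},
    # older versions returned the integer status code itself.
    if isinstance(result, dict):
        return result.get("StatusCode")
    return result
```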