[AIRFLOW-XXX] Update the UPDATING.md file for 1.10.2

Kaxil Naik 2019-01-23 01:03:45 +00:00
Parent 634feb784e
Commit 0b1e453033
1 changed file with 19 additions and 17 deletions


@@ -90,13 +90,6 @@ If the `AIRFLOW_CONFIG` environment variable was not set and the
will discover its config file using the `$AIRFLOW_CONFIG` and `$AIRFLOW_HOME`
environment variables rather than checking for the presence of a file.
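A minimal sketch (the paths are illustrative assumptions) of pointing Airflow at an explicit config file through these environment variables, rather than relying on a file existing at the default location:
```
import os

# Set the variables before importing anything from airflow,
# since the configuration is resolved at import time.
os.environ["AIRFLOW_HOME"] = "/opt/airflow"
os.environ["AIRFLOW_CONFIG"] = "/opt/airflow/airflow.cfg"

from airflow import configuration

# Confirms which settings were picked up from the discovered config file.
print(configuration.conf.get("core", "dags_folder"))
```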
### Modification to `ts_nodash` macro
`ts_nodash` previously contained timezone information along with the execution date, for example: `20150101T000000+0000`. This is not user-friendly for file or folder names, which is a popular use case for `ts_nodash`. This behavior has therefore been changed: `ts_nodash` no longer contains timezone information, restoring the pre-1.10 behavior of this macro, and a new macro `ts_nodash_with_tz` has been added that returns the execution date with timezone information and without dashes.
Examples:
* `ts_nodash`: `20150101T000000`
* `ts_nodash_with_tz`: `20150101T000000+0000`
### New `dag_processor_manager_log_location` config option
The DAG parsing manager log is now written to a file by default, and its location is
@@ -107,10 +100,6 @@ controlled by the new `dag_processor_manager_log_location` config option in core
The new `sync_parallelism` config option will control how many processes CeleryExecutor will use to
fetch Celery task state in parallel. The default value is `max(1, number of cores - 1)`.
### Semantics of next_ds/prev_ds changed for manually triggered runs
`next_ds`/`prev_ds` now map to `execution_date` instead of the next/previous schedule-aligned execution date for DAGs triggered in the UI.
### Rename of BashTaskRunner to StandardTaskRunner
BashTaskRunner has been renamed to StandardTaskRunner. It is the default task runner
@@ -163,6 +152,19 @@ To remove a user from a role:
airflow users --remove-role --username jondoe --role Public
```
## Airflow 1.10.2
### Modification to `ts_nodash` macro
`ts_nodash` previously contained timezone information along with the execution date, for example: `20150101T000000+0000`. This is not user-friendly for file or folder names, which is a popular use case for `ts_nodash`. This behavior has therefore been changed: `ts_nodash` no longer contains timezone information, restoring the pre-1.10 behavior of this macro, and a new macro `ts_nodash_with_tz` has been added that returns the execution date with timezone information and without dashes.
Examples:
* `ts_nodash`: `20150101T000000`
* `ts_nodash_with_tz`: `20150101T000000+0000`
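As a minimal usage sketch (assuming an existing `dag` object; the task id, paths, and env variable name are illustrative), the two macros could be used in templated fields like this:
```
from airflow.operators.bash_operator import BashOperator

export_task = BashOperator(
    task_id="export_partition",
    # ts_nodash is now timezone-free, so it is safe for file and folder names,
    # e.g. /data/exports/20150101T000000.csv
    bash_command="cp /data/source.csv /data/exports/{{ ts_nodash }}.csv",
    # ts_nodash_with_tz keeps the timezone-suffixed form, e.g. 20150101T000000+0000
    # (env is a templated field of BashOperator)
    env={"RUN_STAMP_WITH_TZ": "{{ ts_nodash_with_tz }}"},
    dag=dag,
)
```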
### Semantics of next_ds/prev_ds changed for manually triggered runs
`next_ds`/`prev_ds` now map to `execution_date` instead of the next/previous schedule-aligned execution date for DAGs triggered in the UI.
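A small sketch (again assuming an existing `dag` object; the task is illustrative) of where the change is visible: for a run triggered manually from the UI, all three values below render to the same date, whereas a scheduled run keeps the schedule-aligned neighbours.
```
from airflow.operators.bash_operator import BashOperator

window_report = BashOperator(
    task_id="window_report",
    # For a manually triggered run, prev_ds and next_ds now both equal ds.
    bash_command="echo prev={{ prev_ds }} current={{ ds }} next={{ next_ds }}",
    dag=dag,
)
```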
### User model changes
This patch changes the `User.superuser` field from a hardcoded boolean to a `Boolean()` database column. `User.superuser` will default to `False`, which means that this privilege will have to be granted manually to any users that may require it.
@@ -181,12 +183,6 @@ session.add(admin)
session.commit()
```
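Only the tail of the documented snippet is visible in this hunk; a self-contained sketch of granting the flag (the `admin` username and the ORM query are illustrative assumptions) might look like:
```
from airflow import models, settings

session = settings.Session()
admin = (
    session.query(models.User)
    .filter(models.User.username == "admin")
    .first()
)
admin.superuser = True  # the column now defaults to False
session.add(admin)
session.commit()
```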
## Airflow 1.10.1
### StatsD Metrics
The `scheduler_heartbeat` metric has been changed from a gauge to a counter. Each loop of the scheduler will increment the counter by 1. This provides a higher degree of visibility and allows for better integration with Prometheus using the [StatsD Exporter](https://github.com/prometheus/statsd_exporter). The scheduler's activity status can be determined by graphing and alerting using a rate of change of the counter. If the scheduler goes down, the rate will drop to 0.
### Custom auth backends interface change
We have updated the version of flask-login we depend upon, and as a result any
@@ -203,6 +199,12 @@ then you need to change it like this
@property
def is_active(self):
    return self.active
## Airflow 1.10.1
### StatsD Metrics
The `scheduler_heartbeat` metric has been changed from a gauge to a counter. Each loop of the scheduler will increment the counter by 1. This provides a higher degree of visibility and allows for better integration with Prometheus using the [StatsD Exporter](https://github.com/prometheus/statsd_exporter). The scheduler's activity status can be determined by graphing and alerting using a rate of change of the counter. If the scheduler goes down, the rate will drop to 0.
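As an illustration of the difference (not Airflow's internal code; it uses the `statsd` client that Airflow's metrics layer wraps, and the host/port are assumptions):
```
from statsd import StatsClient

stats = StatsClient(host="localhost", port=8125)

# Before: scheduler_heartbeat was emitted as a gauge, simply set on each loop.
stats.gauge("scheduler_heartbeat", 1)

# Now: it is a counter incremented once per scheduler loop, so a rate-of-change
# query (e.g. in Prometheus via the StatsD Exporter) dropping to 0 signals an outage.
stats.incr("scheduler_heartbeat", 1)
```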
### EMRHook now passes all of connection's extra to CreateJobFlow API
EMRHook.create_job_flow has been changed to pass all keys to the create_job_flow API, rather than