workqueue: clear POOL_DISASSOCIATED in rebind_workers()

a9ab775bca ("workqueue: directly restore CPU affinity of workers
from CPU_ONLINE") moved pool locking into rebind_workers() but left
"pool->flags &= ~POOL_DISASSOCIATED" in workqueue_cpu_up_callback().

There is nothing necessarily wrong with it, but there is no benefit
either.  Let's move it into rebind_workers() and achieve the following
benefits:

  1) better readability: POOL_DISASSOCIATED is cleared in rebind_workers(),
     where readers expect it.

  2) we can guarantee that, when POOL_DISASSOCIATED is clear, the
     running workers of the pool are on the local CPU (pool->cpu); a
     rough sketch of this ordering follows the list below.
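
To make point 2 concrete, here is a minimal userspace analogue, not the
kernel code: the pool/worker structures, the flag bits and the pthread lock
are hypothetical stand-ins, and the affinity restoration is elided.  It only
illustrates the ordering the patch establishes: affinity is restored first,
and POOL_DISASSOCIATED is cleared inside rebind_workers(), under the same
pool lock that covers the per-worker flag updates (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

#define POOL_DISASSOCIATED	(1 << 0)	/* stand-in: pool not bound to its CPU */
#define WORKER_UNBOUND		(1 << 1)	/* stand-in: worker may run anywhere   */
#define NR_WORKERS		4

struct worker { unsigned int flags; };

struct worker_pool {
	pthread_mutex_t	lock;
	unsigned int	flags;
	int		cpu;
	struct worker	workers[NR_WORKERS];
};

/*
 * Analogue of the patched rebind_workers(): affinity restoration (elided)
 * happens first, then the pool flag and every worker flag are updated in a
 * single critical section.  Anyone who later looks under pool->lock and
 * sees POOL_DISASSOCIATED clear also sees all workers already rebound.
 */
static void rebind_workers(struct worker_pool *pool)
{
	/* ... restore each worker's CPU affinity to pool->cpu here ... */

	pthread_mutex_lock(&pool->lock);
	pool->flags &= ~POOL_DISASSOCIATED;
	for (int i = 0; i < NR_WORKERS; i++)
		pool->workers[i].flags &= ~WORKER_UNBOUND;
	pthread_mutex_unlock(&pool->lock);
}

int main(void)
{
	struct worker_pool pool = {
		.lock	= PTHREAD_MUTEX_INITIALIZER,
		.flags	= POOL_DISASSOCIATED,
		.cpu	= 0,
	};

	for (int i = 0; i < NR_WORKERS; i++)
		pool.workers[i].flags = WORKER_UNBOUND;

	rebind_workers(&pool);
	printf("disassociated=%d\n", !!(pool.flags & POOL_DISASSOCIATED));
	return 0;
}

With this ordering, any path that checks the flag under the pool lock and
finds it clear can rely on the workers already being bound to pool->cpu.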

tj: Minor description update.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Lai Jiangshan authored on 2014-06-03 15:33:27 +08:00; committed by Tejun Heo
Parent: 92b69f5091
Commit: 3de5e88485
1 changed file with 1 addition and 4 deletions


@@ -4535,6 +4535,7 @@ static void rebind_workers(struct worker_pool *pool)
 						  pool->attrs->cpumask) < 0);
 
 	spin_lock_irq(&pool->lock);
+	pool->flags &= ~POOL_DISASSOCIATED;
 
 	for_each_pool_worker(worker, pool) {
 		unsigned int worker_flags = worker->flags;
@@ -4637,10 +4638,6 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 		mutex_lock(&pool->attach_mutex);
 
 		if (pool->cpu == cpu) {
-			spin_lock_irq(&pool->lock);
-			pool->flags &= ~POOL_DISASSOCIATED;
-			spin_unlock_irq(&pool->lock);
-
 			rebind_workers(pool);
 		} else if (pool->cpu < 0) {
 			restore_unbound_workers_cpumask(pool, cpu);