sched/uclamp: Update CPU's refcount on TG's clamp changes

On updates of task group (TG) clamp values, ensure that these new values
are enforced on all RUNNABLE tasks of the task group, i.e. all RUNNABLE
tasks are immediately boosted and/or capped as requested.

Do that each time we update effective clamps from cpu_util_update_eff().
Use the cgroup_subsys_state (css) to walk the list of tasks in each
affected TG and update its RUNNABLE tasks.
Update each task using the same mechanism used for CPU affinity mask
updates, i.e. by taking the rq lock.
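
For reference, the resulting update path, condensed (function names as
in the diff below; the active check skips tasks not currently
refcounted in any clamp bucket):

  cpu_util_update_eff(css)
    uclamp_update_active_tasks(css, clamps)   /* iterate tasks via css    */
      uclamp_update_active(p, clamp_id)
        task_rq_lock(p, &rf)                  /* vs enqueue/dequeue/migration */
        if (p->uclamp[clamp_id].active)
          uclamp_rq_dec_id(rq, p, clamp_id)   /* drop old bucket refcount */
          uclamp_rq_inc_id(rq, p, clamp_id)   /* charge updated bucket    */
        task_rq_unlock(rq, p, &rf)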

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Michal Koutny <mkoutny@suse.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Alessio Balsini <balsini@android.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <quentin.perret@arm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Todd Kjos <tkjos@google.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lkml.kernel.org/r/20190822132811.31294-6-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

@@ -1043,6 +1043,54 @@ static inline void uclamp_rq_dec(struct rq *rq, struct task_struct *p)
 		uclamp_rq_dec_id(rq, p, clamp_id);
 }
 
+static inline void
+uclamp_update_active(struct task_struct *p, unsigned int clamp_id)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+
+	/*
+	 * Lock the task and the rq where the task is (or was) queued.
+	 *
+	 * We might lock the (previous) rq of a !RUNNABLE task, but that's the
+	 * price to pay to safely serialize util_{min,max} updates with
+	 * enqueues, dequeues and migration operations.
+	 * This is the same locking schema used by __set_cpus_allowed_ptr().
+	 */
+	rq = task_rq_lock(p, &rf);
+
+	/*
+	 * Setting the clamp bucket is serialized by task_rq_lock().
+	 * If the task is not yet RUNNABLE and its task_struct is not
+	 * affecting a valid clamp bucket, the next time it's enqueued,
+	 * it will already see the updated clamp bucket value.
+	 */
+	if (p->uclamp[clamp_id].active) {
+		uclamp_rq_dec_id(rq, p, clamp_id);
+		uclamp_rq_inc_id(rq, p, clamp_id);
+	}
+
+	task_rq_unlock(rq, p, &rf);
+}
+
+static inline void
+uclamp_update_active_tasks(struct cgroup_subsys_state *css,
+			   unsigned int clamps)
+{
+	struct css_task_iter it;
+	struct task_struct *p;
+	unsigned int clamp_id;
+
+	css_task_iter_start(css, 0, &it);
+	while ((p = css_task_iter_next(&it))) {
+		for_each_clamp_id(clamp_id) {
+			if ((0x1 << clamp_id) & clamps)
+				uclamp_update_active(p, clamp_id);
+		}
+	}
+	css_task_iter_end(&it);
+}
+
 #ifdef CONFIG_UCLAMP_TASK_GROUP
 static void cpu_util_update_eff(struct cgroup_subsys_state *css);
 static void uclamp_update_root_tg(void)
@@ -7160,8 +7208,13 @@ static void cpu_util_update_eff(struct cgroup_subsys_state *css)
 			uc_se[clamp_id].bucket_id = uclamp_bucket_id(eff[clamp_id]);
 			clamps |= (0x1 << clamp_id);
 		}
-		if (!clamps)
+		if (!clamps) {
+			css = css_rightmost_descendant(css);
 			continue;
+		}
+
+		/* Immediately update descendants RUNNABLE tasks */
+		uclamp_update_active_tasks(css, clamps);
 	}
 }
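
For illustration, a minimal userspace sketch of the visible effect. The
cgroup v2 mount point and the group path "tg1" are assumptions;
cpu.uclamp.max is the per-TG limit whose write path reaches
cpu_util_update_eff(). With this patch, RUNNABLE tasks of the group are
re-clamped at write time instead of at their next enqueue:

  /* Sketch: cap an assumed existing group "tg1" at 50% utilization. */
  #include <stdio.h>

  int main(void)
  {
  	FILE *f = fopen("/sys/fs/cgroup/tg1/cpu.uclamp.max", "w");

  	if (!f) {
  		perror("fopen");
  		return 1;
  	}

  	fprintf(f, "50\n");	/* percentage; writing "max" removes the cap */
  	fclose(f);

  	return 0;
  }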