Commit f5bfb7d9 authored by Peter Zijlstra, committed by Ingo Molnar

sched: bias effective_load() error towards failing wake_affine().

Measurement shows that the difference between cgroup:/ and cgroup:/foo
wake_affine() results is that the latter succeeds significantly more.

Therefore bias the calculations towards failing the test.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent f1d239f7
@@ -1074,6 +1074,27 @@ static inline int wake_idle(int cpu, struct task_struct *p)
 static const struct sched_class fair_sched_class;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * effective_load() calculates the load change as seen from the root_task_group
+ *
+ * Adding load to a group doesn't make a group heavier, but can cause movement
+ * of group shares between cpus. Assuming the shares were perfectly aligned one
+ * can calculate the shift in shares.
+ *
+ * The problem is that perfectly aligning the shares is rather expensive, hence
+ * we try to avoid doing that too often - see update_shares(), which ratelimits
+ * this change.
+ *
+ * We compensate this by not only taking the current delta into account, but
+ * also considering the delta between when the shares were last adjusted and
+ * now.
+ *
+ * We still saw a performance dip; some tracing taught us that the number
+ * of affine wakeups increased significantly when balancing cgroup:/foo
+ * compared to cgroup:/. Therefore try to bias the error in the direction
+ * of failing the affine wakeup.
+ */
 static long effective_load(struct task_group *tg, int cpu,
 		long wl, long wg)
 {
@@ -1083,6 +1104,13 @@ static long effective_load(struct task_group *tg, int cpu,
 	if (!tg->parent)
 		return wl;
 
+	/*
+	 * By not taking the decrease of shares on the other cpu into
+	 * account our error leans towards reducing the affine wakeups.
+	 */
+	if (!wl && sched_feat(ASYM_EFF_LOAD))
+		return wl;
+
 	/*
 	 * Instead of using this increment, also add the difference
 	 * between when the shares were last updated and now.
...
@@ -10,3 +10,4 @@ SCHED_FEAT(DOUBLE_TICK, 0)
 SCHED_FEAT(ASYM_GRAN, 1)
 SCHED_FEAT(LB_BIAS, 0)
 SCHED_FEAT(LB_WAKEUP_UPDATE, 1)
+SCHED_FEAT(ASYM_EFF_LOAD, 1)