Git Repo - linux.git/commitdiff
sched/fair: Fix wake_affine_llc() balancing rules
author	Peter Zijlstra <[email protected]>
	Wed, 6 Sep 2017 10:51:31 +0000 (12:51 +0200)
committer	Ingo Molnar <[email protected]>
	Thu, 7 Sep 2017 07:29:31 +0000 (09:29 +0200)
Chris Wilson reported that the SMT balance rules got the +1 on the
wrong side, resulting in a bias towards the current LLC, which the
load-balancer would then try to undo.

Reported-by: Chris Wilson <[email protected]>
Tested-by: Chris Wilson <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Fixes: 90001d67be2f ("sched/fair: Fix wake_affine() for !NUMA_BALANCING")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
kernel/sched/fair.c

index 8d5868771cb307c8dbfe5f2b08e6e16b1018e18b..9dd2ce1e5ca2aa192b669f9e7551404f1fff28aa 100644 (file)
@@ -5435,7 +5435,7 @@ wake_affine_llc(struct sched_domain *sd, struct task_struct *p,
                return false;
 
        /* if this cache has capacity, come here */
-       if (this_stats.has_capacity && this_stats.nr_running < prev_stats.nr_running+1)
+       if (this_stats.has_capacity && this_stats.nr_running+1 < prev_stats.nr_running)
                return true;
 
        /*