lib/percpu_counter.c: fix __percpu_counter_add()
author Ming Lei <[email protected]>
Wed, 15 Jan 2014 01:56:42 +0000 (17:56 -0800)
committer Linus Torvalds <[email protected]>
Wed, 15 Jan 2014 07:19:42 +0000 (23:19 -0800)
__percpu_counter_add() may be called from softirq/hardirq handlers (for
example, blk_mq_queue_exit() is typically called from hardirq/softirq
context), so we need to use this_cpu_add() (an irq-safe helper) to update
the percpu counter; otherwise counts may be lost.
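
To make the race concrete, here is an annotated sketch of the pre-patch
fast path, built from the removed lines in the hunk below (the example
values and the interleaving comments are illustrative, not verbatim
kernel source):

	/* Pre-patch fast path: a non-atomic read-modify-write of the
	 * per-cpu slot, shown with one possible bad interleaving. */
	count = __this_cpu_read(*fbc->counters) + amount;	/* reads, say, 5 */
	/* ... a hardirq fires here, calls __percpu_counter_add() itself,
	 * and bumps *fbc->counters from 5 to 6 ... */
	if (count < batch && count > -batch)
		__this_cpu_write(*fbc->counters, count);	/* stores 5 + amount:
								 * the irq's +1 is lost */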

This fixes the problem that 'rmmod null_blk' hangs in blk_cleanup_queue()
because of miscounting of request_queue->mq_usage_counter.

This is a new version of the previous patch, "lib/percpu_counter.c:
disable local irq when updating percpu counter"; it takes Andrew's
approach, which may be more efficient on ARCHs (x86, s390) that have an
optimized this_cpu_add().

Signed-off-by: Ming Lei <[email protected]>
Cc: Paul Gortmaker <[email protected]>
Cc: Shaohua Li <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Fan Du <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
lib/percpu_counter.c

index 7473ee3b4ee712b0759e264b79f350457888be1e..1da85bb1bc07ad3289b698776ed174b0f901e17c 100644 (file)
@@ -82,10 +82,10 @@ void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
                unsigned long flags;
                raw_spin_lock_irqsave(&fbc->lock, flags);
                fbc->count += count;
+                __this_cpu_sub(*fbc->counters, count - amount);
                raw_spin_unlock_irqrestore(&fbc->lock, flags);
-               __this_cpu_write(*fbc->counters, 0);
        } else {
-               __this_cpu_write(*fbc->counters, count);
+               this_cpu_add(*fbc->counters, amount);
        }
        preempt_enable();
 }
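
For reference, here is how the function reads after this patch,
reconstructed from the hunk above plus the surrounding lines of
lib/percpu_counter.c at this commit (a sketch, not the verbatim file):

	void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
	{
		s64 count;

		preempt_disable();
		count = __this_cpu_read(*fbc->counters) + amount;
		if (count >= batch || count <= -batch) {
			unsigned long flags;
			raw_spin_lock_irqsave(&fbc->lock, flags);
			fbc->count += count;
			/*
			 * count - amount is the per-cpu value read above;
			 * subtracting it (rather than writing 0) preserves
			 * any updates an irq made after that read.
			 */
			__this_cpu_sub(*fbc->counters, count - amount);
			raw_spin_unlock_irqrestore(&fbc->lock, flags);
		} else {
			/* irq-safe on every arch; a single add on x86/s390 */
			this_cpu_add(*fbc->counters, amount);
		}
		preempt_enable();
	}

Note the fix is not just switching the fast path to this_cpu_add(): the
slow path also changes from unconditionally zeroing the per-cpu slot to
subtracting only the value it actually folded into fbc->count, so an
irq update landing between the initial read and the lock is not wiped
out.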