* [patch v2 0/4] percpu_counter: cleanup and fix
From: shaohua.li @ 2011-04-13 7:57 UTC
To: linux-kernel; +Cc: akpm, cl, tj, eric.dumazet
Clean up the percpu_counter code and fix some bugs. The main purpose is to
convert percpu_counter to use atomic64, which helps workloads that heavily
contend percpu_counter->lock. In a workload I tested, the atomic method is
50x faster (please see patch 4 for details).
patch 1&2: cleanups
patch 3: fix a percpu_counter bug on 32-bit systems
patch 4: convert percpu_counter to use atomic64
* [patch v2 1/4] percpu_counter: change return value and add comments
From: shaohua.li @ 2011-04-13 7:57 UTC
To: linux-kernel; +Cc: akpm, cl, tj, eric.dumazet, Shaohua Li
The percpu_counter_*_positive() APIs are not consistent between SMP and UP.
Add comments to explain the difference.
Also, make *read_positive() return 0 instead of 1 when count < 0.
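For illustration, the kind of check this affects (a hypothetical caller; the
struct and field names below are made up):

struct my_sb_info {                      /* made-up container */
        struct percpu_counter free_blocks;
};

static int reserve_blocks(struct my_sb_info *sbi, s64 nr_wanted)
{
        /*
         * If the approximate SMP count has gone negative, the old code
         * returned 1, so a request for a single block could still pass
         * this check; returning 0 makes it fail as expected.
         */
        if (percpu_counter_read_positive(&sbi->free_blocks) < nr_wanted)
                return -ENOSPC;
        return 0;
}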
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
include/linux/percpu_counter.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
Index: linux/include/linux/percpu_counter.h
===================================================================
--- linux.orig/include/linux/percpu_counter.h 2011-04-13 13:10:13.000000000 +0800
+++ linux/include/linux/percpu_counter.h 2011-04-13 13:21:21.000000000 +0800
@@ -75,7 +75,7 @@ static inline s64 percpu_counter_read_po
barrier(); /* Prevent reloads of fbc->count */
if (ret >= 0)
return ret;
- return 1;
+ return 0;
}
static inline int percpu_counter_initialized(struct percpu_counter *fbc)
@@ -135,6 +135,10 @@ static inline s64 percpu_counter_read(st
static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
{
+ /*
+ * percpu_counter is intended to track positive number. In UP case, the
+ * number should never be negative.
+ */
return fbc->count;
}
* [patch v2 2/4] percpu_counter: delete dead code
From: shaohua.li @ 2011-04-13 7:57 UTC
To: linux-kernel; +Cc: akpm, cl, tj, eric.dumazet, Shaohua Li
percpu_counter_sum_positive() never returns a negative value, so this check is dead code.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
fs/ext4/balloc.c | 5 -----
1 file changed, 5 deletions(-)
Index: linux/fs/ext4/balloc.c
===================================================================
--- linux.orig/fs/ext4/balloc.c 2011-04-12 16:22:59.000000000 +0800
+++ linux/fs/ext4/balloc.c 2011-04-13 13:23:01.000000000 +0800
@@ -507,11 +507,6 @@ static int ext4_has_free_blocks(struct e
EXT4_FREEBLOCKS_WATERMARK) {
free_blocks = percpu_counter_sum_positive(fbc);
dirty_blocks = percpu_counter_sum_positive(dbc);
- if (dirty_blocks < 0) {
- printk(KERN_CRIT "Dirty block accounting "
- "went wrong %lld\n",
- (long long)dirty_blocks);
- }
}
/* Check whether we have space after
* accounting for current dirty blocks & root reserved blocks.
* [patch v2 3/4] percpu_counter: fix code for 32bit systems
From: shaohua.li @ 2011-04-13 7:57 UTC
To: linux-kernel; +Cc: akpm, cl, tj, eric.dumazet, Shaohua Li
percpu_counter.count is an s64. Accessing it on a 32-bit system is racy: a
64-bit load or store is not atomic there, so without locking a reader can see
a badly wrong (torn) value.
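To illustrate the race, here is a user-space sketch of the interleaving (just
an illustration of a 64-bit value being accessed as two halves on a 32-bit
CPU, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* how a 32-bit CPU sees an s64: two separate 32-bit halves */
struct split64 { uint32_t lo, hi; };

int main(void)
{
        struct split64 count = { .lo = 0xffffffff, .hi = 0 }; /* count = 2^32 - 1 */

        /* writer starts count++ (2^32 - 1 -> 2^32): the low half is stored first */
        count.lo = 0;

        /* reader runs here, before the high half is stored */
        uint64_t seen = ((uint64_t)count.hi << 32) | count.lo;
        printf("reader sees %llu instead of %llu\n",
               (unsigned long long)seen, 1ULL << 32);

        /* writer finishes the update */
        count.hi = 1;
        return 0;
}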
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
include/linux/percpu_counter.h | 48 ++++++++++++++++++++++++++++++-----------
1 file changed, 36 insertions(+), 12 deletions(-)
Index: linux/include/linux/percpu_counter.h
===================================================================
--- linux.orig/include/linux/percpu_counter.h 2011-04-13 13:21:21.000000000 +0800
+++ linux/include/linux/percpu_counter.h 2011-04-13 13:27:22.000000000 +0800
@@ -60,7 +60,16 @@ static inline s64 percpu_counter_sum(str
static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
+#if BITS_PER_LONG == 32
+ s64 count;
+ unsigned long flags;
+ spin_lock_irqsave(&fbc->lock, flags);
+ count = fbc->count;
+ spin_unlock_irqrestore(&fbc->lock, flags);
+ return count;
+#else
return fbc->count;
+#endif
}
/*
@@ -70,7 +79,7 @@ static inline s64 percpu_counter_read(st
*/
static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
{
- s64 ret = fbc->count;
+ s64 ret = percpu_counter_read(fbc);
barrier(); /* Prevent reloads of fbc->count */
if (ret >= 0)
@@ -89,9 +98,20 @@ struct percpu_counter {
s64 count;
};
-static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
+static inline void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
{
+#if BITS_PER_LONG == 32
+ preempt_disable();
fbc->count = amount;
+ preempt_enable();
+#else
+ fbc->count = amount;
+#endif
+}
+
+static inline int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
+{
+ percpu_counter_set(fbc, amount);
return 0;
}
@@ -99,16 +119,25 @@ static inline void percpu_counter_destro
{
}
-static inline void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
+static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
- fbc->count = amount;
+#if BITS_PER_LONG == 32
+ s64 count;
+ preempt_disable();
+ count = fbc->count;
+ preempt_enable();
+ return count;
+#else
+ return fbc->count;
+#endif
}
static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
{
- if (fbc->count > rhs)
+ s64 count = percpu_counter_read(fbc);
+ if (count > rhs)
return 1;
- else if (fbc->count < rhs)
+ else if (count < rhs)
return -1;
else
return 0;
@@ -128,18 +157,13 @@ __percpu_counter_add(struct percpu_count
percpu_counter_add(fbc, amount);
}
-static inline s64 percpu_counter_read(struct percpu_counter *fbc)
-{
- return fbc->count;
-}
-
static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
{
/*
* percpu_counter is intended to track positive number. In UP case, the
* number should never be negative.
*/
- return fbc->count;
+ return percpu_counter_read(fbc);
}
static inline s64 percpu_counter_sum_positive(struct percpu_counter *fbc)
* [patch v2 4/4] percpu_counter: use atomic64 for counter
From: shaohua.li @ 2011-04-13 7:57 UTC
To: linux-kernel; +Cc: akpm, cl, tj, eric.dumazet, Shaohua Li
Use atomic64 for percpu_counter->count, because it is cheaper than a spinlock.
This doesn't slow down the fast path (percpu_counter_read): atomic64_read is
equivalent to a plain read of fbc->count on 64-bit systems, and to
spin_lock/read/spin_unlock on 32-bit systems.
This can improve workloads where percpu_counter->lock is heavily contended.
For example, vm_committed_as sometimes causes such contention. We could tune
the batch count instead, but if we can make percpu_counter better, why not?
On a 24-CPU system with 24 processes running a stress mmap()/munmap() test,
the atomic method is about 50x faster.
After this change, percpu_counter_set() and __percpu_counter_sum() run without
lock protection. This means we might get an imprecise count, but we have the
same issue even with the lock, because __percpu_counter_add() does not take
the lock when updating the per-CPU local count.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
include/linux/percpu_counter.h | 25 +++----------------------
lib/percpu_counter.c | 40 ++++++++++++++++++++--------------------
2 files changed, 23 insertions(+), 42 deletions(-)
Index: linux/include/linux/percpu_counter.h
===================================================================
--- linux.orig/include/linux/percpu_counter.h 2011-04-13 13:27:22.000000000 +0800
+++ linux/include/linux/percpu_counter.h 2011-04-13 13:47:15.000000000 +0800
@@ -16,8 +16,7 @@
#ifdef CONFIG_SMP
struct percpu_counter {
- spinlock_t lock;
- s64 count;
+ atomic64_t count;
#ifdef CONFIG_HOTPLUG_CPU
struct list_head list; /* All percpu_counters are on a list */
#endif
@@ -26,16 +25,7 @@ struct percpu_counter {
extern int percpu_counter_batch;
-int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
- struct lock_class_key *key);
-
-#define percpu_counter_init(fbc, value) \
- ({ \
- static struct lock_class_key __key; \
- \
- __percpu_counter_init(fbc, value, &__key); \
- })
-
+int percpu_counter_init(struct percpu_counter *fbc, s64 amount);
void percpu_counter_destroy(struct percpu_counter *fbc);
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch);
@@ -60,16 +50,7 @@ static inline s64 percpu_counter_sum(str
static inline s64 percpu_counter_read(struct percpu_counter *fbc)
{
-#if BITS_PER_LONG == 32
- s64 count;
- unsigned long flags;
- spin_lock_irqsave(&fbc->lock, flags);
- count = fbc->count;
- spin_unlock_irqrestore(&fbc->lock, flags);
- return count;
-#else
- return fbc->count;
-#endif
+ return atomic64_read(&fbc->count);
}
/*
Index: linux/lib/percpu_counter.c
===================================================================
--- linux.orig/lib/percpu_counter.c 2011-04-12 16:22:59.000000000 +0800
+++ linux/lib/percpu_counter.c 2011-04-13 13:38:02.000000000 +0800
@@ -59,13 +59,17 @@ void percpu_counter_set(struct percpu_co
{
int cpu;
- spin_lock(&fbc->lock);
+ /*
+ * Don't really need to disable preempt here, just make sure this is no
+ * big latency because of preemption
+ */
+ preempt_disable();
for_each_possible_cpu(cpu) {
s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
*pcount = 0;
}
- fbc->count = amount;
- spin_unlock(&fbc->lock);
+ atomic64_set(&fbc->count, amount);
+ preempt_enable();
}
EXPORT_SYMBOL(percpu_counter_set);
@@ -76,10 +80,8 @@ void __percpu_counter_add(struct percpu_
preempt_disable();
count = __this_cpu_read(*fbc->counters) + amount;
if (count >= batch || count <= -batch) {
- spin_lock(&fbc->lock);
- fbc->count += count;
+ atomic64_add(count, &fbc->count);
__this_cpu_write(*fbc->counters, 0);
- spin_unlock(&fbc->lock);
} else {
__this_cpu_write(*fbc->counters, count);
}
@@ -93,26 +95,27 @@ EXPORT_SYMBOL(__percpu_counter_add);
*/
s64 __percpu_counter_sum(struct percpu_counter *fbc)
{
- s64 ret;
+ s64 ret = 0;
int cpu;
- spin_lock(&fbc->lock);
- ret = fbc->count;
+ /*
+ * Don't really need to disable preempt here, just make sure this is no
+ * big latency because of preemption
+ */
+ preempt_disable();
for_each_online_cpu(cpu) {
s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
ret += *pcount;
}
- spin_unlock(&fbc->lock);
+ ret += atomic64_read(&fbc->count);
+ preempt_enable();
return ret;
}
EXPORT_SYMBOL(__percpu_counter_sum);
-int __percpu_counter_init(struct percpu_counter *fbc, s64 amount,
- struct lock_class_key *key)
+int percpu_counter_init(struct percpu_counter *fbc, s64 amount)
{
- spin_lock_init(&fbc->lock);
- lockdep_set_class(&fbc->lock, key);
- fbc->count = amount;
+ atomic64_set(&fbc->count, amount);
fbc->counters = alloc_percpu(s32);
if (!fbc->counters)
return -ENOMEM;
@@ -127,7 +130,7 @@ int __percpu_counter_init(struct percpu_
#endif
return 0;
}
-EXPORT_SYMBOL(__percpu_counter_init);
+EXPORT_SYMBOL(percpu_counter_init);
void percpu_counter_destroy(struct percpu_counter *fbc)
{
@@ -171,13 +174,10 @@ static int __cpuinit percpu_counter_hotc
mutex_lock(&percpu_counters_lock);
list_for_each_entry(fbc, &percpu_counters, list) {
s32 *pcount;
- unsigned long flags;
- spin_lock_irqsave(&fbc->lock, flags);
pcount = per_cpu_ptr(fbc->counters, cpu);
- fbc->count += *pcount;
+ atomic64_add(*pcount, &fbc->count);
*pcount = 0;
- spin_unlock_irqrestore(&fbc->lock, flags);
}
mutex_unlock(&percpu_counters_lock);
#endif
* Re: [patch v2 0/4] percpu_counter: cleanup and fix
From: Christoph Lameter @ 2011-04-13 14:08 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, tj, eric.dumazet
On Wed, 13 Apr 2011, shaohua.li@intel.com wrote:
> Clean up the percpu_counter code and fix some bugs. The main purpose is to
> convert percpu_counter to use atomic64, which helps workloads that heavily
> contend percpu_counter->lock. In a workload I tested, the atomic method is
> 50x faster (please see patch 4 for details).
Could you post your test and the results please?
* Re: [patch v2 2/4] percpu_counter: delete dead code
From: Tejun Heo @ 2011-04-13 18:59 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, cl, eric.dumazet
On Wed, Apr 13, 2011 at 03:57:17PM +0800, shaohua.li@intel.com wrote:
> percpu_counter_sum_positive() never returns a negative value, so this check is dead code.
>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Can you please send this to ext4? - linux-ext4@vger.kernel.org and
"Theodore Ts'o" <tytso@mit.edu>.
Thanks.
--
tejun
* Re: [patch v2 3/4] percpu_counter: fix code for 32bit systems
From: Tejun Heo @ 2011-04-13 19:04 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, cl, eric.dumazet
On Wed, Apr 13, 2011 at 03:57:18PM +0800, shaohua.li@intel.com wrote:
> static inline s64 percpu_counter_read(struct percpu_counter *fbc)
> {
> +#if BITS_PER_LONG == 32
> + s64 count;
> + unsigned long flags;
> + spin_lock_irqsave(&fbc->lock, flags);
> + count = fbc->count;
> + spin_unlock_irqrestore(&fbc->lock, flags);
> + return count;
> +#else
> return fbc->count;
> +#endif
I don't think this is safe. The possible deadlock scenario is
percpu_counter_read() being called from irq context, and adding irq locking
to percpu_counter_read() doesn't change that in any way; you would need to
change the locking in the other places as well. Given that the next patch
makes all this dancing with locks pointless anyway, my suggestion is to drop
this patch, proceed with the atomic64_t conversion directly, and note there
that the conversion also removes the possible 64-bit read deviation on 32-bit
archs.
Thanks.
--
tejun
* Re: [patch v2 1/4] percpu_counter: change return value and add comments
From: Tejun Heo @ 2011-04-13 19:05 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, cl, eric.dumazet
On Wed, Apr 13, 2011 at 03:57:16PM +0800, shaohua.li@intel.com wrote:
> The percpu_counter_*_positive() APIs are not consistent between SMP and UP.
> Add comments to explain the difference.
> Also, make *read_positive() return 0 instead of 1 when count < 0.
>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Patch looks technically okay to me but may I suggest...
* Revise patch description. It doesn't really match the patch
content.
* I would much prefer having docbook comments on top of the
*read_positive() functions, something along the lines of the sketch below.
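Roughly (only a sketch of the shape, not the wording I'd insist on):

/**
 * percpu_counter_read_positive - read the counter, clamped at zero
 * @fbc: counter to read
 *
 * Returns the current approximate value of @fbc, or 0 if it is negative.
 * On UP the value is exact; on SMP it may deviate from the true sum by
 * up to roughly batch * num_online_cpus().
 */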
Thanks.
--
tejun
* Re: [patch v2 4/4] percpu_counter: use atomic64 for counter
From: Tejun Heo @ 2011-04-13 19:07 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, cl, eric.dumazet
On Wed, Apr 13, 2011 at 03:57:19PM +0800, shaohua.li@intel.com wrote:
> This can improve workloads where percpu_counter->lock is heavily contended.
> For example, vm_committed_as sometimes causes such contention. We could tune
> the batch count instead, but if we can make percpu_counter better, why not?
> On a 24-CPU system with 24 processes running a stress mmap()/munmap() test,
> the atomic method is about 50x faster.
Christoph already raised the issue but I'd also love to know a bit
more detail on the test than "50x faster".
Thanks.
--
tejun
* Re: [patch v2 0/4] percpu_counter: cleanup and fix
From: Shaohua Li @ 2011-04-14 1:04 UTC
To: Christoph Lameter
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, tj@kernel.org, eric.dumazet@gmail.com
On Wed, 2011-04-13 at 22:08 +0800, Christoph Lameter wrote:
> On Wed, 13 Apr 2011, shaohua.li@intel.com wrote:
>
> > Clean up the percpu_counter code and fix some bugs. The main purpose is to
> > convert percpu_counter to use atomic64, which helps workloads that heavily
> > contend percpu_counter->lock. In a workload I tested, the atomic method is
> > 50x faster (please see patch 4 for details).
>
> Could you post your test and the results please?
The test is very simple: 24 processes on a 24-CPU system, each doing:
while (1) {
        mmap(128M);
        munmap(128M);
}
We then measure how many loops each process can do. I'll attach the test in
my next post.
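The core of it looks roughly like this (a minimal sketch reconstructed from
the description above, not the actual test program; the mmap flags and the
progress printing are my shorthand here):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define MAP_SIZE (128UL * 1024 * 1024)  /* 128M per iteration */

int main(void)
{
        unsigned long loops = 0;

        for (;;) {
                /* anonymous mapping: each mmap()/munmap() pair updates vm_committed_as */
                void *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                        break;
                munmap(p, MAP_SIZE);
                if (++loops % 100000 == 0)
                        printf("%lu loops\n", loops);
        }
        return 0;
}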
I just realized that when I said 50x faster, I had forgotten the effect of
another patch, http://marc.info/?l=linux-kernel&m=130127782901127&w=2. With
only the atomic change, it's about 7x faster. Sorry about that. I'll add
detailed data in my next post.
* Re: [patch v2 2/4] percpu_counter: delete dead code
From: Ted Ts'o @ 2011-04-18 0:12 UTC
To: shaohua.li; +Cc: linux-kernel, akpm, cl, tj, eric.dumazet, linux-ext4
I'll take care of merging this patch via the ext4 tree.
- Ted
On Wed, Apr 13, 2011 at 03:57:17PM +0800, shaohua.li@intel.com wrote:
> percpu_counter_sum_positive() never returns a negative value, so this check is dead code.
>
> Signed-off-by: Shaohua Li <shaohua.li@intel.com>
>
> ---
> fs/ext4/balloc.c | 5 -----
> 1 file changed, 5 deletions(-)
>
> Index: linux/fs/ext4/balloc.c
> ===================================================================
> --- linux.orig/fs/ext4/balloc.c 2011-04-12 16:22:59.000000000 +0800
> +++ linux/fs/ext4/balloc.c 2011-04-13 13:23:01.000000000 +0800
> @@ -507,11 +507,6 @@ static int ext4_has_free_blocks(struct e
> EXT4_FREEBLOCKS_WATERMARK) {
> free_blocks = percpu_counter_sum_positive(fbc);
> dirty_blocks = percpu_counter_sum_positive(dbc);
> - if (dirty_blocks < 0) {
> - printk(KERN_CRIT "Dirty block accounting "
> - "went wrong %lld\n",
> - (long long)dirty_blocks);
> - }
> }
> /* Check whether we have space after
> * accounting for current dirty blocks & root reserved blocks.
>
>