public inbox for linux-ext4@vger.kernel.org
* [RFC PATCH] percpu_counters: make fbc->count read atomic on 32 bit architecture
@ 2008-08-22 13:34 Aneesh Kumar K.V
  2008-08-22 13:34 ` [RFC PATCH] percpu_counters: Add new function percpu_counter_sum_and_sub Aneesh Kumar K.V
  2008-08-22 18:29 ` [RFC PATCH] percpu_counters: make fbc->count read atomic on 32 bit architecture Mingming Cao
  0 siblings, 2 replies; 7+ messages in thread
From: Aneesh Kumar K.V @ 2008-08-22 13:34 UTC (permalink / raw)
  To: cmm, tytso, sandeen; +Cc: linux-ext4, Aneesh Kumar K.V, Peter Zijlstra

fbc->count is of type s64. The change was introduced by
0216bfcffe424a5473daa4da47440881b36c1f4, which changed the type
from long to s64. On 32 bit architectures a 64 bit load is not
atomic, so a concurrent update can make us read a torn, wrong
value from fbc->count.

percpu_counter_read() is also called from interrupt context, so
take the irq-safe version of the spinlock while reading.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 include/linux/percpu_counter.h |   23 +++++++++++++++++++++--
 1 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 9007ccd..af485b1 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -53,10 +53,29 @@ static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
 	return __percpu_counter_sum(fbc);
 }
 
-static inline s64 percpu_counter_read(struct percpu_counter *fbc)
+#if BITS_PER_LONG == 64
+static inline s64 fbc_count(struct percpu_counter *fbc)
 {
 	return fbc->count;
 }
+#else
+/* doesn't have atomic 64 bit operation */
+static inline s64 fbc_count(struct percpu_counter *fbc)
+{
+	s64 ret;
+	unsigned long flags;
+	spin_lock_irqsave(&fbc->lock, flags);
+	ret = fbc->count;
+	spin_unlock_irqrestore(&fbc->lock, flags);
+	return ret;
+
+}
+#endif
+
+static inline s64 percpu_counter_read(struct percpu_counter *fbc)
+{
+	return fbc_count(fbc);
+}
 
 /*
  * It is possible for the percpu_counter_read() to return a small negative
@@ -65,7 +84,7 @@ static inline s64 percpu_counter_read(struct percpu_counter *fbc)
  */
 static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
 {
-	s64 ret = fbc->count;
+	s64 ret = fbc_count(fbc);
 
 	barrier();		/* Prevent reloads of fbc->count */
 	if (ret >= 0)
-- 
1.6.0.2.g2ebc0



Thread overview: 7+ messages
2008-08-22 13:34 [RFC PATCH] percpu_counters: make fbc->count read atomic on 32 bit architecture Aneesh Kumar K.V
2008-08-22 13:34 ` [RFC PATCH] percpu_counters: Add new function percpu_counter_sum_and_sub Aneesh Kumar K.V
2008-08-22 13:34   ` [RFC PATCH] ext4: Make sure all the block allocation patch reserve blocks Aneesh Kumar K.V
2008-08-22 18:22     ` Mingming Cao
2008-08-22 18:01   ` [RFC PATCH] percpu_counters: Add new function percpu_counter_sum_and_sub Mingming Cao
2008-08-22 18:29 ` [RFC PATCH] percpu_counters: make fbc->count read atomic on 32 bit architecture Mingming Cao
2008-08-22 18:33   ` Peter Zijlstra
