public inbox for linux-kernel@vger.kernel.org
* [patch] Avoid use of spinlock for percpu_counter
From: Ravikiran G Thirumalai @ 2006-01-25 23:16 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andrew Morton

The spinlock in struct percpu_counter protects just one counter.  It's
not obvious why it was done this way (I am guessing it was because earlier,
atomic_t was guaranteed only 24 bits on some arches).  Since we have
atomic_long_t now, I don't see why this cannot be replaced with an
atomic_long_t.

Comments?

Index: linux-2.6.16-rc1/include/linux/percpu_counter.h
===================================================================
--- linux-2.6.16-rc1.orig/include/linux/percpu_counter.h	2006-01-24 13:52:24.000000000 -0800
+++ linux-2.6.16-rc1/include/linux/percpu_counter.h	2006-01-24 14:10:26.000000000 -0800
@@ -15,8 +15,7 @@
 #ifdef CONFIG_SMP
 
 struct percpu_counter {
-	spinlock_t lock;
-	long count;
+	atomic_long_t count;
 	long *counters;
 };
 
@@ -28,8 +27,7 @@ struct percpu_counter {
 
 static inline void percpu_counter_init(struct percpu_counter *fbc)
 {
-	spin_lock_init(&fbc->lock);
-	fbc->count = 0;
+	atomic_long_set(&fbc->count, 0);
 	fbc->counters = alloc_percpu(long);
 }
 
@@ -42,7 +40,7 @@ void percpu_counter_mod(struct percpu_co
 
 static inline long percpu_counter_read(struct percpu_counter *fbc)
 {
-	return fbc->count;
+	return atomic_long_read(&fbc->count);
 }
 
 /*
@@ -51,7 +49,7 @@ static inline long percpu_counter_read(s
  */
 static inline long percpu_counter_read_positive(struct percpu_counter *fbc)
 {
-	long ret = fbc->count;
+	long ret = atomic_long_read(&fbc->count);
 
 	barrier();		/* Prevent reloads of fbc->count */
 	if (ret > 0)
Index: linux-2.6.16-rc1/mm/swap.c
===================================================================
--- linux-2.6.16-rc1.orig/mm/swap.c	2006-01-17 14:12:17.000000000 -0800
+++ linux-2.6.16-rc1/mm/swap.c	2006-01-24 14:23:20.000000000 -0800
@@ -449,9 +449,7 @@ void percpu_counter_mod(struct percpu_co
 	pcount = per_cpu_ptr(fbc->counters, cpu);
 	count = *pcount + amount;
 	if (count >= FBC_BATCH || count <= -FBC_BATCH) {
-		spin_lock(&fbc->lock);
-		fbc->count += count;
-		spin_unlock(&fbc->lock);
+		atomic_long_add(count, &fbc->count);
 		count = 0;
 	}
 	*pcount = count;


* Re: [patch] Avoid use of spinlock for percpu_counter
From: Eric Dumazet @ 2006-01-26 14:17 UTC (permalink / raw)
  To: Ravikiran G Thirumalai; +Cc: linux-kernel, Andrew Morton

Ravikiran G Thirumalai wrote:
> The spinlock in struct percpu_counter protects just one counter.  It's
> not obvious why it was done this way (I am guessing it was because earlier,
> atomic_t was guaranteed only 24 bits on some arches).  Since we have
> atomic_long_t now, I don't see why this cannot be replaced with an
> atomic_long_t.
> 
> Comments?

Yes this makes sense.

Furthermore, we could fix the handling of 'struct percpu_counter' (in the
SMP case) when the alloc_percpu(long) call in percpu_counter_init() fails.
That failure is currently ignored and can lead to a crash.

Something like this (a hybrid patch, just to get the idea):

--- a/mm/swap.c 2006-01-26 15:58:42.000000000 +0100
+++ b/mm/swap.c 2006-01-26 16:00:54.000000000 +0100
@@ -472,9 +472,12 @@
  {
         long count;
         long *pcount;
-       int cpu = get_cpu();

-       pcount = per_cpu_ptr(fbc->counters, cpu);
+       if (unlikely(fbc->counters == NULL)) {
+               atomic_long_add(amount, &fbc->count);
+               return;
+       }
+       pcount = per_cpu_ptr(fbc->counters, get_cpu());
         count = *pcount + amount;
         if (count >= FBC_BATCH || count <= -FBC_BATCH) {
                 atomic_long_add(count, &fbc->count);
--- a/include/linux/percpu_counter.h    2006-01-26 16:02:31.000000000 +0100
+++ b/include/linux/percpu_counter.h    2006-01-26 16:02:53.000000000 +0100
@@ -35,7 +35,8 @@

  static inline void percpu_counter_destroy(struct percpu_counter *fbc)
  {
-       free_percpu(fbc->counters);
+       if (fbc->counters)
+               free_percpu(fbc->counters);
  }

  void percpu_counter_mod(struct percpu_counter *fbc, long amount);


Eric


* Re: [patch] Avoid use of spinlock for percpu_counter
From: Ravikiran G Thirumalai @ 2006-01-26 18:04 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, Andrew Morton

On Thu, Jan 26, 2006 at 03:17:35PM +0100, Eric Dumazet wrote:
> Ravikiran G Thirumalai wrote:
> >The spinlock in struct percpu_counter protects just one counter.  It's
> >not obvious why it was done this way (I am guessing it was because earlier,
> >atomic_t was guaranteed only 24 bits on some arches).  Since we have
> >atomic_long_t now, I don't see why this cannot be replaced with an
> >atomic_long_t.
> >
> >Comments?
> 
> Yes this makes sense.
> 
> Furthermore, we could fix the handling of 'struct percpu_counter' (in the
> SMP case) when the alloc_percpu(long) call in percpu_counter_init() fails.
> That failure is currently ignored and can lead to a crash.
> 
> Something like this (a hybrid patch, just to get the idea):
> 
> --- a/mm/swap.c 2006-01-26 15:58:42.000000000 +0100
> +++ b/mm/swap.c 2006-01-26 16:00:54.000000000 +0100
> @@ -472,9 +472,12 @@
>  {
>         long count;
>         long *pcount;
> -       int cpu = get_cpu();
> 
> -       pcount = per_cpu_ptr(fbc->counters, cpu);
> +       if (unlikely(fbc->counters == NULL)) {
> +               atomic_long_add(amount, &fbc->count);
> +               return;

I don't know if adding another branch to the fast path is a good idea;
would it not be better to handle this by returning an error from
percpu_counter_init()?  If we are in agreement, I can make a patch for
that.

Thanks,
Kiran

