From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Eric Dumazet <dada1@cosmosbay.com>
Cc: Theodore Tso <tytso@mit.edu>,
Andrew Morton <akpm@linux-foundation.org>,
linux kernel <linux-kernel@vger.kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Mingming Cao <cmm@us.ibm.com>,
linux-ext4@vger.kernel.org
Subject: Re: [PATCH] percpu_counter: Fix __percpu_counter_sum()
Date: Tue, 09 Dec 2008 09:34:13 +0100 [thread overview]
Message-ID: <1228811653.6809.26.camel@twins> (raw)
In-Reply-To: <493E2884.6010600@cosmosbay.com>
On Tue, 2008-12-09 at 09:12 +0100, Eric Dumazet wrote:
> Peter Zijlstra wrote:
> > On Mon, 2008-12-08 at 18:00 -0500, Theodore Tso wrote:
> >> On Mon, Dec 08, 2008 at 11:20:35PM +0100, Peter Zijlstra wrote:
> >>> atomic_t is pretty good on all archs, but you get to keep the cacheline
> >>> ping-pong.
> >>>
> >> Stupid question --- if you're worried about cacheline ping-pongs, why
> >> isn't each cpu's delta counter cacheline-aligned? With a 64-byte
> >> cache line and a 32-bit counters entry, with fewer than 16 CPUs we're
> >> going to get cache ping-pong effects with percpu_counters,
> >> right? Or am I missing something?
> >
> > sorta - a new per-cpu allocator is in the works, but we do cacheline-align
> > the per-cpu allocations (or used to); also, the allocations are
> > node-affine.
> >
>
> I did work on a 'light weight percpu counter', aka percpu_lcounter, for
> all metrics that don't need a full 64 bits but just a plain 'long'
> (network, nr_files, nr_dentry, nr_inodes, ...).
>
> struct percpu_lcounter {
> 	atomic_long_t count;
> #ifdef CONFIG_SMP
> #ifdef CONFIG_HOTPLUG_CPU
> 	struct list_head list;	/* All percpu_counters are on a list */
> #endif
> 	long *counters;
> #endif
> };
>
> (No more spinlock)
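A nice side effect, presumably, is that a reader no longer needs any lock at
all; a read helper (hypothetical, not part of Eric's code) could be as simple
as:

	/* Lockless read of the (approximate) total: the shared count is
	 * a single atomic_long_t, so a plain atomic read suffices. */
	static inline long percpu_lcounter_read(struct percpu_lcounter *flbc)
	{
		return atomic_long_read(&flbc->count);
	}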
>
> Then I tried using atomic_t (or atomic_long_t) for 'counters', but got a
> 10% slowdown of __percpu_lcounter_add() even when never hitting the 'slow
> path': atomic_long_add_return() is really expensive, even on a non-contended
> cache line.
>
> struct percpu_lcounter {
> 	atomic_long_t count;
> #ifdef CONFIG_SMP
> #ifdef CONFIG_HOTPLUG_CPU
> 	struct list_head list;	/* All percpu_counters are on a list */
> #endif
> 	atomic_long_t *counters;
> #endif
> };
>
> So I believe a percpu_lcounter_sum() that tries to reset all cpu-local
> counts to 0 would really be too expensive, if it slows down _add() so much.
>
> long percpu_lcounter_sum(struct percpu_lcounter *fblc)
> {
> 	long acc = 0;
> 	int cpu;
>
> 	for_each_online_cpu(cpu)
> 		acc += atomic_long_xchg(per_cpu_ptr(fblc->counters, cpu), 0);
> 	return atomic_long_add_return(acc, &fblc->count);
> }
>
> void __percpu_lcounter_add(struct percpu_lcounter *flbc, long amount, s32 batch)
> {
> 	long count;
> 	atomic_long_t *pcount;
>
> 	pcount = per_cpu_ptr(flbc->counters, get_cpu());
> 	count = atomic_long_add_return(amount, pcount); /* way too expensive !!! */
Yeah, it's an extra LOCKed instruction where there wasn't one before.
> 	if (unlikely(count >= batch || count <= -batch)) {
> 		atomic_long_add(count, &flbc->count);
> 		atomic_long_sub(count, pcount);
Also, these are two LOCKed operations where, with the spinlock, you'd likely
only have one.
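For comparison, the existing __percpu_counter_add() slow path folds the
cpu-local delta into the shared count under that one spinlock; roughly
(paraphrasing lib/percpu_counter.c circa 2.6.28):

	void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
	{
		s64 count;
		s32 *pcount;
		int cpu = get_cpu();

		pcount = per_cpu_ptr(fbc->counters, cpu);
		count = *pcount + amount;
		if (count >= batch || count <= -batch) {
			/* a single LOCKed op (the spin_lock acquire) covers
			 * both the shared and the per-cpu update */
			spin_lock(&fbc->lock);
			fbc->count += count;
			*pcount = 0;
			spin_unlock(&fbc->lock);
		} else {
			*pcount = count;
		}
		put_cpu();
	}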
So yes, making the per-cpu variable an atomic seems like a way too expensive
idea. That xchg-based _sum is cool, though.
> 	}
> 	put_cpu();
> }
>
> Just forget about it: let percpu_lcounter_sum() only read the values, and
> let percpu_lcounter_add() avoid atomic ops in the fast path.
>
> void __percpu_lcounter_add(struct percpu_lcounter *flbc, long amount, s32 batch)
> {
> 	long count;
> 	long *pcount;
>
> 	pcount = per_cpu_ptr(flbc->counters, get_cpu());
> 	count = *pcount + amount;
> 	if (unlikely(count >= batch || count <= -batch)) {
> 		atomic_long_add(count, &flbc->count);
> 		count = 0;
> 	}
> 	*pcount = count;
> 	put_cpu();
> }
> EXPORT_SYMBOL(__percpu_lcounter_add);
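The read-only sum Eric alludes to is not shown in his mail; presumably it
would just accumulate the deltas without the xchg, tolerating a little
staleness:

	/* Hypothetical read-only variant: no resets, no LOCKed ops, but the
	 * result can be slightly stale since cpu-local deltas stay in place. */
	long percpu_lcounter_sum(struct percpu_lcounter *fblc)
	{
		long acc = atomic_long_read(&fblc->count);
		int cpu;

		for_each_online_cpu(cpu)
			acc += *per_cpu_ptr(fblc->counters, cpu);
		return acc;
	}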
>
>
> Also, with the upcoming NR_CPUS=4096, it may be time to design a hierarchical
> percpu_counter, to avoid hitting the one shared "fbc->count" every time a
> local counter overflows.
So we'd normally write to the shared cacheline every cpus/batch.
Cascading this you'd get ln(cpus)/(batch^ln(cpus)) or something like
that, right? Won't just increasing batch give the same result - or are
we going to play funny games with the topology information?
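For concreteness, a purely hypothetical two-level variant (the names
percpu_hcounter and __percpu_hcounter_add are invented here): each cpu spills
its delta into a per-node atomic, and only a node-level overflow touches the
global count, so the globally shared cacheline is written roughly every
batch^2 local increments instead of every batch:

	struct percpu_hcounter {
		atomic_long_t count;			/* global: rarely written */
		atomic_long_t node_count[MAX_NUMNODES];	/* intermediate level */
		long *counters;				/* per-cpu fast path */
	};

	void __percpu_hcounter_add(struct percpu_hcounter *hc, long amount, s32 batch)
	{
		long count, ncount;
		atomic_long_t *nc;
		long *pcount;

		pcount = per_cpu_ptr(hc->counters, get_cpu());
		count = *pcount + amount;
		if (unlikely(count >= batch || count <= -batch)) {
			nc = &hc->node_count[numa_node_id()];
			ncount = atomic_long_add_return(count, nc);
			if (ncount >= batch || ncount <= -batch) {
				/* transfer the node total to the global count;
				 * concurrent node updates are left in place */
				atomic_long_sub(ncount, nc);
				atomic_long_add(ncount, &hc->count);
			}
			count = 0;
		}
		*pcount = count;
		put_cpu();
	}

Whether the extra LOCKed op on the node-local line ever beats simply raising
the batch is exactly the open question above.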