From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>,
linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>,
Arjan van de Ven <arjan@infradead.org>,
Wu Fengguang <fengguang.wu@intel.com>
Subject: Re: [PATCH 2/9] perf_counter: fix update_userpage()
Date: Thu, 02 Apr 2009 12:36:27 +0200
Message-ID: <1238668587.8530.5793.camel@twins>
In-Reply-To: <18900.35934.799877.893556@cargo.ozlabs.ibm.com>
On Thu, 2009-04-02 at 20:58 +1100, Paul Mackerras wrote:
> Peter Zijlstra writes:
>
> > > Good point. This should work, though:
> > >
> > > do {
> > >         seq = pc->lock;
> > >         barrier();
> > >         value = read_pmc(pc->index) + pc->offset;
> > >         barrier();
> > > } while (pc->lock != seq);
> > > return value;
> >
> > I don't think you need the first barrier(); all you need to avoid is
> > the compiler reusing the first pc->lock read, so one should suffice.
>
> I need it to make sure that the compiler doesn't put the load of
> pc->index or pc->offset before the first load of pc->lock. The second
> barrier is needed to make sure the compiler puts the second load of
> pc->lock after the loads of pc->index and pc->offset. So I think I do
> need two barrier()s (but only compiler barriers, not cpu memory
> barriers).
Ah, you're right indeed.
> > Also, you need to handle the !pc->index case.
>
> Hmmm, yeah. I claim that read_pmc(0) always returns 0. :)
Hehe :-)
Ok, updated that patch.
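For reference, the full read sequence under discussion might look like the
sketch below. The struct layout and the read_pmc() helper are illustrative
stand-ins (the real mmap'ed page and PMC-read primitive live in the patch
itself); the convention that index 0 means "no hardware counter, return 0"
follows Paul's remark above:

```c
#include <stdint.h>

/* Compiler-only barrier: stops the compiler from reordering or caching
 * loads across it.  No CPU memory barrier is implied or needed here. */
#define barrier() __asm__ __volatile__("" ::: "memory")

/* Simplified, hypothetical layout of the mmap()ed counter page. */
struct perf_counter_page {
    volatile uint32_t lock;   /* bumped by the kernel around updates   */
    volatile uint32_t index;  /* hardware counter index; 0 means none  */
    volatile int64_t  offset; /* value accumulated while descheduled   */
};

/* Stub for illustration: a real implementation would execute RDPMC (or
 * the architecture's equivalent).  read_pmc(0) returns 0 by convention. */
static uint64_t read_pmc(uint32_t idx)
{
    return idx ? 1000u * (uint64_t)idx : 0;
}

uint64_t read_counter(struct perf_counter_page *pc)
{
    uint32_t seq;
    uint64_t value;

    do {
        seq = pc->lock;
        barrier();  /* keep the index/offset loads after the lock load */
        value = read_pmc(pc->index) + (uint64_t)pc->offset;
        barrier();  /* keep the lock re-read after the value loads     */
    } while (pc->lock != seq);

    return value;
}
```

If the kernel bumps pc->lock while updating index/offset, a racing reader
simply observes the mismatch on the re-read and retries; both barriers are
compiler barriers only, since the reader and updater run on the same CPU's
coherent view of the page.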
Thread overview: 24+ messages
2009-03-28 19:43 [PATCH 0/9] perf_counter patches Peter Zijlstra
2009-03-28 19:44 ` [PATCH 1/9] perf_counter: unify and fix delayed counter wakeup Peter Zijlstra
2009-03-29 0:14 ` Paul Mackerras
2009-03-29 9:16 ` Peter Zijlstra
2009-03-29 9:25 ` Peter Zijlstra
2009-03-29 10:02 ` Paul Mackerras
2009-03-28 19:44 ` [PATCH 2/9] perf_counter: fix update_userpage() Peter Zijlstra
2009-03-29 0:24 ` Paul Mackerras
2009-04-02 8:50 ` Peter Zijlstra
2009-04-02 9:00 ` Peter Zijlstra
2009-04-02 9:21 ` Paul Mackerras
2009-04-02 9:28 ` Peter Zijlstra
2009-04-02 9:15 ` Paul Mackerras
2009-04-02 9:36 ` Peter Zijlstra
2009-04-02 9:58 ` Paul Mackerras
2009-04-02 10:36 ` Peter Zijlstra [this message]
2009-03-28 19:44 ` [PATCH 3/9] perf_counter: kerneltop: simplify data_head read Peter Zijlstra
2009-03-28 19:44 ` [PATCH 4/9] perf_counter: executable mmap() information Peter Zijlstra
2009-03-28 19:44 ` [PATCH 5/9] perf_counter: kerneltop: parse the mmap data stream Peter Zijlstra
2009-03-28 19:44 ` [PATCH 6/9] perf_counter: powerpc: only reserve PMU hardware when we need it Peter Zijlstra
2009-03-28 19:44 ` [PATCH 7/9] perf_counter: make it possible for hw_perf_counter_init to return error codes Peter Zijlstra
2009-03-30 4:13 ` Paul Mackerras
2009-03-28 19:44 ` [PATCH 8/9] perf_counter tools: optionally scale counter values in perfstat mode Peter Zijlstra
2009-03-28 19:44 ` [PATCH 9/9] RFC perf_counter: event overlow handling Peter Zijlstra