public inbox for linux-kernel@vger.kernel.org
From: Tom Zanussi <tom.zanussi@linux.intel.com>
To: Dave Jiang <dave.jiang@intel.com>,
	Uros Bizjak <ubizjak@gmail.com>,
	dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Fenghua Yu <fenghua.yu@intel.com>, Vinod Koul <vkoul@kernel.org>,
	"Zanussi, Tom" <tom.zanussi@intel.com>
Subject: Re: [PATCH] dmaengine:idxd: Use local64_try_cmpxchg in perfmon_pmu_event_update
Date: Mon, 10 Jul 2023 16:28:10 -0500	[thread overview]
Message-ID: <60b70751a1a1ad786f534ca0b7fd3cc423736f0a.camel@linux.intel.com> (raw)
In-Reply-To: <e816aa75-4588-dae4-2d01-6f5ba9d4a4f3@intel.com>

On Wed, 2023-07-05 at 07:53 -0700, Dave Jiang wrote:
> 
> 
> On 7/3/23 07:52, Uros Bizjak wrote:
> > Use local64_try_cmpxchg instead of local64_cmpxchg (*ptr, old, new) == old
> > in perfmon_pmu_event_update.  x86 CMPXCHG instruction returns success in
> > ZF flag, so this change saves a compare after cmpxchg (and related move
> > instruction in front of cmpxchg).
> > 
> > Also, try_cmpxchg implicitly assigns old *ptr value to "old" when cmpxchg
> > fails. There is no need to re-read the value in the loop.
> > 
> > No functional change intended.
> > 
> > Cc: Fenghua Yu <fenghua.yu@intel.com>
> > Cc: Dave Jiang <dave.jiang@intel.com>
> > Cc: Vinod Koul <vkoul@kernel.org>
> > Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
> 
> 
> Cc: Tom Zanussi
> 
> Tom, do you mind reviewing this patch? Thanks!

Looks fine to me.

Thanks,

Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>

> 
> > ---
> >   drivers/dma/idxd/perfmon.c | 7 +++----
> >   1 file changed, 3 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/dma/idxd/perfmon.c b/drivers/dma/idxd/perfmon.c
> > index d73004f47cf4..fdda6d604262 100644
> > --- a/drivers/dma/idxd/perfmon.c
> > +++ b/drivers/dma/idxd/perfmon.c
> > @@ -245,12 +245,11 @@ static void perfmon_pmu_event_update(struct perf_event *event)
> >         int shift = 64 - idxd->idxd_pmu->counter_width;
> >         struct hw_perf_event *hwc = &event->hw;
> >   
> > +       prev_raw_count = local64_read(&hwc->prev_count);
> >         do {
> > -               prev_raw_count = local64_read(&hwc->prev_count);
> >                 new_raw_count = perfmon_pmu_read_counter(event);
> > -       } while (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> > -                       new_raw_count) != prev_raw_count);
> > -
> > +       } while (!local64_try_cmpxchg(&hwc->prev_count,
> > +                                     &prev_raw_count, new_raw_count));
> >         n = (new_raw_count << shift);
> >         p = (prev_raw_count << shift);
> >   


Thread overview: 4+ messages
2023-07-03 14:52 [PATCH] dmaengine:idxd: Use local64_try_cmpxchg in perfmon_pmu_event_update Uros Bizjak
2023-07-05 14:53 ` Dave Jiang
2023-07-10 21:28   ` Tom Zanussi [this message]
2023-08-01 18:45 ` Vinod Koul
