From: "Rafael J. Wysocki" <rjw@novell.com>
To: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Robin Holt <holt@sgi.com>,
Andi Kleen <andi@firstfloor.org>, Ingo Molnar <mingo@redhat.com>,
"H. Peter Anvin" <hpa@zytor.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@gmail.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
"x86@kernel.org" <x86@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Takashi Iwai <tiwai@suse.de>
Subject: Re: [patch 0/2] x86,pat: Reduce contention on the memtype_lock -V4
Date: Wed, 24 Mar 2010 21:32:11 +0100
Message-ID: <201003242132.11642.rjw@novell.com>
In-Reply-To: <1269447737.2881.13.camel@sbs-t61.sc.intel.com>
On Wednesday 24 March 2010, Suresh Siddha wrote:
> On Wed, 2010-03-24 at 04:15 -0700, Thomas Gleixner wrote:
> > On Wed, 24 Mar 2010, Robin Holt wrote:
> > > On Wed, Mar 24, 2010 at 03:16:14AM +0100, Andi Kleen wrote:
> > > > holt@sgi.com writes:
> > > >
> > > > > Memtype tracking on x86 uses a single global spin_lock for both reading
> > > > > and changing the memory type. This includes updates to the page flags,
> > > > > which are inherently parallel operations.
> > > > >
> > > > > Part one of the patchset makes the page-based tracking use cmpxchg,
> > > > > removing the need for a lock (a minimal sketch of this pattern appears
> > > > > after the message below).
> > > > >
> > > > > Part two of the patchset converts the spin_lock into a read/write lock.
> > > >
> > > > I'm curious: in what workloads did you see contention?
> > > >
> > > > For any scalability patch it would always be good to have a description
> > > > of the workload.
> > >
> > > It was a job using xpmem (an out-of-tree kernel module), which uses
> > > vm_insert_pfn to establish PTEs. The scalability issue was shown
> > > in the first patch. I do not have any test that shows a performance
> > > difference from the spin_lock to rw_lock conversion.
> >
> > And what exactly is the point of converting it to an rw_lock then?
>
> Thomas, as I mentioned earlier, I am OK with not doing this conversion. If
> we see any performance issues with this spinlock, we can use RCU-based
> logic to address them.
>
> For now, the first patch in this series (which avoids the lock for RAM
> pages) is good to go. Thanks, Rafael, for spotting the page flags bit
> manipulation issue.
In fact Takashi did that, to put the record straight. :-)
Thanks,
Rafael
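
For readers following the cmpxchg discussion above, here is a minimal,
stand-alone sketch of the lock-free retry-loop pattern that patch 1 describes
for updating the memtype bits in page->flags. It is not the patch itself: the
mask, the bit values, and the function and variable names are illustrative
assumptions, and C11 atomics stand in for the kernel's cmpxchg().

/*
 * Illustrative only: a user-space approximation of the lock-free
 * read-modify-write loop.  MEMTYPE_MASK, the bit values and the names
 * are made up for the example; C11 atomics stand in for the kernel's
 * cmpxchg() on page->flags.
 */
#include <stdatomic.h>
#include <stdio.h>

#define MEMTYPE_MASK	0x3UL	/* assumed bits reserved for the memtype */

static _Atomic unsigned long page_flags;	/* stands in for page->flags */

static void set_memtype(unsigned long memtype)
{
	unsigned long old = atomic_load(&page_flags);
	unsigned long new;

	do {
		/* recompute from the latest value; retry if another CPU raced us */
		new = (old & ~MEMTYPE_MASK) | (memtype & MEMTYPE_MASK);
	} while (!atomic_compare_exchange_weak(&page_flags, &old, new));
}

int main(void)
{
	set_memtype(0x2);
	printf("flags = %#lx\n", (unsigned long)atomic_load(&page_flags));
	return 0;
}

The key property is that concurrent updaters never block each other: a CPU
that loses the race simply recomputes the new value from the latest flags and
retries, so no global lock is needed for these per-page updates.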