From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Will Newton <will.newton@gmail.com>,
Luke Kenneth Casson Leighton <luke.leighton@gmail.com>,
linux-kernel@vger.kernel.org
Subject: Re: advice sought: practicality of SMP cache coherency implemented in assembler (and a hardware detect line)
Date: Mon, 28 Mar 2011 11:06:55 -0700
Message-ID: <20110328180655.GI2287@linux.vnet.ibm.com>
In-Reply-To: <20110326120847.71b6ae4d@lxorguk.ukuu.org.uk>
On Sat, Mar 26, 2011 at 12:08:47PM +0000, Alan Cox wrote:
> > Probably not. Is it a virtually or physically indexed cache? Do you
> > have a precise workload in mind? If you have a very precise workload
> > and you don't expect many write conflicts, then it could be made to
> > work.
>
> I'm unconvinced. The user space isn't the hard bit - little user memory
> is shared writable. The kernel data structures, on the other hand,
> especially in the RCU realm, are going to be interesting.
Indeed. One approach is to flush the caches on each rcu_dereference().
Of course, this assumes that the updaters flush their caches on each
smp_wmb(). You probably also need to make ACCESS_ONCE() flush caches
(which would automatically take care of rcu_dereference()). So this
might work, but it won't be fast.
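As a rough sketch of what such wrappers might look like -- where
nc_dcache_writeback_all() and nc_dcache_invalidate_all() are
hypothetical stand-ins for whatever cache-maintenance primitives the
hardware actually provides, not real kernel APIs:

/*
 * Sketch for a hypothetical non-coherent SMP port.  The two
 * nc_dcache_*() hooks are assumed arch-specific primitives.
 */

/* Updater side: write dirty lines back to memory, then order stores. */
#define nc_smp_wmb() \
do { \
	nc_dcache_writeback_all(); \
	smp_wmb(); \
} while (0)

/* Reader side: discard possibly stale lines before each marked load. */
#define NC_ACCESS_ONCE(x) ({ \
	nc_dcache_invalidate_all(); \
	ACCESS_ONCE(x); \
})

/* With that, rcu_dereference() comes along for free. */
#define nc_rcu_dereference(p) ({ \
	typeof(p) _p1 = NC_ACCESS_ONCE(p); \
	smp_read_barrier_depends(); \
	_p1; \
})

Invalidating the entire data cache on every marked load, and writing
it all back on every smp_wmb(), is exactly why this won't be fast.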
You can of course expect a lot of odd bugs when taking this approach.
The assumption of cache coherence is baked pretty deeply into most
shared-memory parallel software, as you might have heard in the 2005
discussion. ;-)
> > There are a number of mature cores out there that can do this already
> > and can be bought off the shelf, I wouldn't underestimate the
> > difficulty of getting your cache coherency protocol right particularly
> > on a limited time/resource budget.
>
> Architecturally you may want to look at running one kernel per device
> (remembering that you can share the non-writable kernel pages between
> different instances a bit if you are careful) - and in theory certain
> remote mappings.
>
> Basically it would become a cluster with a very very fast "page transfer"
> operation for moving data between nodes.
This works for applications coded specially for this platform, but unless
I am missing something, not for existing pthreads applications. It might
be able to handle things like Erlang that do parallelism without shared
memory.
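To illustrate the problem, here is a minimal made-up pthreads fragment
of the sort found throughout existing code:

#include <pthread.h>
#include <stdio.h>

static int shared_data;	/* ordinary writable shared memory */

static void *writer(void *unused)
{
	shared_data = 42;	/* plain store; no cache flush anywhere */
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, writer, NULL);
	pthread_join(tid, NULL);

	/*
	 * POSIX guarantees this load sees 42, pthread_join() being a
	 * synchronization point.  But neither the application nor a
	 * typical pthreads implementation flushes any caches to make
	 * that so -- hardware cache coherence is what delivers the
	 * writer's store to this CPU.
	 */
	printf("%d\n", shared_data);
	return 0;
}

On non-coherent hardware, the final load could hit a stale cache line
and print 0, with nothing in the program itself to blame.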
Thanx, Paul
Thread overview: 13+ messages
2011-03-25 21:52 advice sought: practicality of SMP cache coherency implemented in assembler (and a hardware detect line) Luke Kenneth Casson Leighton
2011-03-25 22:41 ` Will Newton
2011-03-26 12:08 ` Alan Cox
2011-03-28 18:06 ` Paul E. McKenney [this message]
2011-03-28 18:48 ` Luke Kenneth Casson Leighton
2011-03-28 22:12 ` Paul E. McKenney
2011-03-28 22:18 ` Alan Cox
2011-03-28 23:38 ` Luke Kenneth Casson Leighton
2011-03-28 23:39 ` Luke Kenneth Casson Leighton
2011-03-28 23:53 ` Paul E. McKenney
2011-03-29 9:16 ` Alan Cox
2011-04-07 12:09 ` Luke Kenneth Casson Leighton
2011-04-08 16:24 ` Paul E. McKenney