From: Ira Snyder <iws@ovro.caltech.edu>
To: Arnd Bergmann <arnd@arndb.de>
Cc: netdev@vger.kernel.org, Rusty Russell <rusty@rustcorp.com.au>,
linux-kernel@vger.kernel.org, linuxppc-dev@ozlabs.org,
shemminger@vyatta.com, David Miller <davem@davemloft.net>
Subject: Re: [PATCH RFC v5] net: add PCINet driver
Date: Tue, 13 Jan 2009 08:40:08 -0800
Message-ID: <20090113164007.GA7434@ovro.caltech.edu>
In-Reply-To: <200901131733.04341.arnd@arndb.de>
On Tue, Jan 13, 2009 at 05:33:03PM +0100, Arnd Bergmann wrote:
> On Tuesday 13 January 2009, Ira Snyder wrote:
> > On Tue, Jan 13, 2009 at 01:02:52PM +1030, Rusty Russell wrote:
> > >
> > > Interesting system: the guest being able to access the
> > > host's memory but not (fully) vice-versa makes this a
> > > little different from the current implementations where
> > > that was assumed. virtio assumes that the guest will
> > > publish buffers and someone else (ie. the host) will access them.
> >
> > The guest system /could/ publish all of its RAM, but with 256MB per
> > board, 19 boards per cPCI crate, that's way too much for a 32-bit PC to
> > map into its memory space. That's the real reason I use the 1MB
> > windows. I could make them bigger (16MB would be fine, I think), but I
> > doubt it would make much of a difference to the implementation.
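(To put numbers on it: 19 boards * 256MB = 4864MB, which is already
bigger than the entire 4GB address space of a 32-bit host before the
host's own RAM is counted, while 19 * 1MB of windows is trivial to map.)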
>
> The way we do it in the existing driver for cell, both sides export
> just a little part of their memory to the other side, and they
> also both get access to one channel of the DMA engine, which is
> enough to transfer larger data sections, as the DMA engine has
> access to all the memory on both sides.
So do you program one channel of the DMA engine from the host side and
another channel from the guest side?
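Just so we're talking about the same thing, here is roughly how I
picture the small exported regions being used (purely illustrative; the
struct and sizes are made up, not code from either driver):

    #include <linux/types.h>

    /* Each side maps a small window of the other side's RAM and keeps
     * a descriptor ring in it; only the bulk payloads move through the
     * DMA engine, which can reach all of both memories. */
    struct shared_window {
            u32 head;        /* producer index, written by the sender   */
            u32 tail;        /* consumer index, written by the receiver */
            struct {
                    u64 addr;        /* bus address of the payload buffer */
                    u32 len;         /* payload length in bytes           */
                    u32 flags;       /* e.g. a descriptor-valid bit       */
            } desc[64];
    };

The point being that the window only needs to be big enough for the
control traffic, not the data itself.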
I tried to avoid having the host program the DMA controller at all.
Using the DMAEngine API on the guest performed better than anything I
could achieve by programming the registers manually. I didn't use
chaining or any of the fancier features in my tests, though.
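For reference, what I did on the guest is essentially the stock
DMAEngine memcpy pattern. A minimal sketch, using the current dmaengine
wrapper names, with error handling trimmed and polling in place of a
completion callback (the function name is made up, and dst/src are
assumed to be bus addresses already mapped for the device):

    #include <linux/dmaengine.h>

    /* Illustrative only: one synchronous memcpy through whatever
     * memcpy-capable channel the dmaengine core hands us. */
    static int guest_dma_copy(dma_addr_t dst, dma_addr_t src, size_t len)
    {
            struct dma_async_tx_descriptor *tx;
            struct dma_chan *chan;
            dma_cookie_t cookie;
            dma_cap_mask_t mask;
            int ret = 0;

            dma_cap_zero(mask);
            dma_cap_set(DMA_MEMCPY, mask);

            chan = dma_request_channel(mask, NULL, NULL);
            if (!chan)
                    return -ENODEV;

            tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                           DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
            if (!tx) {
                    ret = -ENOMEM;
                    goto out;
            }

            cookie = dmaengine_submit(tx);
            dma_async_issue_pending(chan);

            /* Poll until the transfer finishes; a real driver would
             * request the channel once at probe time and use the
             * descriptor's completion callback instead. */
            if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
                    ret = -EIO;
    out:
            dma_release_channel(chan);
            return ret;
    }

All the hardware channel setup and descriptor handling stays inside the
dmaengine driver, which is presumably why it beat my hand-rolled
register programming.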
Ira