From: Ritu kaur <ritu.kaur.us@gmail.com>
To: Daniel Stodden <daniel.stodden@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: Shared memory and event channel
Date: Mon, 22 Feb 2010 14:16:30 -0800
Message-ID: <29b32d341002221416t4e00b899q18e07a69ad24b07f@mail.gmail.com>
In-Reply-To: <1266874463.27288.57.camel@agari.van.xensource.com>
Hi Daniel,
Please see inline...
On Mon, Feb 22, 2010 at 1:34 PM, Daniel Stodden
<daniel.stodden@citrix.com> wrote:
> On Mon, 2010-02-22 at 12:36 -0500, Ritu kaur wrote:
>
> >
> > I'm not sure right now how easy the control plane in XCP will
> > make this without the other domUs noticing, but maybe consider
> > something like:
> >
> > 1. Take the physical NIC out of the virtual network.
> > 2. Take the driver down.
> > 3. Pass access to the NIC to a domU.
> > 4. Let domU do the unspeakable.
> > 5.-7. Revert steps 3, 2, 1 to normal.
> >
> > This won't mess with the PV drivers. Get PCI passthrough to
> > work for 3 and 4 and you save yourself a tedious ring protocol
> > design. If not, consider doing the hardware programming in
> > dom0, because there's not much left for domU anyway.
> >
> > You need a split toolstack to get the dom0 network control
> > steps done on behalf of domU. It might be just a scripted
> > agent, accessible to domU via a couple of RPCs. It could also
> > turn out to be as simple as talking through the primary vif,
> > because the connection between domU and dom0 could remain
> > unaffected.
> >
> >
> >
> > PCI passthrough is via config changes and no code changes; if that's
> > the case, I am not sure how it would solve multiple domU accesses.
>
> My understanding after catching up a little on the past of this thread
> was that you want the network controller in some maintenance mode. Is
> this correct?
>
All I need is to access the NIC registers from the domUs (the network
controller will still be working normally). Using PCI passthrough solves the
problem for a single domU; however, it doesn't help when multiple domUs want
to read NIC registers (e.g. statistics).
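(For reference, the passthrough setup itself is mostly configuration. A
minimal sketch, assuming the classic pciback driver and a made-up BDF of
0000:03:00.0:

    # dom0: detach the NIC from its driver and hand it to pciback
    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

    # guest config: pass the whole device through to this one domU
    pci = [ '0000:03:00.0' ]

Exactly one domU can own the device this way, which is the limitation above.)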
>
> To get it there you will need to temporarily remove it from the virtual
> network topology.
>
> The PCI passthrough mode might solve your second problem, which is how
> the domU is supposed to access the device once it's been pulled off the
> data path.
>
> > For the second paragraph, do you have recommended readings? Frankly, I
> > don't completely understand the solution; any pointers appreciated.
>
> > In addition, the registers in the NIC are memory mapped (the ioremap
> > function is used, and in the ioctls memcpy_toio and memcpy_fromio are
> > used to write/read registers), and I wanted to know if it's possible
> > to map memory from dom0 into the domUs?
>
> Yes. This is the third problem, which is how to program a device. I'd
> recommend "Linux Device Drivers" on that subject. There are also free
> books like http://tldp.org/LDP/tlk/tlk-title.html. The examples are
> likely outdated, but the concepts remain.
>
> If the device is memory mapped, that doesn't mean it's in memory. It
> means it's in the machine memory address space. The difference should
> become clear once you're done understanding your driver.
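(To make that concrete for my case, a minimal sketch of how the driver
reaches the registers; the BAR index and the register offset below are
hypothetical:

    #include <linux/pci.h>
    #include <linux/io.h>

    #define REG_STATS 0x40  /* hypothetical statistics register offset */

    static u32 read_nic_stat(struct pci_dev *pdev)
    {
        void __iomem *regs;
        u32 val;

        /* Map BAR 0, a machine address rather than RAM, into the
         * kernel's virtual address space. */
        regs = ioremap(pci_resource_start(pdev, 0),
                       pci_resource_len(pdev, 0));
        if (!regs)
            return 0;

        val = ioread32(regs + REG_STATS);  /* read one register */
        iounmap(regs);
        return val;
    }

The mapping only creates a kernel virtual address for the device's machine
address; nothing ends up in memory.)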
>
> Is this the reason why you are so concerned about the memory sharing
> mechanism?
No, not really. I wanted to use shared memory between domains as a solution
for multiple domU access (since PCI passthrough doesn't solve it).
The clarification I wanted here (given that the NIC registers are memory
mapped): can I take the "machine memory address space" (which dom0 has
mapped) and remap it into the domUs, so that I get multiple domU access?
To summarize:
1. The PCI passthrough mechanism works for a single domU.
2. Shared memory rings between domains would give multiple domUs access,
but that doesn't look like a workable solution (a dom0-side sketch of a
shared-page variant follows below).
3. Take the mapped machine address in dom0 and remap it into the domUs
(just another thought, not sure it works); this is the point I wanted
clarified.
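(On points 2 and 3, the shape I have in mind is a dom0-side page that the
guests map, rather than remapping the MMIO itself. A minimal dom0 sketch,
assuming the Linux pvops grant-table API; DOMU_ID and the refresh path are
hypothetical:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <xen/grant_table.h>
    #include <asm/xen/page.h>

    #define DOMU_ID 1  /* hypothetical guest domain id */

    static void *stats_page;
    static grant_ref_t stats_gref;

    static int publish_stats_page(void)
    {
        int ref;

        /* An ordinary RAM page; dom0 keeps the MMIO mapping private
         * and copies the registers into this page periodically. */
        stats_page = (void *)get_zeroed_page(GFP_KERNEL);
        if (!stats_page)
            return -ENOMEM;

        /* Grant the guest read-only access (last argument non-zero). */
        ref = gnttab_grant_foreign_access(DOMU_ID,
                                          virt_to_mfn(stats_page), 1);
        if (ref < 0) {
            free_page((unsigned long)stats_page);
            return ref;
        }
        stats_gref = ref;

        /* Advertise stats_gref to the guest, e.g. via xenstore; the
         * guest maps it with GNTTABOP_map_grant_ref, and dom0 refreshes
         * the page with memcpy_fromio() from the ioremapped registers. */
        return 0;
    }

One grant per interested domU; the machine address itself never has to be
exposed to the guests.)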
Thanks
> The good news is that now you won't need to bother; that's only
> for memory. :)
>
> Daniel
>
>
>