From: Ritu kaur <ritu.kaur.us@gmail.com>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: modifying drivers
Date: Fri, 19 Feb 2010 18:15:56 -0800
Message-ID: <29b32d341002191815t2b06944s8670721e7325d8d8@mail.gmail.com>
In-Reply-To: <4B7F2B34.6040608@goop.org>
Hi Jeremy,

Thanks for the clarification. However, here is what I don't understand (I have
read the documents and looked into the driver code): both netfront and netback
register with xenbus and monitor the "vif" interface. From the netback point of
view I clearly understand the communication, as I see vif<domid>:<intf-id>
being created in dom0. However, when I look into domU, I do not see any vif
interface (I looked with the ifconfig and ifconfig -a commands). Is it hidden
from the user? In domU I just see "eth*" interfaces. How do the eth* interfaces
interact with netfront? I looked under /lib/modules/linux*/... for any pseudo
drivers that might interact with eth*, but didn't get any answers. I am
completely confused. By the way, I am using Debian etch 4.0 as the domU.
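(For context, a simplified sketch of how netfront hooks up, paraphrased from
drivers/net/xen-netfront.c; details vary by kernel version, and netfront_info
is reduced to a placeholder here. Netfront binds to the "vif" device type in
xenstore, but what it registers is an ordinary Linux netdev, which is why domU
sees eth* rather than vif*:)

    /* Simplified sketch, paraphrased from drivers/net/xen-netfront.c. */
    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>
    #include <xen/xenbus.h>

    struct netfront_info { /* per-device state: rings, event channel, ... */ };

    static int netfront_probe(struct xenbus_device *dev,
                              const struct xenbus_device_id *id)
    {
            struct net_device *netdev;

            netdev = alloc_etherdev(sizeof(struct netfront_info));
            if (!netdev)
                    return -ENOMEM;
            /* ... set up netdev ops, shared rings, event channel ... */
            return register_netdev(netdev);   /* appears in domU as ethN */
    }

    static const struct xenbus_device_id netfront_ids[] = {
            { "vif" },      /* matches the device/vif/<n> nodes in xenstore */
            { "" }
    };

    static struct xenbus_driver netfront_driver = {
            .name  = "vif",
            .ids   = netfront_ids,
            .probe = netfront_probe,
            /* registered at module init via
             * xenbus_register_frontend(&netfront_driver) */
    };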
Jeremy/Ian, do you have any inputs on ioctl support?
Thanks
On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> On 02/19/2010 02:30 PM, Ritu kaur wrote:
>
>> Thanks for the clarification. In our team meeting we decided to drop the
>> netback changes to support exclusive access and to go with the xe command
>> line or the XenCenter way of doing it (we are using Citrix XenServer). I had
>> a couple of follow-up questions related to Xen.
>>
>> 1. Is it correct that the netfront driver (or any *front driver) has to be
>> explicitly integrated into or compiled in the guest OS? The reason I ask is:
>>
>
> An HVM domain can be completely unmodified, but it will be using emulated
> hardware devices with its normal drivers.
>
>> a. The documents I have read mention that a guest OS can run without any
>> modification; however, if the above is true, we have to make sure the guest
>> OSes we use are compiled with the relevant *front drivers.
>>
>
> An HVM domain can use PV drivers to optimise its IO path by bypassing the
> emulated devices and talking directly to the backends. PV domains always
> use PV drivers (but they've already been modified).
>
>> b. We had made some changes to netback and netfront (as mentioned in the
>> previous email). When compiling the kernel for dom0, it includes both
>> netfront and netback, and we assumed that via some mechanism this netfront
>> driver would be integrated/installed into guest domains when they are
>> installed.
>>
>
> No. A dom0 kernel doesn't have much use for frontends. They're usually
> present because a given kernel can run in either the dom0 or domU roles.
>
>> 2. Is all front/back driver communication via xenbus only?
>>
>
> Xenbus is used to pass small amounts of control/status/config information
> between front and backends. Bulk data transfer is usually handled with
> shared pages containing ring buffers, and event channels for event
> signalling.
>
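(To make the shared-ring pattern Jeremy describes concrete, here is a minimal
frontend-side sketch using the macros from Xen's public io/ring.h header; the
"demo" request/response types and function names are hypothetical
placeholders:)

    /* Minimal sketch of a shared ring plus event-channel kick.
     * "demo" and its request/response structs are hypothetical. */
    #include <linux/gfp.h>
    #include <xen/events.h>
    #include <xen/interface/io/ring.h>

    struct demo_request  { uint32_t id; uint32_t op; };
    struct demo_response { uint32_t id; int32_t  status; };

    /* Generates the demo_sring, demo_front_ring and demo_back_ring types. */
    DEFINE_RING_TYPES(demo, struct demo_request, struct demo_response);

    static struct demo_front_ring front;

    static int demo_ring_init(void)
    {
            struct demo_sring *sring;

            sring = (struct demo_sring *)get_zeroed_page(GFP_KERNEL);
            if (!sring)
                    return -ENOMEM;
            SHARED_RING_INIT(sring);
            FRONT_RING_INIT(&front, sring, PAGE_SIZE);
            /* This page would then be granted to the backend domain. */
            return 0;
    }

    static void demo_send_request(int irq)
    {
            struct demo_request *req;
            int notify;

            /* Claim the next slot; req_prod_pvt is our private producer. */
            req = RING_GET_REQUEST(&front, front.req_prod_pvt);
            req->id = 1;
            req->op = 0;
            front.req_prod_pvt++;

            /* Publish the request; check whether the backend needs a kick. */
            RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&front, notify);
            if (notify)
                    notify_remote_via_irq(irq);   /* event channel signal */
    }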
>> 3. Supporting ioctl calls. Our driver has ioctl support to read/write
>> hardware registers, and one solution was to use the PCI passthrough
>> mechanism; however, that binds the NIC to a specific domU, and we do not
>> want that. We would like multiple users to have access to the hw registers
>> (mainly stats and other things) via guest domains, and to be able to access
>> them simultaneously. For this, we decided to go with a shared memory/event
>> channel mechanism similar to the one the front and back drivers use. Can
>> you please provide some inputs on this?
>>
>
> It's hard to make any suggestions without knowing what your hardware is or
> what the use-cases are for these ioctls. Are you saying that you want to
> give multiple domUs direct unrestricted (read only?) access to the same set
> of registers? What kind of stats? Do guests need to read them at a very
> high rate, or could they fetch accumulated results at a lower rate?
>
> J
>
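(For reference, the Linux-side plumbing behind the shared-page/event-channel
mechanism proposed in question 3 would look roughly like this. This is only a
sketch, assuming the grant-table and xenbus helpers from the pvops kernel; the
demo_* names are hypothetical:)

    /* Rough sketch: export a read-only stats page to a peer domain and set
     * up an event channel for notifications. demo_* names are hypothetical. */
    #include <linux/gfp.h>
    #include <linux/interrupt.h>
    #include <xen/events.h>
    #include <xen/grant_table.h>
    #include <xen/xenbus.h>
    #include <asm/xen/page.h>

    static irqreturn_t demo_stats_interrupt(int irq, void *dev_id)
    {
            /* The peer signalled us, e.g. "please refresh the stats page". */
            return IRQ_HANDLED;
    }

    static int demo_export_stats(struct xenbus_device *dev, domid_t peer)
    {
            void *page = (void *)get_zeroed_page(GFP_KERNEL);
            int gref, evtchn, irq, err;

            if (!page)
                    return -ENOMEM;

            /* Grant the peer domain read-only access to the stats page. */
            gref = gnttab_grant_foreign_access(peer, virt_to_mfn(page), 1);
            if (gref < 0)
                    return gref;

            /* Allocate an unbound event channel for the peer to bind to... */
            err = xenbus_alloc_evtchn(dev, &evtchn);
            if (err)
                    return err;

            /* ...and route its events to our handler. */
            irq = bind_evtchn_to_irqhandler(evtchn, demo_stats_interrupt,
                                            0, "demo-stats", NULL);
            if (irq < 0)
                    return irq;

            /* gref and evtchn would then be advertised in xenstore so the
             * peer can map the page and bind the other end of the channel. */
            return 0;
    }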