* modifying drivers
From: Ritu kaur @ 2010-02-18 16:27 UTC
  To: xen-devel


Hi,

I am modifying the netback driver so that it allows access to only one
domU (the one with the lowest domid) when multiple domUs are present. When
the domU with the lowest domid is suspended, the next domU in the list gets
access. I believe this can be done via xe/xm commands or via Citrix XenCenter,
by selecting or deselecting NICs during VM installation or by adding and
deleting NICs, but I wanted to control this from the netback driver. To do
this, I:

1. Kept track of the devices created via the netback_probe function, which
is called for every device.
2. Used the domid field in the netif_st data structure.
3. Added a new function, netif_check_domid, next to netif_schedulable; it
checks whether netif->domid is the lowest one (using the list built in
step 1).
4. Function netif_schedulable is called from
a. tx_queue_callback
b. netif_be_start_xmit
c. net_rx_action
d. add_to_net_schedule_tail
e. netif_be_int
This works fine for the first VM that comes up. However, bringing up a
subsequent VM causes problems that reboot dom0 itself.

5. If I remove my check from netif_be_start_xmit only, multiple VMs can be
up and only the first VM can access netback. However, this breaks the second
behaviour I want, i.e. when the first VM is suspended the next VM in the
list should get access. I added kernel printks in the functions above and
none of them are called after the first VM is suspended and a subsequent VM
tries to access the interface.
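
For reference, the check I added looks roughly like this (simplified from my
patch; "tracking" is a new struct list_head field I added to netif_st, the
list is filled in netback_probe and emptied when a device goes away, and the
real patch has to use irq-safe locking since some of the call sites run in
interrupt context):

    static LIST_HEAD(active_netifs);
    static DEFINE_SPINLOCK(active_netifs_lock);

    /* Return 1 if this netif currently has the lowest domid, i.e. it is
     * the interface allowed to use netback. */
    static int netif_check_domid(netif_t *netif)
    {
            netif_t *cur;
            int allowed = 1;

            spin_lock(&active_netifs_lock);
            list_for_each_entry(cur, &active_netifs, tracking)
                    if (cur->domid < netif->domid)
                            allowed = 0;
            spin_unlock(&active_netifs_lock);

            return allowed;
    }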

I would appreciate input from experts on this and on how to proceed with
debugging.

Thanks


* Re: modifying drivers
From: Ian Campbell @ 2010-02-18 16:39 UTC
  To: Ritu kaur; +Cc: xen-devel@lists.xensource.com

On Thu, 2010-02-18 at 16:27 +0000, Ritu kaur wrote:
> Hi,
> 
> I am modifying netback driver code such that it allows access to only
> one domU(based on lowest domid) when multiple domUs are present. When
> the domU with the lowest domid is suspended then the next domU in the
> list will get access.

Why?

>  I believe it can be done via xe/xm commands or via Citrix Xencenter
> by selecting or deselecting during vm installation or by adding and
> deleting nics, however, I wanted to control this from netback driver.

The toolstack is exactly the correct place to make and implement this
sort of policy decision -- it has no place in the kernel netback driver.

Ian.

>  For this,
> 
> 1.  keep track of devices created via netback_probe function which is
> called for every device. 
> 2. Use domid field in netif_st data structure
> 3. Added new function netif_check_domid and placed it along with
> netif_schedulable, I add a check if netif->domid is the lowest one(via
> list created in step 1)
> 4. Function netif_schedulable is called from 
> a. tx_queue_callback
> b. netif_be_start_xmit
> c. net_rx_action
> d. add_to_net_schedule_tail
> e. netif_be_int
> 
> This works fine for the first vm that comes up. However, subsequent vm
> bringup has issues which reboots dom0 itself. 
> 
> 5. I removed the function added by me in function netif_be_start_xmit
> only, this allows multiple vm's to be up and will allow only first vm
> to access netback. However, this has issues with the second
> functionality I would like to have i.e when first vm is suspended,
> next vm in the list should get access. I added kernel printfs in above
> functions and none of them are called after first vm is suspended and
> subsequent vm is trying to access.
> 
> Wanted to know inputs from experts on this and how to proceed with
> debugging.
> 
> Thanks
> 


* Re: modifying drivers
From: Ritu kaur @ 2010-02-19  0:03 UTC
  To: Ian Campbell; +Cc: xen-devel@lists.xensource.com


On Thu, Feb 18, 2010 at 8:39 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Thu, 2010-02-18 at 16:27 +0000, Ritu kaur wrote:
> > Hi,
> >
> > I am modifying netback driver code such that it allows access to only
> > one domU(based on lowest domid) when multiple domUs are present. When
> > the domU with the lowest domid is suspended then the next domU in the
> > list will get access.
>
> Why?
>
> >  I believe it can be done via xe/xm commands or via Citrix Xencenter
> > by selecting or deselecting during vm installation or by adding and
> > deleting nics, however, I wanted to control this from netback driver.
>
> The toolstack is exactly the correct place to make and implement this
> sort of policy decision -- it has no place in the kernel netback driver.
>

Hi Ian,

Consider a case where I have multiple domUs, each with the NIC installed.
From one domU, I would like to run nightly scripts that:

1. Download the FPGA code.
2. Bring up the driver.
3. Start the test scripts in that domU, which check the packets
transmitted/received. During this time, I would like exclusive access for
my domU only.

One way to do it is via the xe/xm CLI or XenCenter, by deleting the NIC from
the other domU, or by letting the user of the other domU know that tests are
running and asking them not to access the NIC (if there is any other way to
do it, let me know). To me that is overhead, and we want to avoid it by
modifying the netback driver. By the way, the plan is to have a separate
netback/netfront pair for our NIC so that it doesn't meddle with the
existing drivers. Hence I would like some input on debugging the netback
tx/rx code.

Thanks


> Ian.
>
> >  For this,
> >
> > 1.  keep track of devices created via netback_probe function which is
> > called for every device.
> > 2. Use domid field in netif_st data structure
> > 3. Added new function netif_check_domid and placed it along with
> > netif_schedulable, I add a check if netif->domid is the lowest one(via
> > list created in step 1)
> > 4. Function netif_schedulable is called from
> > a. tx_queue_callback
> > b. netif_be_start_xmit
> > c. net_rx_action
> > d. add_to_net_schedule_tail
> > e. netif_be_int
> >
> > This works fine for the first vm that comes up. However, subsequent vm
> > bringup has issues which reboots dom0 itself.
> >
> > 5. I removed the function added by me in function netif_be_start_xmit
> > only, this allows multiple vm's to be up and will allow only first vm
> > to access netback. However, this has issues with the second
> > functionality I would like to have i.e when first vm is suspended,
> > next vm in the list should get access. I added kernel printfs in above
> > functions and none of them are called after first vm is suspended and
> > subsequent vm is trying to access.
> >
> > Wanted to know inputs from experts on this and how to proceed with
> > debugging.
> >
> > Thanks
> >
>
>
>


* Re: modifying drivers
From: Ian Campbell @ 2010-02-19  9:07 UTC
  To: Ritu kaur; +Cc: xen-devel@lists.xensource.com

On Fri, 2010-02-19 at 00:03 +0000, Ritu kaur wrote:
> 
> 
> On Thu, Feb 18, 2010 at 8:39 AM, Ian Campbell
> <Ian.Campbell@citrix.com> wrote:
>         On Thu, 2010-02-18 at 16:27 +0000, Ritu kaur wrote:
>         > Hi,
>         >
>         > I am modifying netback driver code such that it allows
>         access to only
>         > one domU(based on lowest domid) when multiple domUs are
>         present. When
>         > the domU with the lowest domid is suspended then the next
>         domU in the
>         > list will get access.
>         
>         
>         Why?
>         
>         >  I believe it can be done via xe/xm commands or via Citrix
>         Xencenter
>         > by selecting or deselecting during vm installation or by
>         adding and
>         > deleting nics, however, I wanted to control this from
>         netback driver.
>         
>         
>         The toolstack is exactly the correct place to make and
>         implement this
>         sort of policy decision -- it has no place in the kernel
>         netback driver.
> 
> Hi Ian,
> 
> Consider a case when I have multiple domU's and both have NIC's
> installed. From one domU, i would like to run nightly scripts which
> involve
> 
> 1. download fpga code
> 2. bringup the driver
> 3. start the test scripts in a domU which checks for packets
> transmitted/received. During this time, I would like exclusive access
> to my domU only. 
> 
> One way to do it is via xe/xm cli or xencenter deleting the NIC from
> the other domU or letting the user of the other domU know that tests
> are running and not access the NIC (if there is any other way to do it
> let me know), which to me is a overhead

It's not overhead, it is the *right* way to implement control operations
of this sort. Your QA scripts are ideally placed to do this.

>  and we want to avoid it by modifying netback drivers. By the way,
> plan is to have seperate netback/netfront for our NIC such that it
> doesn't meddle with existing drivers.

What functionality is the existing netback missing that requires you to
fork it? netback is supposed to be independent of any specific hardware.
Netback usually reaches the actual physical NIC via bridging, routing or
NAT within domain 0 (bridging being the most common), through the regular
NIC driver -- there is nothing Xen- or netback-specific about this
operation.

You should be aware that requiring users of your hardware to install
completely new frontend and backend drivers, as well as new toolstack
support, is likely to be a large barrier to the use of your hardware.

Furthermore, the Xen project is unlikely to be interested in a fork of
netback to support a single piece of hardware, so your chances of getting
your work accepted are low. The same is true of the upstream OS projects
(e.g. Linux) where the net{front,back} drivers actually run.

Are you sure you shouldn't be looking at PCI passthrough support or
something of that nature?

Ian.

>  Hence would like some inputs w.r.t debugging the netback tx/rx code.
> 
> Thanks
> 
> 
>         
>         Ian.
>         
>         
>         >  For this,
>         >
>         > 1.  keep track of devices created via netback_probe function
>         which is
>         > called for every device.
>         > 2. Use domid field in netif_st data structure
>         > 3. Added new function netif_check_domid and placed it along
>         with
>         > netif_schedulable, I add a check if netif->domid is the
>         lowest one(via
>         > list created in step 1)
>         > 4. Function netif_schedulable is called from
>         > a. tx_queue_callback
>         > b. netif_be_start_xmit
>         > c. net_rx_action
>         > d. add_to_net_schedule_tail
>         > e. netif_be_int
>         >
>         > This works fine for the first vm that comes up. However,
>         subsequent vm
>         > bringup has issues which reboots dom0 itself.
>         >
>         > 5. I removed the function added by me in function
>         netif_be_start_xmit
>         > only, this allows multiple vm's to be up and will allow only
>         first vm
>         > to access netback. However, this has issues with the second
>         > functionality I would like to have i.e when first vm is
>         suspended,
>         > next vm in the list should get access. I added kernel
>         printfs in above
>         > functions and none of them are called after first vm is
>         suspended and
>         > subsequent vm is trying to access.
>         >
>         > Wanted to know inputs from experts on this and how to
>         proceed with
>         > debugging.
>         >
>         > Thanks
>         >
>         
>         
>         
> 


* Re: modifying drivers
From: Ritu kaur @ 2010-02-19 17:12 UTC
  To: Ian Campbell; +Cc: xen-devel@lists.xensource.com


On Fri, Feb 19, 2010 at 1:07 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Fri, 2010-02-19 at 00:03 +0000, Ritu kaur wrote:
> >
> >
> > On Thu, Feb 18, 2010 at 8:39 AM, Ian Campbell
> > <Ian.Campbell@citrix.com> wrote:
> >         On Thu, 2010-02-18 at 16:27 +0000, Ritu kaur wrote:
> >         > Hi,
> >         >
> >         > I am modifying netback driver code such that it allows
> >         access to only
> >         > one domU(based on lowest domid) when multiple domUs are
> >         present. When
> >         > the domU with the lowest domid is suspended then the next
> >         domU in the
> >         > list will get access.
> >
> >
> >         Why?
> >
> >         >  I believe it can be done via xe/xm commands or via Citrix
> >         Xencenter
> >         > by selecting or deselecting during vm installation or by
> >         adding and
> >         > deleting nics, however, I wanted to control this from
> >         netback driver.
> >
> >
> >         The toolstack is exactly the correct place to make and
> >         implement this
> >         sort of policy decision -- it has no place in the kernel
> >         netback driver.
> >
> > Hi Ian,
> >
> > Consider a case when I have multiple domU's and both have NIC's
> > installed. From one domU, i would like to run nightly scripts which
> > involve
> >
> > 1. download fpga code
> > 2. bringup the driver
> > 3. start the test scripts in a domU which checks for packets
> > transmitted/received. During this time, I would like exclusive access
> > to my domU only.
> >
> > One way to do it is via xe/xm cli or xencenter deleting the NIC from
> > the other domU or letting the user of the other domU know that tests
> > are running and not access the NIC (if there is any other way to do it
> > let me know), which to me is a overhead
>
> It's not overhead, it is the *right* way to implement control operations
> of this sort. Your QA scripts are ideally placed to do this.
>

Can you elaborate on this? If I understand correctly, you are saying that QA
scripts written by us can be used to allow or restrict access, i.e. run these
scripts from dom0 and allow or restrict access to a specific domU? I was not
aware this is possible without modifying the toolstack.

>
> >  and we want to avoid it by modifying netback drivers. By the way,
> > plan is to have seperate netback/netfront for our NIC such that it
> > doesn't meddle with existing drivers.
>
> What functionality is the existing netback missing that requires you to
> fork it? netback is supposed to be independent of any specific hardware.
> Netback usually interacts with actual physical NICs via bridging,
> routing or NAT within domain 0 (bridging being most common) to the
> regular NIC driver -- there is nothing Xen or netback specific about
> this operation.
>
> You should be aware the requiring users of your hardware to install
> complete new front and backend drivers as well as new toolstack support
> is likely to be a large barrier to the use of your hardware.
>
> Furthermore the Xen project is not likely to be interested in a fork of
> netback to support a single piece of hardware so your chances of getting
> your work accepted are low. The same is true of the upstream OS projects
> (e.g. Linux) where the net{front,back} drivers actually run.
>
> Are you sure you shouldn't be looking at PCI passthrough support or
> something of that nature?
>
>
We are looking into this option as well. However, from the following wiki it
seems we have to compile the guest OS with pcifront driver support.

http://wiki.xensource.com/xenwiki/Assign_hardware_to_DomU_with_PCIBack_as_module?highlight=%28pci%29

We are looking at different ways to accomplish the task and clearly we would
like to test out all options before making a decision.

Modifying netback is one of the options (not the final one), and since the
changes we are making are not really netback-specific, modifying and testing
it doesn't hurt either. I would appreciate it if you or someone on the list
could provide some input on debugging the issue I mentioned in my first email.

Thanks

> Ian.
>
> >  Hence would like some inputs w.r.t debugging the netback tx/rx code.
> >
> > Thanks
> >
> >
> >
> >         Ian.
> >
> >
> >         >  For this,
> >         >
> >         > 1.  keep track of devices created via netback_probe function
> >         which is
> >         > called for every device.
> >         > 2. Use domid field in netif_st data structure
> >         > 3. Added new function netif_check_domid and placed it along
> >         with
> >         > netif_schedulable, I add a check if netif->domid is the
> >         lowest one(via
> >         > list created in step 1)
> >         > 4. Function netif_schedulable is called from
> >         > a. tx_queue_callback
> >         > b. netif_be_start_xmit
> >         > c. net_rx_action
> >         > d. add_to_net_schedule_tail
> >         > e. netif_be_int
> >         >
> >         > This works fine for the first vm that comes up. However,
> >         subsequent vm
> >         > bringup has issues which reboots dom0 itself.
> >         >
> >         > 5. I removed the function added by me in function
> >         netif_be_start_xmit
> >         > only, this allows multiple vm's to be up and will allow only
> >         first vm
> >         > to access netback. However, this has issues with the second
> >         > functionality I would like to have i.e when first vm is
> >         suspended,
> >         > next vm in the list should get access. I added kernel
> >         printfs in above
> >         > functions and none of them are called after first vm is
> >         suspended and
> >         > subsequent vm is trying to access.
> >         >
> >         > Wanted to know inputs from experts on this and how to
> >         proceed with
> >         > debugging.
> >         >
> >         > Thanks
> >         >
> >
> >
> >
> >
>
>
>


* Re: modifying drivers
From: Ian Campbell @ 2010-02-19 17:24 UTC
  To: Ritu kaur; +Cc: xen-devel@lists.xensource.com

On Fri, 2010-02-19 at 17:12 +0000, Ritu kaur wrote:
> 
> 
> On Fri, Feb 19, 2010 at 1:07 AM, Ian Campbell
> <Ian.Campbell@citrix.com> wrote:

>         It's not overhead, it is the *right* way to implement control
>         operations
>         of this sort. Your QA scripts are ideally placed to do this.
> 
> Can you elaborate on this? If I understand this correctly, you are
> saying QA scripts written by us can be used to  access or restrict
> i.e run these scripts from dom0 and allow or restrict access to a
> specific domU? I am not aware if this is possible without modifying
> toolstack?

You can use "xm network-attach" and "xm network-detach" to add and
remove a guest VIF to ensure only the guest you wish to test has a vif.
You can call these commands from scripts etc. You can also modify (or
generate) your guest configuration files as necessary to ensure guests
are started with the VIFs you require. Nothing here should require
toolstack or kernel modifications.

>         
>         Are you sure you shouldn't be looking at PCI passthrough
>         support or
>         something of that nature?
>         
> 
> We are looking into this option as well. However from the following
> wiki it seems we have to compile guest OS with pcifrontend driver
> support.

Most PV guests have this support enabled out of the box.

> http://wiki.xensource.com/xenwiki/Assign_hardware_to_DomU_with_PCIBack_as_module?highlight=%28pci%29
>  
> We are looking at different ways to accomplish the task and clearly we
> would like to test out all options before making a decision. 
> 
> Modifying netback is one of the options(not the final one) and clearly
> the changes we are doing has nothing netback specific, modifying and
> testing it out doesn't hurt either. Appreciate if you or someone on
> the list can provide some inputs on debugging the issue I mentioned in
> my first email.

I think you need to take a step back and become familiar with how a Xen
system currently works and is normally configured and managed before you
dive in and start modifying kernel drivers and toolstacks. You are in
danger of going completely off into the weeds at the moment.

Ian.

> 
> Thanks
> 
> 
>         Ian.
>         
>         
>         >  Hence would like some inputs w.r.t debugging the netback
>         tx/rx code.
>         >
>         > Thanks
>         >
>         >
>         >
>         >         Ian.
>         >
>         >
>         >         >  For this,
>         >         >
>         >         > 1.  keep track of devices created via
>         netback_probe function
>         >         which is
>         >         > called for every device.
>         >         > 2. Use domid field in netif_st data structure
>         >         > 3. Added new function netif_check_domid and placed
>         it along
>         >         with
>         >         > netif_schedulable, I add a check if netif->domid
>         is the
>         >         lowest one(via
>         >         > list created in step 1)
>         >         > 4. Function netif_schedulable is called from
>         >         > a. tx_queue_callback
>         >         > b. netif_be_start_xmit
>         >         > c. net_rx_action
>         >         > d. add_to_net_schedule_tail
>         >         > e. netif_be_int
>         >         >
>         >         > This works fine for the first vm that comes up.
>         However,
>         >         subsequent vm
>         >         > bringup has issues which reboots dom0 itself.
>         >         >
>         >         > 5. I removed the function added by me in function
>         >         netif_be_start_xmit
>         >         > only, this allows multiple vm's to be up and will
>         allow only
>         >         first vm
>         >         > to access netback. However, this has issues with
>         the second
>         >         > functionality I would like to have i.e when first
>         vm is
>         >         suspended,
>         >         > next vm in the list should get access. I added
>         kernel
>         >         printfs in above
>         >         > functions and none of them are called after first
>         vm is
>         >         suspended and
>         >         > subsequent vm is trying to access.
>         >         >
>         >         > Wanted to know inputs from experts on this and how
>         to
>         >         proceed with
>         >         > debugging.
>         >         >
>         >         > Thanks
>         >         >
>         >
>         >
>         >
>         >
>         
>         
>         
> 


* Re: modifying drivers
From: Ritu kaur @ 2010-02-19 22:30 UTC
  To: Ian Campbell; +Cc: xen-devel@lists.xensource.com


Hi Ian,

Thanks for the clarification. In our team meeting we decided to drop the
netback changes for exclusive access and to use the xe command line or
XenCenter instead (we are using Citrix XenServer). I had a couple of
follow-up questions related to Xen.

1. Is it correct that the netfront driver (or any *front driver) has to be
explicitly integrated into or compiled in the guest OS? The reason I ask is:

a. The documents I have read say a guest OS can run without any
modification; however, if the above is true, we have to make sure the guest
OSes we use are compiled with the relevant *front drivers.

b. We had made some changes to netback and netfront (as mentioned in the
previous email). When compiling the kernel for dom0 it includes both
netfront and netback, and I assumed that via some mechanism this netfront
driver would be integrated/installed into guest domains when they are
installed.

2. Is all front/back driver communication via xenbus only?

3. Supporting ioctl calls. Our driver has ioctl support to read and write
hardware registers, and one solution was to use the PCI passthrough
mechanism; however, that binds the NIC to a specific domU, which we do not
want. We would like multiple users in guest domains to have access to the
hardware registers (mainly stats and other things) and to be able to access
them simultaneously. For this, we decided to go with a shared memory/event
channel mechanism similar to the front and back drivers. Can you please
provide some input on this?

Thanks


On Fri, Feb 19, 2010 at 9:24 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Fri, 2010-02-19 at 17:12 +0000, Ritu kaur wrote:
> >
> >
> > On Fri, Feb 19, 2010 at 1:07 AM, Ian Campbell
> > <Ian.Campbell@citrix.com> wrote:
>
> >         It's not overhead, it is the *right* way to implement control
> >         operations
> >         of this sort. Your QA scripts are ideally placed to do this.
> >
> > Can you elaborate on this? If I understand this correctly, you are
> > saying QA scripts written by us can be used to  access or restrict
> > i.e run these scripts from dom0 and allow or restrict access to a
> > specific domU? I am not aware if this is possible without modifying
> > toolstack?
>
> You can use "xm network-attach" and "xm network-detach" to add and
> remove a guest VIF to ensure only the guest you wish to test has a vif.
> You can call these commands from scripts etc. You can also modify (or
> generate) your guest configuration files as necessary to ensure guests
> are started with the VIFs you require. Nothing here should require
> toolstack or kernel modifications.
>
> >
> >         Are you sure you shouldn't be looking at PCI passthrough
> >         support or
> >         something of that nature?
> >
> >
> > We are looking into this option as well. However from the following
> > wiki it seems we have to compile guest OS with pcifrontend driver
> > support.
>
> Most PV guests have this support enabled out of the box.
>
> >
> http://wiki.xensource.com/xenwiki/Assign_hardware_to_DomU_with_PCIBack_as_module?highlight=%28pci%29
> >
> > We are looking at different ways to accomplish the task and clearly we
> > would like to test out all options before making a decision.
> >
> > Modifying netback is one of the options(not the final one) and clearly
> > the changes we are doing has nothing netback specific, modifying and
> > testing it out doesn't hurt either. Appreciate if you or someone on
> > the list can provide some inputs on debugging the issue I mentioned in
> > my first email.
>
> I think you need to take a step back and become familiar with how a Xen
> system currently works and is normally configured and managed before you
> dive in and start modifying kernel drivers and toolstacks. You are in
> danger of going completely off into the weeds at the moment.
>
> Ian.
>
> >
> > Thanks
> >
> >
> >         Ian.
> >
> >
> >         >  Hence would like some inputs w.r.t debugging the netback
> >         tx/rx code.
> >         >
> >         > Thanks
> >         >
> >         >
> >         >
> >         >         Ian.
> >         >
> >         >
> >         >         >  For this,
> >         >         >
> >         >         > 1.  keep track of devices created via
> >         netback_probe function
> >         >         which is
> >         >         > called for every device.
> >         >         > 2. Use domid field in netif_st data structure
> >         >         > 3. Added new function netif_check_domid and placed
> >         it along
> >         >         with
> >         >         > netif_schedulable, I add a check if netif->domid
> >         is the
> >         >         lowest one(via
> >         >         > list created in step 1)
> >         >         > 4. Function netif_schedulable is called from
> >         >         > a. tx_queue_callback
> >         >         > b. netif_be_start_xmit
> >         >         > c. net_rx_action
> >         >         > d. add_to_net_schedule_tail
> >         >         > e. netif_be_int
> >         >         >
> >         >         > This works fine for the first vm that comes up.
> >         However,
> >         >         subsequent vm
> >         >         > bringup has issues which reboots dom0 itself.
> >         >         >
> >         >         > 5. I removed the function added by me in function
> >         >         netif_be_start_xmit
> >         >         > only, this allows multiple vm's to be up and will
> >         allow only
> >         >         first vm
> >         >         > to access netback. However, this has issues with
> >         the second
> >         >         > functionality I would like to have i.e when first
> >         vm is
> >         >         suspended,
> >         >         > next vm in the list should get access. I added
> >         kernel
> >         >         printfs in above
> >         >         > functions and none of them are called after first
> >         vm is
> >         >         suspended and
> >         >         > subsequent vm is trying to access.
> >         >         >
> >         >         > Wanted to know inputs from experts on this and how
> >         to
> >         >         proceed with
> >         >         > debugging.
> >         >         >
> >         >         > Thanks
> >         >         >
> >         >
> >         >
> >         >
> >         >
> >
> >
> >
> >
>
>
>


* Re: modifying drivers
From: Jeremy Fitzhardinge @ 2010-02-20  0:22 UTC
  To: Ritu kaur; +Cc: xen-devel@lists.xensource.com, Ian Campbell

On 02/19/2010 02:30 PM, Ritu kaur wrote:
> Thanks for the clarification. In our team meeting we decided to drop 
> netback changes to support exclusive access and go with xe command 
> line or xencenter way to do it(We are using Citrix Xenserver). Had 
> couple of follow-up questions related to Xen.
>
> 1.Is it correct that netfront driver(or any *front driver) has to be 
> explicitly integrated or compiled in the guest OS? the reason I ask 
> this is,

An HVM domain can be completely unmodified, but it will be using 
emulated hardware devices with its normal drivers.

> a. In the documents I have read, it mentions guest OS can run without 
> any modification, however, if above is true we have to make sure guest 
> OS we use are compiled with the relevant *front drivers.

An HVM domain can use PV drivers to optimise its IO path by bypassing 
the emulated devices and talking directly to the backends.  PV domains 
always use PV drivers (but they've already been modified).

> b. we had done some changes to netback and netfront(as mentioned in 
> the previous email), when compiling kernel for dom0 it includes both 
> netfront and netback and assumed via some mechanism this netfront 
> driver would be integrated/installed into guest domains when they are 
> installed.

No.  A dom0 kernel doesn't have much use for frontends.  They're usually 
present because a given kernel can run in either the dom0 or domU roles.

> 2. Any front or back driver communication is via xenbus only?

Xenbus is used to pass small amounts of control/status/config 
information between front and backends.  Bulk data transfer is usually 
handled with shared pages containing ring buffers, and event channels 
for event signalling.
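
To make that concrete, the usual pattern for a split driver pair is to
declare its ring with the generic macros from xen/interface/io/ring.h --
this is only an illustration, and the request/response layouts below are
invented rather than anything specific to your hardware:

    #include <linux/mm.h>
    #include <xen/interface/io/ring.h>

    struct mydev_request {
            uint32_t id;       /* echoed back in the response */
            uint32_t op;
            uint64_t data;
    };

    struct mydev_response {
            uint32_t id;
            int32_t  status;
            uint64_t data;
    };

    /* Generates mydev_sring plus mydev_front_ring/mydev_back_ring. */
    DEFINE_RING_TYPES(mydev, struct mydev_request, struct mydev_response);

    static struct mydev_front_ring front;

    static int mydev_init_ring(void)
    {
            struct mydev_sring *sring =
                    (struct mydev_sring *)get_zeroed_page(GFP_KERNEL);

            if (sring == NULL)
                    return -ENOMEM;

            SHARED_RING_INIT(sring);
            FRONT_RING_INIT(&front, sring, PAGE_SIZE);

            /* The ring page is the shared memory; the grant reference
             * and event-channel port are the small pieces of config the
             * frontend advertises to the backend via xenstore, just as
             * netfront does. */
            return 0;
    }

The frontend puts requests on the ring and notifies the backend over the
event channel; the backend writes responses back the same way.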

> 3. Supporting ioctl calls. Our driver has ioctl support to read/write 
> hardware registers and one solution was to use pci passthrough 
> mechanism, however, it binds the NIC to a specific domU and we do not 
> want that. We would like to have multiple users access to hw 
> registers(mainly stats and other stuff) via guest domains and be able 
> to access them simultaneously. For this, we decided to go with the 
> mechanism of shared memory/event channel similar to front and back 
> drivers.  Can you please provide some inputs on this?

It's hard to make any suggestions without knowing what your hardware is 
or what the use-cases are for these ioctls.  Are you saying that you 
want to give multiple domUs direct unrestricted (read only?) access to 
the same set of registers?  What kind of stats?  Do guests need to read 
them at a very high rate, or could they fetch accumulated results at a 
lower rate?

     J


* Re: modifying drivers
From: Ritu kaur @ 2010-02-20  2:15 UTC
  To: Jeremy Fitzhardinge; +Cc: xen-devel@lists.xensource.com, Ian Campbell


Hi Jeremy,

Thanks for the clarification; however, here is what I don't understand (I
have read the documents and looked into the driver code). Both netfront and
netback register with xenbus and watch the "vif" device type. From the
netback point of view I clearly understand the communication and the rest,
as I see vif<domid>.<intf-id> being created in dom0. However, when I look
inside the domU I do not see any vif interface (I looked with ifconfig and
ifconfig -a); is it hidden from the user? In the domU I just see "eth*"
interfaces. How do the eth* interfaces interact with netfront? I looked
under lib/modules/linux*/... for any pseudo drivers that might interact
with eth*, but didn't get any answers. I am completely confused. By the
way, I am using Debian etch 4.0 as the domU.

Jeremy/Ian, do you have any input on ioctl support?

Thanks


On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <jeremy@goop.org>wrote:

> On 02/19/2010 02:30 PM, Ritu kaur wrote:
>
>> Thanks for the clarification. In our team meeting we decided to drop
>> netback changes to support exclusive access and go with xe command line or
>> xencenter way to do it(We are using Citrix Xenserver). Had couple of
>> follow-up questions related to Xen.
>>
>> 1.Is it correct that netfront driver(or any *front driver) has to be
>> explicitly integrated or compiled in the guest OS? the reason I ask this is,
>>
>
> An HVM domain can be completely unmodified, but it will be using emulated
> hardware devices with its normal drivers.
>
>  a. In the documents I have read, it mentions guest OS can run without any
>> modification, however, if above is true we have to make sure guest OS we use
>> are compiled with the relevant *front drivers.
>>
>
> An HVM domain can use PV drivers to optimise its IO path by bypassing the
> emulated devices and talking directly to the backends.  PV domains always
> use PV drivers (but they've already been modified).
>
>  b. we had done some changes to netback and netfront(as mentioned in the
>> previous email), when compiling kernel for dom0 it includes both netfront
>> and netback and assumed via some mechanism this netfront driver would be
>> integrated/installed into guest domains when they are installed.
>>
>
> No.  A dom0 kernel doesn't have much use for frontends.  They're usually
> present because a given kernel can run in either the dom0 or domU roles.
>
>  2. Any front or back driver communication is via xenbus only?
>>
>
> Xenbus is used to pass small amounts of control/status/config information
> between front and backends.  Bulk data transfer is usually handled with
> shared pages containing ring buffers, and event channels for event
> signalling.
>
>  3. Supporting ioctl calls. Our driver has ioctl support to read/write
>> hardware registers and one solution was to use pci passthrough mechanism,
>> however, it binds the NIC to a specific domU and we do not want that. We
>> would like to have multiple users access to hw registers(mainly stats and
>> other stuff) via guest domains and be able to access them simultaneously.
>> For this, we decided to go with the mechanism of shared memory/event channel
>> similar to front and back drivers.  Can you please provide some inputs on
>> this?
>>
>
> It's hard to make any suggestions without knowing what your hardware is or
> what the use-cases are for these ioctls.  Are you saying that you want to
> give multiple domUs direct unrestricted (read only?) access to the same set
> of registers?  What kind of stats?  Do guests need to read them at a very
> high rate, or could they fetch accumulated results at a lower rate?
>
>    J
>


* Re: modifying drivers
From: Ritu kaur @ 2010-02-20  3:03 UTC
  To: Jeremy Fitzhardinge; +Cc: xen-devel@lists.xensource.com, Ian Campbell


Hi Jeremy,

Sorry, I missed your questions about the ioctls.

1. The stats are the normal tx/rx/error counters for the NICs. The sanity
tests make sure the driver is sane by reading and comparing stats. In
addition, users can read/write specific registers to help with debugging
(one case is making sure the correct version of the FPGA code is loaded;
users in a domU can also simply verify the driver is working by reading the
stats registers individually).

2. Multiple users in domUs need access to register reads/writes.

The idea is this:

1. An ioctl call is issued by an application (the sanity test or the
individual reg_read/reg_write executables).
2. The ioctl is intercepted by the frontend driver or some module in the
kernel (not sure how to do this yet).
3. The frontend and backend drivers (or kernel modules) communicate via
shared memory.
4. The backend driver or kernel module forwards the ioctl to the actual
driver (not sure how to do this either).
5. For a register read, the data comes back actual driver -> backend ->
frontend.
6. For a register write, the data goes frontend -> backend -> actual driver.
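
Roughly, the request/response I have in mind for the shared ring would look
like this (the names are just placeholders for illustration):

    #define MYNIC_CTRL_REG_READ   1
    #define MYNIC_CTRL_REG_WRITE  2

    struct mynic_ctrl_request {
            uint32_t id;      /* copied into the matching response */
            uint32_t op;      /* MYNIC_CTRL_REG_READ or MYNIC_CTRL_REG_WRITE */
            uint32_t reg;     /* register offset */
            uint32_t value;   /* value to write (ignored for reads) */
    };

    struct mynic_ctrl_response {
            uint32_t id;
            int32_t  status;
            uint32_t value;   /* register contents returned for reads */
    };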

Thanks


On Fri, Feb 19, 2010 at 6:15 PM, Ritu kaur <ritu.kaur.us@gmail.com> wrote:

> Hi Jeremy,
>
> Thanks for clarification, however, what I dont understand is this(I have
> read documents and looked into driver code). Both netfront and netback
> registers with xenbus and monitors "vif" interface. From netback point of
> view I clearly understand its communication and other stuff as I see
> vif<domid>:<intf-id> being created in dom0. However, when I look into domU,
> I do not see any vif interface created(looked with ifconfig, ifconfig -a
> commands) is it hidden from the user? In domU, I just see "eth*" interfaces
> created. how does eth* interfaces interact with netfront? I looked under
> lib/modules/linux*/... for any pseudo drivers which might interact with
> eth*, didn't get any answers. I am completely confused. By the way I am
> using Debian etch 4.0 as a domU.
>
> Jeremy/Ian, have any inputs on ioctl support?
>
> Thanks
>
>
>
> On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <jeremy@goop.org>wrote:
>
>> On 02/19/2010 02:30 PM, Ritu kaur wrote:
>>
>>> Thanks for the clarification. In our team meeting we decided to drop
>>> netback changes to support exclusive access and go with xe command line or
>>> xencenter way to do it(We are using Citrix Xenserver). Had couple of
>>> follow-up questions related to Xen.
>>>
>>> 1.Is it correct that netfront driver(or any *front driver) has to be
>>> explicitly integrated or compiled in the guest OS? the reason I ask this is,
>>>
>>
>> An HVM domain can be completely unmodified, but it will be using emulated
>> hardware devices with its normal drivers.
>>
>>  a. In the documents I have read, it mentions guest OS can run without any
>>> modification, however, if above is true we have to make sure guest OS we use
>>> are compiled with the relevant *front drivers.
>>>
>>
>> An HVM domain can use PV drivers to optimise its IO path by bypassing the
>> emulated devices and talking directly to the backends.  PV domains always
>> use PV drivers (but they've already been modified).
>>
>>  b. we had done some changes to netback and netfront(as mentioned in the
>>> previous email), when compiling kernel for dom0 it includes both netfront
>>> and netback and assumed via some mechanism this netfront driver would be
>>> integrated/installed into guest domains when they are installed.
>>>
>>
>> No.  A dom0 kernel doesn't have much use for frontends.  They're usually
>> present because a given kernel can run in either the dom0 or domU roles.
>>
>>  2. Any front or back driver communication is via xenbus only?
>>>
>>
>> Xenbus is used to pass small amounts of control/status/config information
>> between front and backends.  Bulk data transfer is usually handled with
>> shared pages containing ring buffers, and event channels for event
>> signalling.
>>
>>  3. Supporting ioctl calls. Our driver has ioctl support to read/write
>>> hardware registers and one solution was to use pci passthrough mechanism,
>>> however, it binds the NIC to a specific domU and we do not want that. We
>>> would like to have multiple users access to hw registers(mainly stats and
>>> other stuff) via guest domains and be able to access them simultaneously.
>>> For this, we decided to go with the mechanism of shared memory/event channel
>>> similar to front and back drivers.  Can you please provide some inputs on
>>> this?
>>>
>>
>> It's hard to make any suggestions without knowing what your hardware is or
>> what the use-cases are for these ioctls.  Are you saying that you want to
>> give multiple domUs direct unrestricted (read only?) access to the same set
>> of registers?  What kind of stats?  Do guests need to read them at a very
>> high rate, or could they fetch accumulated results at a lower rate?
>>
>>    J
>>
>
>


* Re: modifying drivers
From: Daniel Stodden @ 2010-02-20  7:29 UTC
  To: Ritu kaur
  Cc: Ian Campbell, Jeremy Fitzhardinge, xen-devel@lists.xensource.com

On Fri, 2010-02-19 at 21:15 -0500, Ritu kaur wrote:
> Hi Jeremy, 
> 
> Thanks for clarification, however, what I dont understand is this(I
> have read documents and looked into driver code). Both netfront and
> netback registers with xenbus and monitors "vif" interface. >From
> netback point of view I clearly understand its communication and other
> stuff as I see vif<domid>:<intf-id> being created in dom0. However,
> when I look into domU, I do not see any vif interface created(looked
> with ifconfig, ifconfig -a commands) is it hidden from the user?
>  In domU, I just see "eth*" interfaces created. how does eth*
> interfaces interact with netfront?

These *are* the netfront devices. No need to look further.

The "vif" you see in netfront is the _xenbus_ name. It's a building
block of the driver, but it means nothing to the kernel network layer.

>  I looked under lib/modules/linux*/... for any pseudo drivers which
> might interact with eth*, didn't get any answers. I am completely
> confused. By the way I am using Debian etch 4.0 as a domU. 

Network interfaces can have pretty arbitrary names, whether virtual or
not. I guess "eth<n>" in a domU is mainly chosen because it gives users and
tools a warm fuzzy feeling. In the domU, it makes everything look a little
more like a native system would.

As a rule of thumb, ethX is what connects each domain to its network
environment, whether that is primarily a virtual one (in a domU, via
netfront) or a physical one (dom0, driving your physical NIC).

The vifs in dom0 are network interfaces. Each is a netback instance.
Each could carry a separate IP, but that's normally not done. They are
rather used as the ports of a virtual switch. Essentially the local end
of a point-to-point link. One vif for each interface on each guest.

You should see one or more xenbrX devices. These are basically software
switches: each connects all guests on a common virtual network, and each
xenbr<n> also connects to eth<n> as the uplink.

Try 'brctl show', you should see how all these interfaces are connected.

Daniel

> Jeremy/Ian, have any inputs on ioctl support?
> 
> Thanks
> 
> 
> On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <jeremy@goop.org>
> wrote:
>         On 02/19/2010 02:30 PM, Ritu kaur wrote:
>         
>                 Thanks for the clarification. In our team meeting we
>                 decided to drop netback changes to support exclusive
>                 access and go with xe command line or xencenter way to
>                 do it(We are using Citrix Xenserver). Had couple of
>                 follow-up questions related to Xen.
>                 
>                 1.Is it correct that netfront driver(or any *front
>                 driver) has to be explicitly integrated or compiled in
>                 the guest OS? the reason I ask this is,
>         
>         
>         An HVM domain can be completely unmodified, but it will be
>         using emulated hardware devices with its normal drivers.
>         
>         
>                 a. In the documents I have read, it mentions guest OS
>                 can run without any modification, however, if above is
>                 true we have to make sure guest OS we use are compiled
>                 with the relevant *front drivers.
>         
>         
>         An HVM domain can use PV drivers to optimise its IO path by
>         bypassing the emulated devices and talking directly to the
>         backends.  PV domains always use PV drivers (but they've
>         already been modified).
>         
>         
>                 b. we had done some changes to netback and netfront(as
>                 mentioned in the previous email), when compiling
>                 kernel for dom0 it includes both netfront and netback
>                 and assumed via some mechanism this netfront driver
>                 would be integrated/installed into guest domains when
>                 they are installed.
>         
>         
>         No.  A dom0 kernel doesn't have much use for frontends.
>          They're usually present because a given kernel can run in
>         either the dom0 or domU roles.
>         
>         
>                 2. Any front or back driver communication is via
>                 xenbus only?
>         
>         
>         Xenbus is used to pass small amounts of control/status/config
>         information between front and backends.  Bulk data transfer is
>         usually handled with shared pages containing ring buffers, and
>         event channels for event signalling.
>         
>         
>                 3. Supporting ioctl calls. Our driver has ioctl
>                 support to read/write hardware registers and one
>                 solution was to use pci passthrough mechanism,
>                 however, it binds the NIC to a specific domU and we do
>                 not want that. We would like to have multiple users
>                 access to hw registers(mainly stats and other stuff)
>                 via guest domains and be able to access them
>                 simultaneously. For this, we decided to go with the
>                 mechanism of shared memory/event channel similar to
>                 front and back drivers.  Can you please provide some
>                 inputs on this?
>         
>         
>         
>         It's hard to make any suggestions without knowing what your
>         hardware is or what the use-cases are for these ioctls.  Are
>         you saying that you want to give multiple domUs direct
>         unrestricted (read only?) access to the same set of
>         registers?  What kind of stats?  Do guests need to read them
>         at a very high rate, or could they fetch accumulated results
>         at a lower rate?
>         
>            J
>         
> 


* Re: modifying drivers
From: Pasi Kärkkäinen @ 2010-02-20 11:58 UTC
  To: Ritu kaur
  Cc: Jeremy Fitzhardinge, xen-devel@lists.xensource.com, Ian Campbell

On Fri, Feb 19, 2010 at 06:15:56PM -0800, Ritu kaur wrote:
>    Hi Jeremy,
> 
>    Thanks for clarification, however, what I dont understand is this(I have
>    read documents and looked into driver code). Both netfront and netback
>    registers with xenbus and monitors "vif" interface. >From netback point of
>    view I clearly understand its communication and other stuff as I see
>    vif<domid>:<intf-id> being created in dom0. However, when I look into
>    domU, I do not see any vif interface created(looked with ifconfig,
>    ifconfig -a commands) is it hidden from the user? In domU, I just see
>    "eth*" interfaces created. how does eth* interfaces interact with
>    netfront? I looked under lib/modules/linux*/... for any pseudo drivers
>    which might interact with eth*, didn't get any answers. I am completely
>    confused. By the way I am using Debian etch 4.0 as a domU.
>

In a PV guest, xennet (frontend) driver registers the network interface as ethX.
eth0 in the PV guest corresponds to vifX.0 backend in dom0. 
eth1 in the PV guest corresponds to vifX.1 backend in dom0.
(X is the domain id of the guest).

Hopefully that clears it up.

-- Pasi
 
>    Jeremy/Ian, have any inputs on ioctl support?
> 
>    Thanks
> 
>    On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <[1]jeremy@goop.org>
>    wrote:
> 
>      On 02/19/2010 02:30 PM, Ritu kaur wrote:
> 
>        Thanks for the clarification. In our team meeting we decided to drop
>        netback changes to support exclusive access and go with xe command
>        line or xencenter way to do it(We are using Citrix Xenserver). Had
>        couple of follow-up questions related to Xen.
> 
>        1.Is it correct that netfront driver(or any *front driver) has to be
>        explicitly integrated or compiled in the guest OS? the reason I ask
>        this is,
> 
>      An HVM domain can be completely unmodified, but it will be using
>      emulated hardware devices with its normal drivers.
> 
>        a. In the documents I have read, it mentions guest OS can run without
>        any modification, however, if above is true we have to make sure guest
>        OS we use are compiled with the relevant *front drivers.
> 
>      An HVM domain can use PV drivers to optimise its IO path by bypassing
>      the emulated devices and talking directly to the backends.  PV domains
>      always use PV drivers (but they've already been modified).
> 
>        b. we had done some changes to netback and netfront(as mentioned in
>        the previous email), when compiling kernel for dom0 it includes both
>        netfront and netback and assumed via some mechanism this netfront
>        driver would be integrated/installed into guest domains when they are
>        installed.
> 
>      No.  A dom0 kernel doesn't have much use for frontends.  They're usually
>      present because a given kernel can run in either the dom0 or domU roles.
> 
>        2. Any front or back driver communication is via xenbus only?
> 
>      Xenbus is used to pass small amounts of control/status/config
>      information between front and backends.  Bulk data transfer is usually
>      handled with shared pages containing ring buffers, and event channels
>      for event signalling.
> 
>        3. Supporting ioctl calls. Our driver has ioctl support to read/write
>        hardware registers and one solution was to use pci passthrough
>        mechanism, however, it binds the NIC to a specific domU and we do not
>        want that. We would like to have multiple users access to hw
>        registers(mainly stats and other stuff) via guest domains and be able
>        to access them simultaneously. For this, we decided to go with the
>        mechanism of shared memory/event channel similar to front and back
>        drivers.  Can you please provide some inputs on this?
> 
>      It's hard to make any suggestions without knowing what your hardware is
>      or what the use-cases are for these ioctls.  Are you saying that you
>      want to give multiple domUs direct unrestricted (read only?) access to
>      the same set of registers?  What kind of stats?  Do guests need to read
>      them at a very high rate, or could they fetch accumulated results at a
>      lower rate?
> 
>         J
> 

* Re: modifying drivers
From: Ritu kaur @ 2010-02-20 23:10 UTC
  To: Pasi Kärkkäinen, Daniel Stodden
  Cc: Jeremy Fitzhardinge, xen-devel@lists.xensource.com, Ian Campbell


Thanks, Daniel and Pasi. However, I have some follow-up questions.

1. Daniel, I looked at the Debian etch 4.0 source code available from the
Citrix website but couldn't find the netfront-specific code; could you
please point me to the right place?

2. Netfront in the domU actually implements the "ethX" interface: when ethX
is used to transmit (via ping or other network protocols), it invokes
network_start_xmit, which uses the shared memory rings to communicate with
the backend and netif_poll to check for responses. struct netfront_info has
a field struct net_device *netdev, and the entry points are hooked up to it:

    netdev->open               = network_open;
    netdev->hard_start_xmit    = network_start_xmit;
    netdev->stop               = network_close;
    netdev->get_stats          = network_get_stats;
    netdev->poll               = netif_poll;
    netdev->set_multicast_list = network_set_multicast_list;
    netdev->uninit             = netif_uninit;
    ...

Since netback has similar code, I will not go into it.

I looked at struct net_device; it has a function pointer called "do_ioctl",
and I wanted to know whether it can be used for our ioctl needs, i.e.

1. set netdev->do_ioctl = network_ioctl
2. the application invokes ioctl() on eth2 (the interface associated with
   our device in the backend)
3. network_ioctl is called and forwards the request via shared memory,
   similar to xmit and recv.

Is this feasible? A rough sketch of what I have in mind is below.
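
(Purely an illustrative sketch of the flow I am thinking of -- the
SIOCREADREG number, the reg_req layout and the xennet_forward_reg_request
helper below are made-up names, not existing netfront code:)

    #include <linux/netdevice.h>
    #include <linux/sockios.h>
    #include <linux/if.h>
    #include <linux/uaccess.h>

    /* Hypothetical private ioctl number and request layout. */
    #define SIOCREADREG (SIOCDEVPRIVATE + 0)

    struct reg_req {
        u32 offset;     /* register offset to read */
        u32 value;      /* filled in from the backend's response */
    };

    static int network_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
    {
        struct reg_req req;

        if (cmd != SIOCREADREG)
            return -EOPNOTSUPP;

        if (copy_from_user(&req, ifr->ifr_data, sizeof(req)))
            return -EFAULT;

        /*
         * Hypothetical helper: put the request on a shared ring, notify the
         * backend over the event channel and wait for the response, much as
         * network_start_xmit()/netif_poll() do for packets.
         */
        if (xennet_forward_reg_request(dev, &req))
            return -EIO;

        if (copy_to_user(ifr->ifr_data, &req, sizeof(req)))
            return -EFAULT;

        return 0;
    }

and next to the other entry points:

    netdev->do_ioctl = network_ioctl;

Userspace would then reach it with ioctl(sockfd, SIOCREADREG, &ifr), where
ifr.ifr_name is "eth2" and ifr.ifr_data points at a struct reg_req.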

Thanks


On Sat, Feb 20, 2010 at 3:58 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Fri, Feb 19, 2010 at 06:15:56PM -0800, Ritu kaur wrote:
> >    Hi Jeremy,
> >
> >    Thanks for clarification, however, what I dont understand is this(I
> have
> >    read documents and looked into driver code). Both netfront and netback
> >    registers with xenbus and monitors "vif" interface. >From netback
> point of
> >    view I clearly understand its communication and other stuff as I see
> >    vif<domid>:<intf-id> being created in dom0. However, when I look into
> >    domU, I do not see any vif interface created(looked with ifconfig,
> >    ifconfig -a commands) is it hidden from the user? In domU, I just see
> >    "eth*" interfaces created. how does eth* interfaces interact with
> >    netfront? I looked under lib/modules/linux*/... for any pseudo drivers
> >    which might interact with eth*, didn't get any answers. I am
> completely
> >    confused. By the way I am using Debian etch 4.0 as a domU.
> >
>
> In a PV guest, xennet (frontend) driver registers the network interface as
> ethX.
> eth0 in the PV guest corresponds to vifX.0 backend in dom0.
> eth1 in the PV guest corresponds to vifX.1 backend in dom0.
> (X is the domain id of the guest).
>
> Hopefully that clears it up.
>
> -- Pasi
>
> >    Jeremy/Ian, have any inputs on ioctl support?
> >
> >    Thanks
> >
> >    On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <[1]
> jeremy@goop.org>
> >    wrote:
> >
> >      On 02/19/2010 02:30 PM, Ritu kaur wrote:
> >
> >        Thanks for the clarification. In our team meeting we decided to
> drop
> >        netback changes to support exclusive access and go with xe command
> >        line or xencenter way to do it(We are using Citrix Xenserver). Had
> >        couple of follow-up questions related to Xen.
> >
> >        1.Is it correct that netfront driver(or any *front driver) has to
> be
> >        explicitly integrated or compiled in the guest OS? the reason I
> ask
> >        this is,
> >
> >      An HVM domain can be completely unmodified, but it will be using
> >      emulated hardware devices with its normal drivers.
> >
> >        a. In the documents I have read, it mentions guest OS can run
> without
> >        any modification, however, if above is true we have to make sure
> guest
> >        OS we use are compiled with the relevant *front drivers.
> >
> >      An HVM domain can use PV drivers to optimise its IO path by
> bypassing
> >      the emulated devices and talking directly to the backends.  PV
> domains
> >      always use PV drivers (but they've already been modified).
> >
> >        b. we had done some changes to netback and netfront(as mentioned
> in
> >        the previous email), when compiling kernel for dom0 it includes
> both
> >        netfront and netback and assumed via some mechanism this netfront
> >        driver would be integrated/installed into guest domains when they
> are
> >        installed.
> >
> >      No.  A dom0 kernel doesn't have much use for frontends.  They're
> usually
> >      present because a given kernel can run in either the dom0 or domU
> roles.
> >
> >        2. Any front or back driver communication is via xenbus only?
> >
> >      Xenbus is used to pass small amounts of control/status/config
> >      information between front and backends.  Bulk data transfer is
> usually
> >      handled with shared pages containing ring buffers, and event
> channels
> >      for event signalling.
> >
> >        3. Supporting ioctl calls. Our driver has ioctl support to
> read/write
> >        hardware registers and one solution was to use pci passthrough
> >        mechanism, however, it binds the NIC to a specific domU and we do
> not
> >        want that. We would like to have multiple users access to hw
> >        registers(mainly stats and other stuff) via guest domains and be
> able
> >        to access them simultaneously. For this, we decided to go with the
> >        mechanism of shared memory/event channel similar to front and back
> >        drivers.  Can you please provide some inputs on this?
> >
> >      It's hard to make any suggestions without knowing what your hardware
> is
> >      or what the use-cases are for these ioctls.  Are you saying that you
> >      want to give multiple domUs direct unrestricted (read only?) access
> to
> >      the same set of registers?  What kind of stats?  Do guests need to
> read
> >      them at a very high rate, or could they fetch accumulated results at
> a
> >      lower rate?
> >
> >         J
> >
> > References
> >
> >    Visible links
> >    1. mailto:jeremy@goop.org
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xensource.com
> > http://lists.xensource.com/xen-devel
>
>

[-- Attachment #1.2: Type: text/html, Size: 7416 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Problems with Xen 4.0-rc4 forcedeth driver / Regression?
  2010-02-20  3:03                 ` Ritu kaur
@ 2010-02-21 13:03                   ` Carsten Schiers
  0 siblings, 0 replies; 14+ messages in thread
From: Carsten Schiers @ 2010-02-21 13:03 UTC (permalink / raw)
  To: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 747 bytes --]

Dear all,

just did a very quick test. Xen 4.0-rc4 on amd64, all-PV DomU setup. One of
my DomUs uses the onboard forcedeth NIC through PCI passthrough.

The NIC is no longer usable out of the box: it complains that the MAC is
wrong and sets up a random MAC instead.

This issue was also very common in non-Xen environments some time ago. One
workaround was to set a fixed MAC with a udev rule matching something like
DRIVER=forcedeth.
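
(For reference, such a rule looks roughly like the following -- untested, and
the rules-file name and MAC address are only placeholders:)

    # e.g. /etc/udev/rules.d/70-forcedeth-mac.rules
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="forcedeth", RUN+="/sbin/ip link set dev %k address 00:16:17:aa:bb:cc"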

 

But: it works with Xen 3.4.1 out of the box, and with Xen 4.0-rc4 it does
not. The Dom0/DomU kernels have not changed; they are still at 2.6.18.8, as
at the time of 3.4.1.

Does this already provide enough information to know what's going wrong?

BR,

Carsten.



[-- Attachment #1.2: Type: text/html, Size: 4869 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2010-02-21 13:03 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-02-18 16:27 modifying drivers Ritu kaur
2010-02-18 16:39 ` Ian Campbell
2010-02-19  0:03   ` Ritu kaur
2010-02-19  9:07     ` Ian Campbell
2010-02-19 17:12       ` Ritu kaur
2010-02-19 17:24         ` Ian Campbell
2010-02-19 22:30           ` Ritu kaur
2010-02-20  0:22             ` Jeremy Fitzhardinge
2010-02-20  2:15               ` Ritu kaur
2010-02-20  3:03                 ` Ritu kaur
2010-02-21 13:03                   ` Problems with Xen 4.0-rc4 forcedeth driver / Regression? Carsten Schiers
2010-02-20  7:29                 ` modifying drivers Daniel Stodden
2010-02-20 11:58                 ` Pasi Kärkkäinen
2010-02-20 23:10                   ` Ritu kaur

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).