From: Dor Laor
Date: Fri, 09 Jan 2009 01:14:33 +0200
Subject: Re: [Qemu-devel] [PATCH] mark nic as trusted
To: qemu-devel@nongnu.org
Reply-To: dlaor@redhat.com, qemu-devel@nongnu.org
Message-ID: <496688D9.1040708@redhat.com>
In-Reply-To: <20090108224942.GA12848@shareable.org>

Jamie Lokier wrote:
> Anthony Liguori wrote:
>> Are we going to have a standard way of doing this in Linux distros such
>> that these nics are treated differently from other nics?  Have we gotten
>> the appropriate distro folks to agree to this?
>
> That wouldn't work for older distros and Windows anyway.  But you
> might reasonably want to run apps doing guest-host communication on
> older guest distros too, simply as an app, not requiring guest
> customisation.

We can make Fedora, RHEL and libvirt support it. It might be a bit painful,
but since a network device was chosen for this purpose, that's the right way
to go. Others can either use third-party agents to disable firewalls, as we
plan to do for Windows, or, as was suggested, change the PCI device id and
load a slightly different driver.

Your suggestion is also good, since it works for personal use cases where the
guest can easily reach the host network and the user can easily disable
firewalls.

> Is there some way to mark a PCI device so it will be ignored at boot
> time generically?  Changing the PCI ID will do that for all guests,
> but is it then feasible for the vmchannel guest admin software to bind
> a NIC driver to a non-standard PCI ID, on the major OSes?
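On Linux at least, rebinding an existing NIC driver to a non-standard PCI ID
can be done at runtime through the driver's new_id attribute in sysfs. A
minimal sketch follows; the driver name and the vendor/device IDs are made-up
placeholders, not anything defined by this patch:

    # Hypothetical sketch: make an in-guest Linux NIC driver claim the
    # non-standard PCI ID chosen for the vmchannel NIC.
    # DRIVER, VENDOR and DEVICE are placeholders, not real assignments.
    DRIVER = "e1000"
    VENDOR, DEVICE = "1af4", "10f0"

    path = "/sys/bus/pci/drivers/%s/new_id" % DRIVER
    f = open(path, "w")
    f.write("%s %s\n" % (VENDOR, DEVICE))  # driver now probes matching devices
    f.close()

Whether the equivalent is feasible on Windows and other OSes is exactly the
open question above.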
Alternatively, one can hot-plug the vmchannel NIC right after boot. IMHO I'd
rather stick with a guest management agent taking care of access to the NIC.

> Suppose you start a guest with two "trusted" nics, because you want to
> run two unrelated vmchannel-using admin apps.  How does each app know
> which nic to use - or do they share it?

Each vmchannel is bound on the host to a separate qemu_chr_device paired with
its own listening ip:port. So if there are n agents in the guest, each
connects to its own ip:port without being aware of the others (see the small
sketch at the end of this mail).

> As the guest OS's TCP is being used, what do you do about IP address
> space conflicts?
>
> I.e. if NIC #1 is the guest's LAN, and NIC #2 is the vmchannel, how is
> the vmchannel NIC going to be configured in a way that's guaranteed to
> avoid breaking the LAN networking, which could be assigned any legal
> subnet (especially when bridging is used), and on some networks
> changes from time to time?
>
> Perhaps vmchannel will only use IPv6, so it can confidently pick a
> unique link-local address?

We plan to pick link-local subnets for IPv4. That solves all the questions
above. The channel should be connected through slirp without passing through
the host stack/bridges (although it can be opened up too).

> -- Jamie

W.r.t. the option of using a virtio NIC: the advantage of using any other NIC
model is that there is no requirement to install a virtio driver on Windows
or on older Linux/other OSes.
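For completeness, here is a minimal guest-agent sketch of the model described
above, assuming each agent is simply handed its own link-local ip:port. The
address, port and payload below are made-up examples, not part of the patch:

    #!/usr/bin/env python
    # Hypothetical guest-agent sketch: connect to the ip:port assigned to
    # this agent's vmchannel (the link-local address and port below are
    # placeholders) and exchange a message with the host side.
    import socket

    VMCHANNEL_ADDR = ("169.254.2.1", 6000)  # placeholder ip:port

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(VMCHANNEL_ADDR)        # host pairs this port with a chardev
    s.sendall(b"hello from guest agent\n")
    reply = s.recv(4096)
    s.close()

Each agent only knows its own address and port, so n agents can coexist
without coordinating with one another.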