Date: Mon, 12 Jan 2009 02:20:33 +0000
From: Jamie Lokier
Subject: Re: [Qemu-devel] [PATCH] mark nic as trusted
To: dlaor@redhat.com, qemu-devel@nongnu.org
Message-ID: <20090112022033.GB6428@shareable.org>
In-Reply-To: <496A17E4.2070904@redhat.com>

Dor Laor wrote:
> The installer of the guest agent is responsible for punching a hole
> in the firewall.

That's asking a lot from a generic installer.  Guests differ
enormously in how you do that - including different Linux guests.

Something else you have to do is disable forwarding between the
vmchannel NIC and other NICs - even when the other NICs have
forwarding enabled between each other.  How do you do that on Linux?
/proc/sys/net/ipv4/ip_forward is global, not per NIC...  (One
Linux-only way is sketched further down.)  How do you do it on other
guests?

It's easy to imagine a few simple guest agents written in C that
compile easily on any guest unix you might want to run them on...
except this vmchannel setup would be the only non-portable part, and
highly non-portable at that.

> > - Link local addresses for ipv4 are problematic when used on
> >   other nics in parallel.
> > Likewise, the guest could check the address situation beforehand.
> It does check (meaning we need to fully implement the link-local
> RFC).  The problem is that even if we check that no one is using
> this guest's link-local address, another nic can use link-local
> addresses.  So a remote host on the LAN of the other nic might
> choose the same address we are using.

No, that's not enough.  Even when you have globally unique link-local
addresses, you have the problem that NICs configured for link-local
IP always have the same subnet, so routing doesn't work.

You could work around this by using a non-standard link-local IP on
the vmchannel NIC.  Now you're playing more games...

> - We should either 1. not use link-local on other links, 2. use
>   standard dhcp addresses, or 3. not use tcp/ip for vmchannel
>   communication.
>
> So an additional nic can do the job and we have several flavours to
> choose from.  The solution should be generic enough so that any nic
> can be connected to vmchannel.

It sounds "generic" in the sense that you need a custom configuration
which depends on the rest of the guest's configuration.  Not really
"drop in guest vmchannel app and it just works", is it?
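To make the forwarding point concrete: on Linux, the usual way around
the global sysctl is a pair of netfilter rules.  A rough sketch only -
"vmch0" is a made-up name for the vmchannel NIC:

    # Stop the kernel forwarding anything into or out of the
    # vmchannel NIC, leaving forwarding between the other NICs alone.
    iptables -I FORWARD -i vmch0 -j DROP
    iptables -I FORWARD -o vmch0 -j DROP

Which is fine as far as it goes, but it's Linux-specific - every
other guest OS needs its own equivalent, which is exactly the
portability problem.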
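As for "check the address situation beforehand" - the check itself is
the easy part.  Something like this sketch (getifaddrs is available
on Linux and the BSDs, but not everywhere - another portability
wrinkle), which only tells you about the moment it runs:

    /* Sketch: does some interface already have an IPv4 link-local
     * (169.254.0.0/16) address?  True at the time of the call only;
     * a remote host can still pick a colliding address later. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <ifaddrs.h>

    int linklocal_in_use(void)
    {
        struct ifaddrs *ifs, *p;
        int found = 0;

        if (getifaddrs(&ifs) < 0)
            return -1;
        for (p = ifs; p; p = p->ifa_next) {
            if (p->ifa_addr && p->ifa_addr->sa_family == AF_INET) {
                struct sockaddr_in *sin =
                    (struct sockaddr_in *)p->ifa_addr;
                uint32_t a = ntohl(sin->sin_addr.s_addr);
                if ((a & 0xffff0000) == 0xa9fe0000) { /* 169.254/16 */
                    printf("%s has a link-local address\n",
                           p->ifa_name);
                    found = 1;
                }
            }
        }
        freeifaddrs(ifs);
        return found;
    }

And passing that check at install time says nothing about what the
LAN does hours later, which is the real problem: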
If the guest vmchannel app installer looks at other NICs, and picks
an IP subnet that the others aren't using, or uses link-local when
that's not used on the others... that will work most of the time.
But sometimes it will break a working guest some hours after it's
installed.

What happens if the guest's LAN NIC is using DHCP, so the vmchannel
app picks link-local - and then the guest's LAN NIC changes to
link-local itself after some hours running?  That's not uncommon
behaviour nowadays on some networks.

Handling all the cases _reliably_, adapting to network config
_changes_ on the other NICs while running, and doing so across many
guest types (even just Linux distros and Windows) without custom code
for each guest type, is harder than it looks.

On the other hand, using packet sockets and not IP over the vmchannel
NIC (just pick another ethernet type) would work reliably, but
without the convenience of TCP/IP.  It would need more support in the
guest vmchannel app, and guest root access, but both sound plausible
to implement.
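Roughly this sort of thing - a sketch only, with a made-up interface
name and one of the IEEE local-experimental ethertypes, receive setup
shown:

    /* Bind a packet socket to the vmchannel NIC with our own
     * ethertype - no IP configuration on that NIC at all.  Needs
     * root (or CAP_NET_RAW on Linux). */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>         /* htons */
    #include <net/if.h>             /* if_nametoindex */
    #include <linux/if_packet.h>    /* sockaddr_ll */

    #define VMCH_PROTO 0x88B5       /* local-experimental ethertype */

    int vmchannel_open(const char *ifname)  /* e.g. "vmch0" */
    {
        /* SOCK_DGRAM packet sockets strip the ethernet header, so
         * recv() returns just the vmchannel payload. */
        int fd = socket(AF_PACKET, SOCK_DGRAM, htons(VMCH_PROTO));
        struct sockaddr_ll sll;

        if (fd < 0)
            return -1;
        memset(&sll, 0, sizeof sll);
        sll.sll_family   = AF_PACKET;
        sll.sll_protocol = htons(VMCH_PROTO);
        sll.sll_ifindex  = if_nametoindex(ifname);
        if (sll.sll_ifindex == 0 ||
            bind(fd, (struct sockaddr *)&sll, sizeof sll) < 0) {
            close(fd);
            return -1;
        }
        return fd;  /* sees only our ethertype, only this NIC */
    }

The host side just has to agree on the ethertype.  As written that's
Linux-only, but anything with a BPF-style link-level socket can do
the equivalent - and it sidesteps the whole address-allocation mess
above.

-- 
Jamie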