From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1NJcFn-0006Lz-QN for qemu-devel@nongnu.org; Sat, 12 Dec 2009 19:24:03 -0500
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1NJcFi-0006KH-5J for qemu-devel@nongnu.org; Sat, 12 Dec 2009 19:24:02 -0500
Received: from [199.232.76.173] (port=38977 helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43) id 1NJcFi-0006KB-29
	for qemu-devel@nongnu.org; Sat, 12 Dec 2009 19:23:58 -0500
Received: from mx1.redhat.com ([209.132.183.28]:65447)
	by monty-python.gnu.org with esmtp (Exim 4.60) (envelope-from )
	id 1NJcFh-0007LY-I4 for qemu-devel@nongnu.org; Sat, 12 Dec 2009 19:23:57 -0500
Date: Sun, 13 Dec 2009 00:23:52 +0000
From: "Daniel P. Berrange"
Subject: Re: [Qemu-devel] Re: Spice project is now open
Message-ID: <20091213002352.GA31569@redhat.com>
References: <20091211233158.22e6681f@redhat.com>
	<4B22C093.2090806@codemonkey.ws> <4B231182.1080208@codemonkey.ws>
	<20091212144433.GA26966@random.random> <4B23B0BE.7080408@codemonkey.ws>
	<20091212160626.GB26966@random.random> <4B23D585.70400@codemonkey.ws>
	<4B241A99.2000704@redhat.com> <4B242B40.4050409@codemonkey.ws>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4B242B40.4050409@codemonkey.ws>
Reply-To: "Daniel P. Berrange"
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: Andrea Arcangeli , Paolo Bonzini , dlaor@redhat.com, qemu-devel@nongnu.org

On Sat, Dec 12, 2009 at 05:46:08PM -0600, Anthony Liguori wrote:
> Dor Laor wrote:
> >On 12/12/2009 07:40 PM, Anthony Liguori wrote:
> >>If Spice can crash a guest, that indicates to me that Spice is
> >>maintaining guest visible state.
> >>That is difficult architecturally,
> >>because if we want to do something like introduce a secure sandbox for
> >>running guest visible emulation, libspice would have to be part of that
> >>sandbox, which would seem to be difficult.
> >>
> >>The VNC server cannot crash a guest, by comparison.
> >
> >That's not accurate:
>
> Cannot crash the *guest*. It can crash qemu, but that's not guest
> visible. IOW, the guest never interacts directly with the VNC server.
> The difference matters when it comes to security sandboxing and live
> migration.
>
> >If we break spice into components we have the following (and I'm not
> >a spice expert):
> >1. QXL device/driver pair
> > Is anyone debating whether we should have it in qemu?
> > We should attach it to the SDL and VNC backends too anyway.
> >2. VDI (Virtual Desktop Interface)
> >   http://www.spice-space.org/vdi.html
>
> FYI, www.spice-space.org is not responding for me.

There is a planned outage for a physical relocation of the server that
hosts spice-space.org, virt-manager.org, ovirt.org and a lot of other
sites. It should be back online before Monday if all has gone to plan.

> Where #3 lives is purely a function of what level of integration it
> needs with qemu. There may be advantages to having it external to
> qemu. I actually think we should move the VNC server out of qemu...
>
> Dan Berrange and I have been talking about being able to move the VNC
> server into a central process such that all of the VMs can share a
> single VNC port that can be connected to. This greatly simplifies the
> firewalling logic that an administrator has to deal with. That's a
> problem I've already had to deal with for our management tools. We use
> a private network for management and we bridge the VNC traffic into
> the customer's network so they can see the VGA session.
> But since that traffic can be a large range of ports and we have to
> tunnel the traffic through a central server to get into the customer
> network, it's very difficult to set up without opening up a mess of
> ports. I think we're currently opening a few thousand just for VNC.

Actually my plan was to have a VNC proxy server that sat between the
end user and the real VNC server in QEMU. Specifically I wanted to
allow for a model where the VNC server that end users connected to for
console access was on a physically separate host from the VMs. I had a
handful of use cases, mostly to deal with an oVirt deployment where
console users could be coming from the internet, rather than an
intranet:

 - Avoiding the need to open up many ports on firewalls

 - Allowing on-the-fly switching between any VMs the currently
   authenticated user was authorized to view, without opening more
   connections (avoids needing to re-authenticate for each VM)

 - Avoiding the need to expose virtualization hosts to console users,
   since console users may be coming in from an untrusted network, or
   even the internet itself

 - Allowing seamless migration, where the proxy server simply
   re-connects to the VM on the new host without the end user's VNC
   connection ever noticing

> For VNC, to make this efficient we just need a shared memory transport
> that we can use locally. I doubt the added latency will matter as long
> as we're not copying data.

That would preclude running it as an off-node service, but since
latency is important that's probably inevitable. In any case there'd
be nothing to stop someone adding an off-node proxy in front of that
anyway, should requirements truly require it. Just getting away from
the one-TCP-port-per-VM model is a worthwhile use case all of its own.

> Of course, Spice is a different thing altogether. I have no idea
> whether it makes sense for Spice like it would for VNC. But I'd like to
> understand if the option is available.
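To make the single-port proxy model concrete, here's a rough sketch of
the routing core: one listening port for all guest consoles, with each
incoming connection relayed to the right per-VM VNC server. The VM
table, host names, port numbers and the one-line "vmname" handshake are
all illustrative assumptions of mine, not any real QEMU or oVirt
protocol (a real deployment would hang this off the authentication
step instead):

```python
# Hypothetical sketch: one firewall-visible port multiplexed across
# many per-VM VNC servers. Names, ports and the handshake are made up.
import socket
import threading

# Illustrative routing table: VM name -> (host, port) of its real VNC server.
VM_TABLE = {
    "guest1": ("node1.example.com", 5900),
    "guest2": ("node2.example.com", 5901),
}

def resolve_backend(vm_name):
    """Map a VM name to the (host, port) of its backend VNC server."""
    backend = VM_TABLE.get(vm_name)
    if backend is None:
        raise KeyError("unknown VM: %s" % vm_name)
    return backend

def pipe(src, dst):
    """Copy bytes one way until EOF, then shut down the write side."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle_client(client):
    # Stand-in handshake: the client names the VM it wants to view.
    vm_name = client.makefile().readline().strip()
    host, port = resolve_backend(vm_name)
    backend = socket.create_connection((host, port))
    # Relay traffic both ways; the RFB protocol itself flows unchanged.
    threading.Thread(target=pipe, args=(client, backend)).start()
    threading.Thread(target=pipe, args=(backend, client)).start()

def serve(listen_port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))   # the single firewall-visible port
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        handle_client(conn)
```

Switching a user between VMs, or following a migrated VM, is then just
a matter of changing which backend the proxy connects to, with no new
end-user connection needed.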
I believe Spice shares the same needs as VNC in this regard, since
when spawning a VM with Spice, each VM must be given a pair of unique
ports (one running cleartext, one with TLS/SSL).

Regards,
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|