From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1I3vrw-0001dJ-0f for qemu-devel@nongnu.org; Thu, 28 Jun 2007 11:25:16 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1I3vru-0001d7-99 for qemu-devel@nongnu.org; Thu, 28 Jun 2007 11:25:14 -0400
Received: from [199.232.76.173] (helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43)
	id 1I3vru-0001d4-5j for qemu-devel@nongnu.org; Thu, 28 Jun 2007 11:25:14 -0400
Received: from nz-out-0506.google.com ([64.233.162.237])
	by monty-python.gnu.org with esmtp (Exim 4.60) (envelope-from )
	id 1I3vrt-0000Lc-NS for qemu-devel@nongnu.org; Thu, 28 Jun 2007 11:25:13 -0400
Received: by nz-out-0506.google.com with SMTP id f1so330683nzc
	for ; Thu, 28 Jun 2007 08:25:12 -0700 (PDT)
Message-ID: <4683D2CC.3070003@codemonkey.ws>
Date: Thu, 28 Jun 2007 10:25:00 -0500
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH] starting qemu vnc session on a pre-allocated port
References: <467E6C25.3010908@codefidence.com> <467E7AB0.1000909@codemonkey.ws>
	<467EE5E0.5010605@codefidence.com> <467EF2D6.90501@codemonkey.ws>
	<467F7CC6.6000207@codefidence.com> <467FAD19.4000404@codemonkey.ws>
	<4680E7BB.80408@codefidence.com>
In-Reply-To: <4680E7BB.80408@codefidence.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org

Gilad Ben-Yossef wrote:
>> It also implies that the daemon will be running for the entire
>> lifetime of the VM.
>
> No. In fact, running an extra daemon for the entire lifetime of the
> VM is exactly what I'm trying to avoid (one of the things, anyway).
>
> Now I see why you think the unix domain socket option solves the problem
> already. Our use case is actually a little different.
> Let me explain:
>
> The machine running qemu has a web-based interface to start VMs.
> A user asks for a new VM to start by browsing to a URL. The CGI
> implementing that URL will start a new qemu instance, send the user's
> web browser an HTML page with an embedded Java VNC viewer, and terminate.

Passing an fd is still the wrong solution due to the problems with
save/restore/migrate.

It may be interesting to have something like -vnc tcp://0.0.0.0:5900-6000
to let QEMU try to find an unused port in the given range. Combined with
-daemonize and having the monitor on a Unix socket, you could:

1) create a VM with:
   qemu -vnc tcp://0.0.0.0:5900-6000 -monitor unix:/path/to/socket -daemonize
2) *wait* for qemu to finish starting up and daemonize properly
3) connect to /path/to/socket and issue an 'info vnc' command to discover
   which port it's actually using
4) render that port into your HTML.

The nice thing about this is that it not only continues to work with
save/restore/migrate, it's also smart enough to allocate a new port, so
you always tend to succeed. Choosing :3 might be okay on machine A, but
there's no guarantee it's okay on machine B, so you have to allow QEMU to
find a new port after restore/migrate.

I prefer this syntax over Xen's -vncunused since you can restrict the
allocated ports to a particular range.

Regards,

Anthony Liguori

> Here is the problem: the HTML page needs to have the port number
> for the Java VNC viewer to connect to embedded in it.
>
> Of course, the CGI can pick a free port and ask qemu to start the VNC
> server on it, but that means the CGI needs to maintain a list of
> free/used port ranges in some shared data structure, track the qemu
> instance to know when it has terminated and the port is free again, and,
> of course, hope that no unrelated process snatches a port from the range;
> in general it duplicates the information the operating system already has
> about free/in-use ports.
>
> In our suggested solution, our CGI simply opens a listening socket on an
> ephemeral port, letting the OS do the allocation, hands the file
> descriptor to qemu to use, and *terminates* (after sending the HTML page).
> No long-running daemons.
>
> Having a daemon sit around just to shove data between the Unix domain
> socket and the TCP socket, and needing to track it and all, really puts
> an ugly dent in the whole idea. More important, I think what we are doing
> is a rather general concept, certainly not unique to us (just look at
> qemudo, only of course they got it wrong... :-)
>
> Hope this explains things a little better.
>
>
>> Since VMs are meant to run for very long periods of time, this is
>> quite limiting. By utilizing a domain socket, you gain the ability
>> to record on disk the state of the daemon and then restart. The
>> layer of redirection also allows you to let your users change the VNC
>> server properties while the VM is running (so you can change the
>> listening vnc display from localhost:3 to :22 without restarting the
>> VM).
>
> All of the above are really nice to have, but not at the cost of
> extra management overhead, as explained above.
>
> Also, our VM lifetime is typically 15 minutes... :-)
>
>> Plus, live migration has no hope of working if you're passing file
>> descriptors on the command line, as they're meaningless once you've
>> migrated.
>
> That, I have no answer for. What do you do with the Unix domain socket?
> Open it by path/filename on the new machine?
>
> Gilad
>
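[Editor's note: Anthony's four-step workflow could be scripted roughly as
below. This is only a sketch: the -vnc tcp://0.0.0.0:5900-6000 port-range
syntax is the proposal in this mail, not an existing QEMU option, and the
exact text that 'info vnc' prints varies between QEMU versions, so the
socket path and the "address: host:port" pattern assumed here are
illustrative, not authoritative.]

```python
import re
import socket

def parse_vnc_port(info_vnc_output):
    # Step 4's input: pull the TCP port out of the monitor's 'info vnc'
    # reply, assuming it contains a line like "address: 0.0.0.0:5901".
    # (The exact format may differ between QEMU versions.)
    m = re.search(r"address:\s*\S+:(\d+)", info_vnc_output)
    return int(m.group(1)) if m else None

def query_vnc_port(monitor_path):
    # Step 3: connect to the human monitor on its Unix socket (the
    # path given to -monitor unix:...), ask 'info vnc', read the reply.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(monitor_path)
    s.recv(4096)                     # discard the monitor banner/prompt
    s.sendall(b"info vnc\n")
    reply = s.recv(4096).decode("ascii", "replace")
    s.close()
    return parse_vnc_port(reply)     # the port to render into the HTML
```

The CGI would run this after qemu has daemonized (step 2) and embed the
returned port in the page it sends to the browser, with no daemon of its
own left running.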