From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([140.186.70.92]:40893)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Qb7zs-0003wI-DW
	for qemu-devel@nongnu.org; Mon, 27 Jun 2011 05:20:49 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1Qb7zq-0002cZ-He
	for qemu-devel@nongnu.org; Mon, 27 Jun 2011 05:20:48 -0400
Received: from mx1.redhat.com ([209.132.183.28]:55042)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Qb7zp-0002cM-Qp
	for qemu-devel@nongnu.org; Mon, 27 Jun 2011 05:20:46 -0400
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id p5R9Ki8b024241
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for ; Mon, 27 Jun 2011 05:20:44 -0400
Date: Mon, 27 Jun 2011 11:20:36 +0200
From: Alon Levy
Message-ID: <20110627092036.GR2731@bow.redhat.com>
References: <20110622095754.GG14599@bow.redhat.com>
	<730034918.33031.1309107546747.JavaMail.root@zmail02.collab.prod.int.phx2.redhat.com>
	<20110626174715.GJ2731@bow.redhat.com>
	<4E08231D.30506@redhat.com>
	<20110627081635.GN2731@bow.redhat.com>
	<4E083E8B.7010302@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4E083E8B.7010302@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: yhalperi
Cc: qemu-devel@nongnu.org, Gerd Hoffmann

On Mon, Jun 27, 2011 at 11:25:47AM +0300, yhalperi wrote:
> On 06/27/2011 11:16 AM, Alon Levy wrote:
> >On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
> >>On 06/26/2011 08:47 PM, Alon Levy wrote:
> >>>On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> >>>>Sorry for the late response, wasn't available.
> >>>>I'm afraid that (1) and (2) will indeed wake up the worker, but will
> >>>>not assure emptying the command ring, as it depends on the client
> >>>>pipe size.
> >>>>
> >>>
> >>>I actually can't figure out what wakeup does (that's what both NOTIFY and
> >>>NOTIFY_CURSOR do, see hw/qxl.c).
> >>It turns on an event the worker is waiting for on epoll_wait.
> >Ah, ok. So it will only read up to the pipe size like you said.
> >
> >>
> >>What we did in prepare_to_sleep before was
> >>>call flush_all_qxl_commands, which reads the command and cursor rings
> >>>until they are empty (calling flush_cursor_commands and
> >>>flush_display_commands), waiting whenever the pipe is too large.
> >>>(avoiding this wait still needs to be done, but after ensuring suspend
> >>>is correct).
> >>>
> >>>More to the point, this flush is done from handle_dev_destroy_surfaces,
> >>>but that is not good enough, since it also destroys the surfaces before
> >>>we have a chance to actually render them.
> >>>
> >>>Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
> >>>
> >>We can do it as long as it doesn't affect migration - does STOP
> >>happen before or after savevm? If it happens after - we can't touch
> >>the command ring, i.e., we can't call flush. And even if it happens
> >>before - do we really want to call flush during migration and
> >>presumably slow it down?
> >But if we don't, then the client connected before migration doesn't get
> >some of the messages it was supposed to get.
> I think it will receive them after migration, since the command ring
> was stored.

Our confusion here is because you think there is still seamless
migration. Unfortunately it doesn't work right now. Unless you plan to
fix it, the only form of migration right now is switch-host, and with
switch-host those commands will be lost: the new connection will receive
an image for each surface. If you treat the client as seamless, you are
completely right.
> >
> >stop is called before hw/qxl.c:qxl_pre_save, from
> >ui/spice-display.c:qemu_spice_vm_change_state_handler
> >
> >
> >>
> >>Cheers,
> >>Yonit.
> >>
> >>>>
> >>>>----- Original Message -----
> >>>>From: "Alon Levy"
> >>>>To: "Gerd Hoffmann"
> >>>>Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> >>>>Sent: Wednesday, June 22, 2011 11:57:54 AM
> >>>>Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> >>>>
> >>>>On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >>>>>   Hi,
> >>>>>
> >>>>>>>worker call. We can add an I/O command to ask qxl to push the
> >>>>>>>release queue head to the release ring.
> >>>>>>
> >>>>>>So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands
> >>>>>>instead of using the val parameter?
> >>>>>
> >>>>>I'd like to (a) avoid updating the libspice-server API if possible
> >>>>>and (b) have one I/O command for each logical step. So going into
> >>>>>S3 could look like this:
> >>>>>
> >>>>>  (0) stop putting new commands into the rings
> >>>>>  (1) QXL_IO_NOTIFY_CMD
> >>>>>  (2) QXL_IO_NOTIFY_CURSOR
> >>>>>      qxl calls notify(), to make the worker thread empty the command
> >>>>>      rings before it processes the next dispatcher request.
> >>>>>  (3) QXL_IO_FLUSH_SURFACES (to be implemented)
> >>>>>      qxl calls stop()+start(), spice-server renders all surfaces,
> >>>>>      thereby flushing state to device memory.
> >>>>>  (4) QXL_IO_DESTROY_ALL_SURFACES
> >>>>>      zap surfaces
> >>>>>  (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >>>>>      push release queue head into the release ring, so the guest
> >>>>>      will see it and can release everything.
> >>>>>
> >>>>>(1)+(2)+(4) exist already.
> >>>>>(3)+(5) can be done without libspice-server changes.
> >>>>>
> >>>>>Looks workable?
> >>>>
> >>>>yeah. Yonit?
> >>>>
> >>>>>
> >>>>>cheers,
> >>>>>   Gerd
> >>>>>
> >>>>>
> >>>>
> >>
> >>
>