Date: Thu, 30 Jun 2011 13:26:26 +0300
From: Yonit Halperin
To: Gerd Hoffmann, qemu-devel@nongnu.org, Alon Levy
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
Message-ID: <4E0C4F52.90405@redhat.com>
In-Reply-To: <20110629113812.GS30873@bow.redhat.com>

On 06/29/2011 02:38 PM, Alon Levy wrote:
> On Wed, Jun 29, 2011 at 12:25:00PM +0200, Gerd Hoffmann wrote:
>> On 06/29/11 11:21, Alon Levy wrote:
>>> On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>>>>    Hi,
>>>>
>>>>>> I think it will receive them after migration, since the command ring
>>>>>> was stored.
>>>>> Our confusion here is because you think there is still seamless
>>>>> migration. Unfortunately it doesn't work right now; unless you plan
>>>>> to fix it, the only form of migration right now is switch-host, and
>>>>> for that those commands will be lost: the new connection will receive
>>>>> images for each surface. If you treat the client as seamless you are
>>>>> completely right.
>>>>
>>>> The spice server needs this too so it can render the surfaces
>>>> correctly before sending the surface images to the client (or send
>>>> the old surfaces and the commands on top of that).
>>>>
>>>> That is one difference between qemu migration and S3 state: for qemu
>>>> migration it is no problem to have unprocessed commands in the
>>>> rings, they will simply be processed once the spice server state is
>>>> restored. When the guest driver restores the state as it comes back
>>>> from S3, it needs the command rings to do so; that's why they must
>>>> be flushed before entering S3 ...
>>>
>>> You mean it needs the command rings to be empty beforehand, since they
>>> are lost during the reset, right?
>>
>> One more reason. Wasn't aware there is a reset anyway, was thinking
> hah. The reset is the whole mess... otherwise S3 would have been trivial,
> and actually disabling the reset was the first thing we did, but of
> course it doesn't solve S4 in that case.
>
>> more about the command ordering. Without reset spice-server would
>> first process the old commands (which may reference non-existing
>> surfaces), then the new commands which re-create all the state, which
>> is simply the wrong order. With reset the old commands just get
>> lost, which causes rendering bugs.
>>
>> Is it an option to have the driver just remove the commands from the
>> ring (and resubmit after resume)? I suspect it isn't, as there is no
>> race-free way to do that, right?
> Right - the whole ring design assumes that only the consumer side
> removes entries. Of course we could add an IO for that (two, actually:
> FREEZE and UNFREEZE), but I think this is the wrong approach. Instead,
> render all the commands and drop the wait for the client. Right now,
> if we flush, we do actually wait for the client, but I plan to remove
> that later. (We already do this for update_area, and that is much
> higher frequency.)
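
Just to make sure we mean the same thing by "only the consumer side
removes": in a single-producer/single-consumer ring only the consumer
ever advances its index, so the producer (the guest driver) has no
race-free way to take back commands it already published. A minimal
sketch, with generic indices rather than the actual qxl ring layout
from spice-protocol:

/* Minimal SPSC ring sketch; illustration only, not the real qxl rings. */
#include <stdint.h>

#define RING_SIZE 32                    /* power of two */

typedef struct Ring {
    uint32_t prod_idx;                  /* written only by the producer */
    uint32_t cons_idx;                  /* written only by the consumer */
    void    *items[RING_SIZE];
} Ring;

/* Producer (guest driver): publish one command. */
static int ring_push(Ring *r, void *cmd)
{
    if (r->prod_idx - r->cons_idx == RING_SIZE) {
        return -1;                      /* full */
    }
    r->items[r->prod_idx % RING_SIZE] = cmd;
    __sync_synchronize();               /* item visible before the index */
    r->prod_idx++;
    return 0;
}

/* Consumer (spice worker): the only side that removes entries.  If the
 * producer also tried to "un-push" commands, both sides would race on
 * the indices and the slots, which is why the driver cannot simply pull
 * its commands out of the ring and resubmit them after resume. */
static void *ring_pop(Ring *r)
{
    void *cmd;

    if (r->cons_idx == r->prod_idx) {
        return NULL;                    /* empty */
    }
    __sync_synchronize();               /* see the index before the item */
    cmd = r->items[r->cons_idx % RING_SIZE];
    r->cons_idx++;
    return cmd;
}

So letting the driver reclaim entries would indeed require stopping the
consumer first, which is exactly what FREEZE/UNFREEZE would have to
guarantee.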
>
>> cheers,
>>    Gerd
>>

To conclude: we still need to flush the command ring before stop, we
don't want to change migration, so we still need to change the
spice-server API. Gerd?
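
To pin down the ordering I am asking about, a small sketch; every name
in it is made up for illustration (it is not the existing hw/qxl.c or
spice-server code), and spice_worker_flush_commands() stands for exactly
the server API addition in question:

#include <stdio.h>

/* All names below are invented; only the sequence matters. */

typedef struct QXLState { int unused; } QXLState;  /* stand-in device state */

static void spice_worker_stop(QXLState *qxl)       /* stands for today's stop */
{
    (void)qxl;
    printf("worker stopped\n");
}

/* The missing spice-server entry point: render every command still in
 * the command ring, without waiting for the client to acknowledge. */
static void spice_worker_flush_commands(QXLState *qxl)
{
    (void)qxl;
    printf("pending ring commands rendered on the server\n");
}

/* On stop (including the S3/S4 path) the flush must happen before the
 * worker stops, so the guest-side reset cannot lose commands.  Ordinary
 * qemu migration stays untouched: there the ring is migrated as-is and
 * processed once the spice server state is restored. */
static void qxl_on_vm_stop(QXLState *qxl)
{
    spice_worker_flush_commands(qxl);
    spice_worker_stop(qxl);
}

int main(void)
{
    QXLState qxl = { 0 };
    qxl_on_vm_stop(&qxl);
    return 0;
}

Presumably the same flush is also what QXL_IO_UPDATE_MEM would end up
calling into on the guest-driver path.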