From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4F5CC692.7050002@codemonkey.ws>
Date: Sun, 11 Mar 2012 10:36:50 -0500
From: Anthony Liguori
References: <4F5CA590.1000605@redhat.com> <4F5CB429.4000907@codemonkey.ws> <20120311152528.GD7273@garlic.redhat.com>
In-Reply-To: <20120311152528.GD7273@garlic.redhat.com>
Subject: Re: [Qemu-devel] seamless migration with spice
To: Yonit Halperin, qemu-devel, "spice-devel@freedesktop.org", Gerd Hoffmann

On 03/11/2012 10:25 AM, Alon Levy wrote:
> On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote:
>> On 03/11/2012 08:16 AM, Yonit Halperin wrote:
>>> Hi,
>>>
>>> We would like to implement seamless migration for Spice, i.e., keeping
>>> the currently open spice client session valid after migration.
>>> Today, the spice client establishes the connection to the destination
>>> before migration starts; when migration completes, the client's
>>> session is moved to the destination, but all the session data is
>>> reset.
>>>
>>> We face two main challenges in implementing seamless migration:
>>>
>>> (1) The spice client must establish the connection to the destination
>>> before the spice password expires. However, during migration the qemu
>>> main loop is not processed, and by the time migration completes the
>>> password may already have expired.
>>>
>>> Today we solve this with the async command client_migrate_info, which
>>> is expected to be called before migration starts. The command
>>> completes once the spice client has connected to the destination (or
>>> on timeout).
>>>
>>> Since async monitor commands are no longer supported, we are looking
>>> for a new solution.
>>
>> We need to fix async monitor commands. Luiz sent a note out to
>> qemu-devel recently on this topic.
>>
>> I'm not sure we'll get there for 1.1 but if we do a 3 month release
>> cycle for 1.2, then that's a pretty reasonable target IMHO.
>
> What about the second part? It's independent of the async issue.

Isn't this a client problem? The client has this state, no? If the state
is stored in the server, wouldn't it be marshaled as part of the server's
migration state?

I read that as: the client needs to marshal its own local state in the
session and restore it in the new session.

Regards,

Anthony Liguori

>
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>> The straightforward solution would be to process the main loop on the
>>> destination side during migration.
>>>
>>> (2) In order to restore the source-client spice session on the
>>> destination, we need to pass data from the source to the destination.
>>> Examples of such data: in-flight copy-paste data, in-flight usb data.
>>> We want to pass the data from the source spice server to the
>>> destination via the spice client. This introduces a possible race:
>>> after migration completes, the source qemu can be killed before the
>>> spice server completes transferring the migration data to the client.
>>>
>>> Possible solutions:
>>> - Have async migration state notifiers.
>>> The migration state will change only after all the notifiers'
>>> completion callbacks have been called.
>>> - libvirt will wait for the qmp event corresponding to spice
>>> completing its migration, and only then kill the source qemu process.
>>>
>>> Any thoughts?
>>>
>>> Thanks,
>>> Yonit.
>>>
>>
>>