* [Qemu-devel] seamless migration with spice @ 2012-03-11 13:16 Yonit Halperin 2012-03-11 14:18 ` Anthony Liguori 2012-03-12 8:42 ` Hans de Goede 0 siblings, 2 replies; 26+ messages in thread From: Yonit Halperin @ 2012-03-11 13:16 UTC (permalink / raw) To: qemu-devel, spice-devel@freedesktop.org; +Cc: Gerd Hoffmann, Anthony Liguori Hi, We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is reset. We face 2 main challenges when it comes to implementing seamless migration: (1) The Spice client must establish the connection to the destination before the spice password expires. However, during migration, the qemu main loop is not processed, and when migration completes, the password might have already expired. Today we solve this with the async command client_migrate_info, which is expected to be called before migration starts. The command completes once the spice client has connected to the destination (or on a timeout). Since async monitor commands are no longer supported, we are looking for a new solution. The straightforward solution would be to process the main loop on the destination side during migration. (2) In order to restore the source-client spice session on the destination, we need to pass data from the source to the destination. Examples of such data: in-flight copy-paste data, in-flight USB data. We want to pass the data from the source spice server to the destination via the Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice-server completes transferring the migration data to the client. Possible solutions: - Have async migration state notifiers. The migration state will change only after all the notifiers' completion callbacks have been called. - libvirt will wait for a QMP event corresponding to spice completing its migration, and only then kill the source qemu process. Any thoughts? Thanks, Yonit. ^ permalink raw reply [flat|nested] 26+ messages in thread
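For illustration, a minimal sketch of what the proposed async migration state notifiers could look like on the qemu side. None of these names exist in qemu; this is only a hypothetical shape for the idea that the migration state only advances once every registered notifier has reported completion.

/* Hypothetical sketch only -- not existing qemu API. */
typedef void (AsyncNotifierDone)(void *opaque);

typedef struct AsyncMigrationNotifier {
    /*
     * Called on every migration state change.  The callee must call
     * done(opaque) when it has finished its work (e.g. spice-server has
     * flushed the remaining session data to the client); the migration
     * state machine only advances once all notifiers have done so.
     */
    void (*notify)(struct AsyncMigrationNotifier *n, int new_state,
                   AsyncNotifierDone *done, void *opaque);
} AsyncMigrationNotifier;

void add_async_migration_state_notifier(AsyncMigrationNotifier *n);
void remove_async_migration_state_notifier(AsyncMigrationNotifier *n);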
* Re: [Qemu-devel] seamless migration with spice 2012-03-11 13:16 [Qemu-devel] seamless migration with spice Yonit Halperin @ 2012-03-11 14:18 ` Anthony Liguori 2012-03-11 15:25 ` Alon Levy 2012-03-12 8:42 ` Hans de Goede 1 sibling, 1 reply; 26+ messages in thread From: Anthony Liguori @ 2012-03-11 14:18 UTC (permalink / raw) To: Yonit Halperin; +Cc: qemu-devel, spice-devel@freedesktop.org, Gerd Hoffmann On 03/11/2012 08:16 AM, Yonit Halperin wrote: > Hi, > > We would like to implement seamless migration for Spice, i.e., keeping the > currently opened spice client session valid after migration. > Today, the spice client establishes the connection to the destination before > migration starts, and when migration completes, the client's session is moved to > the destination, but all the session data is being reset. > > We face 2 main challenges when coming to implement seamless migration: > > (1) Spice client must establish the connection to the destination before the > spice password expires. However, during migration, qemu main loop is not > processed, and when migration completes, the password might have already expired. > > Today we solve this by the async command client_migrate_info, which is expected > to be called before migration starts. The command is completed > once spice client has connected to the destination (or a timeout). > > Since async monitor commands are no longer supported, we are looking for a new > solution. We need to fix async monitor commands. Luiz sent a note our to qemu-devel recently on this topic. I'm not sure we'll get there for 1.1 but if we do a 3 month release cycle for 1.2, then that's a pretty reasonable target IMHO. Regards, Anthony Liguori > The straightforward solution would be to process the main loop on the > destination side during migration. > > (2) In order to restore the source-client spice session in the destination, we > need to pass data from the source to the destination. > Example for such data: in flight copy paste data, in flight usb data > We want to pass the data from the source spice server to the destination, via > Spice client. This introduces a possible race: after migration completes, the > source qemu can be killed before the spice-server completes transferring the > migration data to the client. > > Possible solutions: > - Have an async migration state notifiers. The migration state will change after > all the notifiers complete callbacks are called. > - libvirt will wait for qmp event corresponding to spice completing its > migration, and only then will kill the source qemu process. > > Any thoughts? > > Thanks, > Yonit. > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] seamless migration with spice 2012-03-11 14:18 ` Anthony Liguori @ 2012-03-11 15:25 ` Alon Levy 2012-03-11 15:36 ` Anthony Liguori 0 siblings, 1 reply; 26+ messages in thread From: Alon Levy @ 2012-03-11 15:25 UTC (permalink / raw) To: Anthony Liguori Cc: Yonit Halperin, qemu-devel, spice-devel@freedesktop.org, Gerd Hoffmann On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote: > On 03/11/2012 08:16 AM, Yonit Halperin wrote: > >Hi, > > > >We would like to implement seamless migration for Spice, i.e., keeping the > >currently opened spice client session valid after migration. > >Today, the spice client establishes the connection to the destination before > >migration starts, and when migration completes, the client's session is moved to > >the destination, but all the session data is being reset. > > > >We face 2 main challenges when coming to implement seamless migration: > > > >(1) Spice client must establish the connection to the destination before the > >spice password expires. However, during migration, qemu main loop is not > >processed, and when migration completes, the password might have already expired. > > > >Today we solve this by the async command client_migrate_info, which is expected > >to be called before migration starts. The command is completed > >once spice client has connected to the destination (or a timeout). > > > >Since async monitor commands are no longer supported, we are looking for a new > >solution. > > We need to fix async monitor commands. Luiz sent a note our to > qemu-devel recently on this topic. > > I'm not sure we'll get there for 1.1 but if we do a 3 month release > cycle for 1.2, then that's a pretty reasonable target IMHO. What about the second part? it's independant of the async issue. > > Regards, > > Anthony Liguori > > >The straightforward solution would be to process the main loop on the > >destination side during migration. > > > >(2) In order to restore the source-client spice session in the destination, we > >need to pass data from the source to the destination. > >Example for such data: in flight copy paste data, in flight usb data > >We want to pass the data from the source spice server to the destination, via > >Spice client. This introduces a possible race: after migration completes, the > >source qemu can be killed before the spice-server completes transferring the > >migration data to the client. > > > >Possible solutions: > >- Have an async migration state notifiers. The migration state will change after > >all the notifiers complete callbacks are called. > >- libvirt will wait for qmp event corresponding to spice completing its > >migration, and only then will kill the source qemu process. > > > >Any thoughts? > > > >Thanks, > >Yonit. > > > > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] seamless migration with spice 2012-03-11 15:25 ` Alon Levy @ 2012-03-11 15:36 ` Anthony Liguori 2012-03-11 19:11 ` Yonit Halperin 2012-03-12 7:57 ` Gerd Hoffmann 0 siblings, 2 replies; 26+ messages in thread From: Anthony Liguori @ 2012-03-11 15:36 UTC (permalink / raw) To: Yonit Halperin, qemu-devel, spice-devel@freedesktop.org, Gerd Hoffmann On 03/11/2012 10:25 AM, Alon Levy wrote: > On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote: >> On 03/11/2012 08:16 AM, Yonit Halperin wrote: >>> Hi, >>> >>> We would like to implement seamless migration for Spice, i.e., keeping the >>> currently opened spice client session valid after migration. >>> Today, the spice client establishes the connection to the destination before >>> migration starts, and when migration completes, the client's session is moved to >>> the destination, but all the session data is being reset. >>> >>> We face 2 main challenges when coming to implement seamless migration: >>> >>> (1) Spice client must establish the connection to the destination before the >>> spice password expires. However, during migration, qemu main loop is not >>> processed, and when migration completes, the password might have already expired. >>> >>> Today we solve this by the async command client_migrate_info, which is expected >>> to be called before migration starts. The command is completed >>> once spice client has connected to the destination (or a timeout). >>> >>> Since async monitor commands are no longer supported, we are looking for a new >>> solution. >> >> We need to fix async monitor commands. Luiz sent a note our to >> qemu-devel recently on this topic. >> >> I'm not sure we'll get there for 1.1 but if we do a 3 month release >> cycle for 1.2, then that's a pretty reasonable target IMHO. > > What about the second part? it's independant of the async issue. Isn't this a client problem? The client has this state, no? If the state is stored in the server, wouldn't it be marshaled as part of the server's migration state? I read that as the client needs to marshal it's own local state in the session and restore it in the new session. Regards, Anthony Liguori > >> >> Regards, >> >> Anthony Liguori >> >>> The straightforward solution would be to process the main loop on the >>> destination side during migration. >>> >>> (2) In order to restore the source-client spice session in the destination, we >>> need to pass data from the source to the destination. >>> Example for such data: in flight copy paste data, in flight usb data >>> We want to pass the data from the source spice server to the destination, via >>> Spice client. This introduces a possible race: after migration completes, the >>> source qemu can be killed before the spice-server completes transferring the >>> migration data to the client. >>> >>> Possible solutions: >>> - Have an async migration state notifiers. The migration state will change after >>> all the notifiers complete callbacks are called. >>> - libvirt will wait for qmp event corresponding to spice completing its >>> migration, and only then will kill the source qemu process. >>> >>> Any thoughts? >>> >>> Thanks, >>> Yonit. >>> >> >> ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] seamless migration with spice 2012-03-11 15:36 ` Anthony Liguori @ 2012-03-11 19:11 ` Yonit Halperin 2012-03-12 7:57 ` Gerd Hoffmann 1 sibling, 0 replies; 26+ messages in thread From: Yonit Halperin @ 2012-03-11 19:11 UTC (permalink / raw) To: Anthony Liguori; +Cc: qemu-devel, spice-devel@freedesktop.org, Gerd Hoffmann Hi. On 03/11/2012 05:36 PM, Anthony Liguori wrote: > On 03/11/2012 10:25 AM, Alon Levy wrote: >> On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote: >>> On 03/11/2012 08:16 AM, Yonit Halperin wrote: >>>> Hi, >>>> >>>> We would like to implement seamless migration for Spice, i.e., >>>> keeping the >>>> currently opened spice client session valid after migration. >>>> Today, the spice client establishes the connection to the >>>> destination before >>>> migration starts, and when migration completes, the client's session >>>> is moved to >>>> the destination, but all the session data is being reset. >>>> >>>> We face 2 main challenges when coming to implement seamless migration: >>>> >>>> (1) Spice client must establish the connection to the destination >>>> before the >>>> spice password expires. However, during migration, qemu main loop is >>>> not >>>> processed, and when migration completes, the password might have >>>> already expired. >>>> >>>> Today we solve this by the async command client_migrate_info, which >>>> is expected >>>> to be called before migration starts. The command is completed >>>> once spice client has connected to the destination (or a timeout). >>>> >>>> Since async monitor commands are no longer supported, we are looking >>>> for a new >>>> solution. >>> >>> We need to fix async monitor commands. Luiz sent a note our to >>> qemu-devel recently on this topic. >>> >>> I'm not sure we'll get there for 1.1 but if we do a 3 month release >>> cycle for 1.2, then that's a pretty reasonable target IMHO. >> >> What about the second part? it's independant of the async issue. > > Isn't this a client problem? The client has this state, no? > No, part of the data is server specific. > If the state is stored in the server, wouldn't it be marshaled as part > of the server's migration state? > We currently don't restore the server state. That is the problem we want to solve. I meant that the server state can be marshaled from the source to the client, and from the client to the destination. The client serves as the mediator. Another option that we thought about was using save/load vmstate. Regards, Yonit. > I read that as the client needs to marshal it's own local state in the > session and restore it in the new session. > > Regards, > > Anthony Liguori > >> >>> >>> Regards, >>> >>> Anthony Liguori >>> >>>> The straightforward solution would be to process the main loop on the >>>> destination side during migration. >>>> >>>> (2) In order to restore the source-client spice session in the >>>> destination, we >>>> need to pass data from the source to the destination. >>>> Example for such data: in flight copy paste data, in flight usb data >>>> We want to pass the data from the source spice server to the >>>> destination, via >>>> Spice client. This introduces a possible race: after migration >>>> completes, the >>>> source qemu can be killed before the spice-server completes >>>> transferring the >>>> migration data to the client. >>>> >>>> Possible solutions: >>>> - Have an async migration state notifiers. The migration state will >>>> change after >>>> all the notifiers complete callbacks are called. 
>>>> - libvirt will wait for qmp event corresponding to spice completing its >>>> migration, and only then will kill the source qemu process. >>>> >>>> Any thoughts? >>>> >>>> Thanks, >>>> Yonit. >>>> >>> >>> > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] seamless migration with spice 2012-03-11 15:36 ` Anthony Liguori 2012-03-11 19:11 ` Yonit Halperin @ 2012-03-12 7:57 ` Gerd Hoffmann 2012-03-12 8:51 ` [Qemu-devel] [Spice-devel] " Hans de Goede 1 sibling, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 7:57 UTC (permalink / raw) To: Anthony Liguori; +Cc: Yonit Halperin, qemu-devel, spice-devel@freedesktop.org Hi, >> What about the second part? it's independant of the async issue. > > Isn't this a client problem? The client has this state, no? It is state of the client <-> server session. Today the spice client creates a new session on migration, so there is simply no need to maintain any state. The drawback is that everything needs to be resent from the server to the client. That's why we want to be able to continue the spice session, so the client caches will stay valid. Of course the spice-server on the migration target needs the session state for that, i.e. it must know, for example, which bits the client has cached and which it hasn't. > If the state is stored in the server, wouldn't it be marshaled as part > of the server's migration state? spice-server is stateless today when it comes to migration. QXL handles all (device) state by keeping track of some commands (such as create/destroy surface) which it needs to transfer on migration, and by asking spice-server to render all surfaces on migration, which effectively flushes the spice server state to qxl device memory. To transfer the client session state there are basically two options: (a) transfer it as part of the qemu migration data stream. I don't want to have any details about the qemu migration implementation and/or protocol in the spice-server library api, which basically leaves an ugly "transfer-this-blob-for-me-please" style interface as the only option. (b) transfer it as part of the spice protocol. As the spice client has a connection to both source and target while the migration runs, we can send session state from the source host via the spice client to the target host. This needs some form of synchronization, to make sure both vmstate and spice migration are completed when qemu on the source machine quits. I think (b) is the better approach. cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
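To make option (a)'s drawback concrete, the "transfer-this-blob-for-me-please" interface would look roughly like the sketch below. All names here are hypothetical, not the real libspice-server API; SpiceServer is declared locally only to keep the sketch self-contained.

/* Hypothetical sketch of option (a); not the real spice-server API. */
#include <stddef.h>
#include <stdint.h>

typedef struct SpiceServer SpiceServer;   /* opaque handle, as in spice.h */

/* Source side: spice-server serializes the client session state into an
 * opaque blob that qemu embeds in its migration stream. */
int spice_server_get_migration_data(SpiceServer *s,
                                    uint8_t **data, size_t *len);

/* Target side: qemu hands the blob back so spice-server can restore the
 * session state before the client switches over. */
int spice_server_set_migration_data(SpiceServer *s,
                                    const uint8_t *data, size_t len);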
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 7:57 ` Gerd Hoffmann @ 2012-03-12 8:51 ` Hans de Goede 2012-03-12 9:46 ` Gerd Hoffmann 2012-03-12 11:39 ` David Jaša 0 siblings, 2 replies; 26+ messages in thread From: Hans de Goede @ 2012-03-12 8:51 UTC (permalink / raw) To: Gerd Hoffmann; +Cc: spice-devel@freedesktop.org, qemu-devel, Anthony Liguori Hi, On 03/12/2012 08:57 AM, Gerd Hoffmann wrote: > Hi, > >>> What about the second part? it's independant of the async issue. >> >> Isn't this a client problem? The client has this state, no? > > It is state of the client<-> server session. Today spice client > creates a new session on migration, so there is simply no need to > maintain any state. Drawback is that everything needs to be resent from > the server to the client. Thats why we want be able to continue the > spice session, so the client caches will stay valid. > > Of course the spice-server on the migration target needs the session > state for that, i.e. know for example which bits the client has cached > which it hasn't. > >> If the state is stored in the server, wouldn't it be marshaled as part >> of the server's migration state? > > spice-server is stateless today when it comes to migration. QXL handles > all (device) state, by keeping track of some commands (such as > create/destroy surface) which it needs to transfer on migration, and by > asking spice-server to render all surfaces on migration, which > effectively flushes the spice server state to qxl device memory. > > To transfer the client session state there are basically two options: > > (a) transfer it as part of the qemu migration data stream. I don't > want have any details about the qemu migration implementation > and/or protocol in the spice-server library api, which basically > leaves a ugly "transfer-this-blob-for-me-please" style interface > as only option. > > (b) transfer it as part of the spice protocol. As the spice > client has a connection to both source and target while the > migration runs we can send session state from the source host via > spice client to the target host. This needs some form of > synchronization, to make sure both vmstate and spice migration > are completed when qemu on the source machine quits. The problem with (b) is that, iirc, the way (b) was implemented in the past was still the big blob approach, but then passing the blob through the client, which means an evil client could modify it, causing all sorts of "interesting" behavior inside spice-server. Since we're re-implementing this, to me the send-a-blob-through-the-client approach is simply not acceptable from a security pov; also see my previous mail in this thread. > I think (b) is the better approach. I disagree. Note that there is more info to send over than just which surfaces / images are cached by the client. There are also things like partially complete agent channel messages, including how many bytes must be read to complete the command, etc. IMHO (b) would only be acceptable if the data sent through the client stops being a blob. Instead the client could simply send a list of all surface ids, etc. which it has cached after it connects to / starts using the new host. Note that the old host needs to send nothing for this, this is info the client already has, also removing the need for synchronization. As for certain other data, such as (but not limited to) partially parsed agent messages, these should be sent through the regular vmstate methods IMHO.
So I see 2 options: 1) Do (a), sending everything that way. 2) Do (a), sending non-client state that way, and let the client send state like which surfaces it has cached when the new session starts. Regards, Hans ^ permalink raw reply [flat|nested] 26+ messages in thread
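A sketch of what the client-side report in option 2 might look like as a spice message: after attaching to the new host, the client simply tells it what it already has. The message name and layout below are invented for illustration; nothing like this existed in the spice protocol at the time.

/* Hypothetical client -> server message, sent right after the client
 * attaches to the target host; not an existing spice protocol message. */
#include <stdint.h>

typedef struct SpiceMsgcMainCacheInfo {
    uint32_t num_surfaces;    /* surfaces the client still holds          */
    uint32_t num_images;      /* entries still present in the image cache */
    /* followed by num_surfaces uint32_t surface ids and
     * num_images uint64_t image cache keys */
    uint32_t surface_ids[0];
} SpiceMsgcMainCacheInfo;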
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 8:51 ` [Qemu-devel] [Spice-devel] " Hans de Goede @ 2012-03-12 9:46 ` Gerd Hoffmann 2012-03-12 10:03 ` Alon Levy ` (2 more replies) 2012-03-12 11:39 ` David Jaša 1 sibling, 3 replies; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 9:46 UTC (permalink / raw) To: Hans de Goede; +Cc: spice-devel@freedesktop.org, qemu-devel, Anthony Liguori Hi, > The problem with (b) is, that iirc the way b was implemented in the past > was still the big blob approach, but then pass the blob through the client, > which means an evil client could modify it, causing all sorts of > "interesting" > behavior inside spice-server. Since we're re-implementing this to me the > send a blob through the client approach is simply not acceptable from a > security pov, also see my previous mail in this thread. Agree. It should be a normal spice message which goes through the spice marshaller for parsing & sanity checking. > I disagree. Note that there is more info to send over then just which > surfaces / images are cached by the client. There also is things like > partial complete agent channel messages, including how much bytes must > be read > to complete the command, etc. Is there a complete list of the session state we need to save? > IMHO (b) would only be acceptable if the data send through the client stops > becoming a blob. Using (a) to send a blob isn't better. > Instead the client could simply send a list of all > surface ids, > etc. which it has cached after it connects to / starts using the new > host. Note > that the old hosts needs to send nothing for this, this is info the > client already > has, also removing the need for synchronization. Yes, some session state is known to the client anyway so we don't need a source <-> target channel for them. > As for certain other > data, such > as (but not limited to) partially parsed agent messages, these should be > send through the regular vmstate methods IMHO. That isn't easy to handle via vmstate, at least as soon as this goes beyond a fixed number of fields aka 'migrate over this struct for me'. Think multiple spice clients connected at the same time. > 1) Do (a), sending everything that way > 2) Do (a) sending non client state that way; and > let the client send state like which surfaces it has cached > when the new session starts. I think we have to look at each piece of state information needed by the target and look how to handle this best. cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 9:46 ` Gerd Hoffmann @ 2012-03-12 10:03 ` Alon Levy 2012-03-12 10:26 ` Gerd Hoffmann 2012-03-12 11:23 ` Hans de Goede 2012-03-12 12:47 ` Yonit Halperin 2 siblings, 1 reply; 26+ messages in thread From: Alon Levy @ 2012-03-12 10:03 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org On Mon, Mar 12, 2012 at 10:46:44AM +0100, Gerd Hoffmann wrote: > Hi, > > > The problem with (b) is, that iirc the way b was implemented in the past > > was still the big blob approach, but then pass the blob through the client, > > which means an evil client could modify it, causing all sorts of > > "interesting" > > behavior inside spice-server. Since we're re-implementing this to me the > > send a blob through the client approach is simply not acceptable from a > > security pov, also see my previous mail in this thread. > > Agree. It should be a normal spice message which goes through the spice > marshaller for parsing & sanity checking. > > > I disagree. Note that there is more info to send over then just which > > surfaces / images are cached by the client. There also is things like > > partial complete agent channel messages, including how much bytes must > > be read > > to complete the command, etc. > > Is there a complete list of the session state we need to save? > > > IMHO (b) would only be acceptable if the data send through the client stops > > becoming a blob. > > Using (a) to send a blob isn't better. Actually, we discussed this in the past and one benefit is that network between source and target qemu would be fast (otherwise migration wouldn't work in the first place), as opposed to source->client and client->dest. Additionally you save one transaction. > > > Instead the client could simply send a list of all > > surface ids, > > etc. which it has cached after it connects to / starts using the new > > host. Note > > that the old hosts needs to send nothing for this, this is info the > > client already > > has, also removing the need for synchronization. > > Yes, some session state is known to the client anyway so we don't need a > source <-> target channel for them. > > > As for certain other > > data, such > > as (but not limited to) partially parsed agent messages, these should be > > send through the regular vmstate methods IMHO. > > That isn't easy to handle via vmstate, at least as soon as this goes > beyond a fixed number of fields aka 'migrate over this struct for me'. > Think multiple spice clients connected at the same time. Migrate this struct n times for me. > > > 1) Do (a), sending everything that way > > 2) Do (a) sending non client state that way; and > > let the client send state like which surfaces it has cached > > when the new session starts. > > I think we have to look at each piece of state information needed by the > target and look how to handle this best. > > cheers, > Gerd > > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 10:03 ` Alon Levy @ 2012-03-12 10:26 ` Gerd Hoffmann 2012-03-12 11:29 ` Alon Levy 0 siblings, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 10:26 UTC (permalink / raw) To: Hans de Goede, spice-devel@freedesktop.org, qemu-devel, Anthony Liguori Hi, >>> As for certain other >>> data, such >>> as (but not limited to) partially parsed agent messages, these should be >>> send through the regular vmstate methods IMHO. >> >> That isn't easy to handle via vmstate, at least as soon as this goes >> beyond a fixed number of fields aka 'migrate over this struct for me'. >> Think multiple spice clients connected at the same time. > > Migrate this struct n times for me. I think for the agent case this isn't needed. Or is every client allowed to speak to the agent in case of multiple clients connected? I somehow doubt this can work as the agent protocol can't multicast ... cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 10:26 ` Gerd Hoffmann @ 2012-03-12 11:29 ` Alon Levy 2012-03-12 11:34 ` Gerd Hoffmann 0 siblings, 1 reply; 26+ messages in thread From: Alon Levy @ 2012-03-12 11:29 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org On Mon, Mar 12, 2012 at 11:26:50AM +0100, Gerd Hoffmann wrote: > Hi, > > >>> As for certain other > >>> data, such > >>> as (but not limited to) partially parsed agent messages, these should be > >>> send through the regular vmstate methods IMHO. > >> > >> That isn't easy to handle via vmstate, at least as soon as this goes > >> beyond a fixed number of fields aka 'migrate over this struct for me'. > >> Think multiple spice clients connected at the same time. > > > > Migrate this struct n times for me. > > I think for the agent case this isn't needed. Or is every client > allowed to speak to the agent in case of multiple clients connected? I > somehow doubt this can work as the agent protocol can't multicast ... > Actually the agent protocol does extend nicely to multiple clients - I forgot the name but there is an additional wrapper between the client/server originating message and the guest received message, that is currently used for server or client originating messages, and can be reused to have multiple in flight different client messages. We don't use it for guest generated messages, but we could as well. Multicast would be another number. > cheers, > Gerd > > ^ permalink raw reply [flat|nested] 26+ messages in thread
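The wrapper Alon refers to is (from memory) the chunk header in spice-protocol's vd_agent.h, which prefixes every agent message with a port number; something along these lines, though the real header should be checked for the authoritative definition and packing attributes:

/* Recalled from spice-protocol's vd_agent.h -- verify before relying on it. */
#include <stdint.h>

typedef struct VDIChunkHeader {
    uint32_t port;     /* today VDP_CLIENT_PORT or VDP_SERVER_PORT; could be
                          extended to address one of several clients */
    uint32_t size;     /* payload size of this chunk */
} VDIChunkHeader;

typedef struct VDAgentMessage {
    uint32_t protocol;
    uint32_t type;
    uint64_t opaque;
    uint32_t size;
    uint8_t  data[0];
} VDAgentMessage;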
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 11:29 ` Alon Levy @ 2012-03-12 11:34 ` Gerd Hoffmann 2012-03-12 11:45 ` Alon Levy 0 siblings, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 11:34 UTC (permalink / raw) To: Hans de Goede, spice-devel@freedesktop.org, qemu-devel, Anthony Liguori On 03/12/12 12:29, Alon Levy wrote: > On Mon, Mar 12, 2012 at 11:26:50AM +0100, Gerd Hoffmann wrote: >> Hi, >> >>> Migrate this struct n times for me. >> >> I think for the agent case this isn't needed. Or is every client >> allowed to speak to the agent in case of multiple clients connected? I >> somehow doubt this can work as the agent protocol can't multicast ... >> > > Actually the agent protocol does extend nicely to multiple clients - I > forgot the name but there is an additional wrapper between the > client/server originating message and the guest received message, that > is currently used for server or client originating messages, and can be > reused to have multiple in flight different client messages. I think you'll have issues in the layer above though. Two spice clients doing cut+paste operations at the same time? Two spice clients requesting different screen resolutions? cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 11:34 ` Gerd Hoffmann @ 2012-03-12 11:45 ` Alon Levy 2012-03-12 12:44 ` Gerd Hoffmann 0 siblings, 1 reply; 26+ messages in thread From: Alon Levy @ 2012-03-12 11:45 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org On Mon, Mar 12, 2012 at 12:34:42PM +0100, Gerd Hoffmann wrote: > On 03/12/12 12:29, Alon Levy wrote: > > On Mon, Mar 12, 2012 at 11:26:50AM +0100, Gerd Hoffmann wrote: > >> Hi, > >> > >>> Migrate this struct n times for me. > >> > >> I think for the agent case this isn't needed. Or is every client > >> allowed to speak to the agent in case of multiple clients connected? I > >> somehow doubt this can work as the agent protocol can't multicast ... > >> > > > > Actually the agent protocol does extend nicely to multiple clients - I > > forgot the name but there is an additional wrapper between the > > client/server originating message and the guest received message, that > > is currently used for server or client originating messages, and can be > > reused to have multiple in flight different client messages. > > I think you'll have issues in the layer above though. Two spice clients > doing cut+paste operations at the same time? Two spice clients > requesting different screen resolutions? Yeah, you're right of course, this needs to be dealt with somehow. cut+paste: maps nicely to a number of different buffers. Would need some policy, and the session agent becomes closer to a buffer manager. resolutions: again policy, perhaps have a master client, or if none defined let the last or just the first choose. Not sure. But these issues don't need to be solved now, do they? > > cheers, > Gerd > > _______________________________________________ > Spice-devel mailing list > Spice-devel@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/spice-devel ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 11:45 ` Alon Levy @ 2012-03-12 12:44 ` Gerd Hoffmann 2012-03-12 14:24 ` Alon Levy 0 siblings, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 12:44 UTC (permalink / raw) To: Hans de Goede, spice-devel@freedesktop.org, qemu-devel, Anthony Liguori On 03/12/12 12:45, Alon Levy wrote: > On Mon, Mar 12, 2012 at 12:34:42PM +0100, Gerd Hoffmann wrote: >> On 03/12/12 12:29, Alon Levy wrote: >>> >>> Actually the agent protocol does extend nicely to multiple clients - I >>> forgot the name but there is an additional wrapper between the >>> client/server originating message and the guest received message, that >>> is currently used for server or client originating messages, and can be >>> reused to have multiple in flight different client messages. >> >> I think you'll have issues in the layer above though. Two spice clients >> doing cut+paste operations at the same time? Two spice clients >> requesting different screen resolutions? > > Yeah, you're right of course, this needs to be dealt with somehow. > cut+paste: maps nicely to a number of different buffers. Would need > some policy, and the session agent becomes closer to a buffer manager. > resolutions: again policy, perhaps have a master client, or if none > defined let the last or just the first choose. Not sure. > > But these issues don't need to be solved now, do they? Surely not. But better keep it in mind when figuring how to handle migration, so we are prepared to xfer all needed state in case we implement that some day. How does multi-client handle this today? cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 12:44 ` Gerd Hoffmann @ 2012-03-12 14:24 ` Alon Levy 2012-03-12 14:35 ` Alon Levy 0 siblings, 1 reply; 26+ messages in thread From: Alon Levy @ 2012-03-12 14:24 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org On Mon, Mar 12, 2012 at 01:44:47PM +0100, Gerd Hoffmann wrote: > On 03/12/12 12:45, Alon Levy wrote: > > On Mon, Mar 12, 2012 at 12:34:42PM +0100, Gerd Hoffmann wrote: > >> On 03/12/12 12:29, Alon Levy wrote: > >>> > >>> Actually the agent protocol does extend nicely to multiple clients - I > >>> forgot the name but there is an additional wrapper between the > >>> client/server originating message and the guest received message, that > >>> is currently used for server or client originating messages, and can be > >>> reused to have multiple in flight different client messages. > >> > >> I think you'll have issues in the layer above though. Two spice clients > >> doing cut+paste operations at the same time? Two spice clients > >> requesting different screen resolutions? > > > > Yeah, you're right of course, this needs to be dealt with somehow. > > cut+paste: maps nicely to a number of different buffers. Would need > > some policy, and the session agent becomes closer to a buffer manager. > > resolutions: again policy, perhaps have a master client, or if none > > defined let the last or just the first choose. Not sure. > > > > But these issues don't need to be solved now, do they? > > Surely not. But better keep it in mind when figuring how to handle > migration, so we are prepared to xfer all needed state in case we > implement that some day. > > How does multi-client handle this today? Just a single agent iirc. Or perhaps it breaks.. > > cheers, > Gerd > _______________________________________________ > Spice-devel mailing list > Spice-devel@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/spice-devel ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 14:24 ` Alon Levy @ 2012-03-12 14:35 ` Alon Levy 0 siblings, 0 replies; 26+ messages in thread From: Alon Levy @ 2012-03-12 14:35 UTC (permalink / raw) To: Gerd Hoffmann, Hans de Goede, spice-devel@freedesktop.org, qemu-devel, Anthony Liguori On Mon, Mar 12, 2012 at 04:24:16PM +0200, Alon Levy wrote: > On Mon, Mar 12, 2012 at 01:44:47PM +0100, Gerd Hoffmann wrote: > > On 03/12/12 12:45, Alon Levy wrote: > > > On Mon, Mar 12, 2012 at 12:34:42PM +0100, Gerd Hoffmann wrote: > > >> On 03/12/12 12:29, Alon Levy wrote: > > >>> > > >>> Actually the agent protocol does extend nicely to multiple clients - I > > >>> forgot the name but there is an additional wrapper between the > > >>> client/server originating message and the guest received message, that > > >>> is currently used for server or client originating messages, and can be > > >>> reused to have multiple in flight different client messages. > > >> > > >> I think you'll have issues in the layer above though. Two spice clients > > >> doing cut+paste operations at the same time? Two spice clients > > >> requesting different screen resolutions? > > > > > > Yeah, you're right of course, this needs to be dealt with somehow. > > > cut+paste: maps nicely to a number of different buffers. Would need > > > some policy, and the session agent becomes closer to a buffer manager. > > > resolutions: again policy, perhaps have a master client, or if none > > > defined let the last or just the first choose. Not sure. > > > > > > But these issues don't need to be solved now, do they? > > > > Surely not. But better keep it in mind when figuring how to handle > > migration, so we are prepared to xfer all needed state in case we > > implement that some day. > > > > How does multi-client handle this today? > > Just a single agent iirc. Or perhaps it breaks.. s/agent/client/, i.e. just one of the clients gets to have client mouse, c&p, resolution changes (well, all the rest get affected by the triggered resolution changes). IOW, left as a todo. > > > > > cheers, > > Gerd > > _______________________________________________ > > Spice-devel mailing list > > Spice-devel@lists.freedesktop.org > > http://lists.freedesktop.org/mailman/listinfo/spice-devel > ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 9:46 ` Gerd Hoffmann 2012-03-12 10:03 ` Alon Levy @ 2012-03-12 11:23 ` Hans de Goede 2012-03-12 12:21 ` Gerd Hoffmann 2012-03-12 12:47 ` Yonit Halperin 2 siblings, 1 reply; 26+ messages in thread From: Hans de Goede @ 2012-03-12 11:23 UTC (permalink / raw) To: Gerd Hoffmann; +Cc: spice-devel@freedesktop.org, qemu-devel, Anthony Liguori Hi, On 03/12/2012 10:46 AM, Gerd Hoffmann wrote: > Hi, > >> The problem with (b) is, that iirc the way b was implemented in the past >> was still the big blob approach, but then pass the blob through the client, >> which means an evil client could modify it, causing all sorts of >> "interesting" >> behavior inside spice-server. Since we're re-implementing this to me the >> send a blob through the client approach is simply not acceptable from a >> security pov, also see my previous mail in this thread. > > Agree. It should be a normal spice message which goes through the spice > marshaller for parsing& sanity checking. > >> I disagree. Note that there is more info to send over then just which >> surfaces / images are cached by the client. There also is things like >> partial complete agent channel messages, including how much bytes must >> be read >> to complete the command, etc. > > Is there a complete list of the session state we need to save? > There is still code in spice-server for the old seamless migration, someone could go through that and use that as an initial list of session state we need to save. >> IMHO (b) would only be acceptable if the data send through the client stops >> becoming a blob. > > Using (a) to send a blob isn't better. > It has the distinct advantage that we can trust the contents of the blob which makes life significantly easier IMHO. >> Instead the client could simply send a list of all >> surface ids, >> etc. which it has cached after it connects to / starts using the new >> host. Note >> that the old hosts needs to send nothing for this, this is info the >> client already >> has, also removing the need for synchronization. > > Yes, some session state is known to the client anyway so we don't need a > source<-> target channel for them. > >> As for certain other >> data, such >> as (but not limited to) partially parsed agent messages, these should be >> send through the regular vmstate methods IMHO. > > That isn't easy to handle via vmstate, at least as soon as this goes > beyond a fixed number of fields aka 'migrate over this struct for me'. > Think multiple spice clients connected at the same time. > >> 1) Do (a), sending everything that way >> 2) Do (a) sending non client state that way; and >> let the client send state like which surfaces it has cached >> when the new session starts. > > I think we have to look at each piece of state information needed by the > target and look how to handle this best. Agreed, so for starts someone needs to make a list of all session state we need to save. Regards, Hans ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 11:23 ` Hans de Goede @ 2012-03-12 12:21 ` Gerd Hoffmann 0 siblings, 0 replies; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 12:21 UTC (permalink / raw) To: Hans de Goede; +Cc: spice-devel@freedesktop.org, qemu-devel, Anthony Liguori Hi, >> Is there a complete list of the session state we need to save? >> > > There is still code in spice-server for the old seamless migration, > someone could go through that and use that as an initial list of > session state we need to save. That doesn't help much as it is _way_ too old. Predates surfaces & wan support, which need additional state. Predates smartcard and usb too. Also agent didn't have bulky stuff (cut+paste) back then, so chances are not bad that just not saving any agent state works in 99.99% of the cases just fine, so I'm not sure this is handled at all. Also some bits are not needed any more. Transferring the ticket has been offloaded to management. The QXL device handles some bits which used to be transferred with the spice-server state (mouse pointer shape). > Agreed, so for starts someone needs to make a list of all > session state we need to save. Here is what I'm aware of: * session id (needed when sending state via vmstate, I think we don't need it when sending state via spice client). * surfaces known to the client (can also be negotiated between client and target directly). * surface state (lossy vs. lossless quality if wan support is enabled). Dunno whether the client knows this. * glz compression dictionary state (not sure what exactly is transferred here and why). * vmchannel state (agent, smartcard, usb). agent is tricky because spice-server needs to maintain state there because of the message multiplexing. A fixed number of fields and maybe a VD_AGENT_MAX_DATA_SIZE-sized buffer could work for that though. smartcard + usb: This is just pass-through for spice-server, right? There shouldn't be anything to save, except maybe for stuff buffered in spice-server. Is there any? I mean really in spice-server, migrating spice-qemu-char.c buffers via vmstate is not a big issue. cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
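Pulling that list together, the per-client session state could be summarized in something like the following struct. It is purely illustrative: the field names and types are invented here, and only VD_AGENT_MAX_DATA_SIZE is a real spice-protocol constant.

/* Illustrative inventory of the session state listed above. */
#include <stdint.h>
#include <spice/vd_agent.h>   /* for VD_AGENT_MAX_DATA_SIZE */

typedef struct SpiceSessionMigData {
    uint32_t session_id;

    /* surfaces known to the client, plus lossy/lossless status when
     * wan support (lossy compression) is enabled */
    uint32_t num_surfaces;
    struct {
        uint32_t id;
        uint8_t  lossy;
    } *surfaces;

    /* glz compression dictionary state */
    uint64_t glz_dict_id;
    uint32_t glz_dict_window_size;

    /* agent channel: a partially read/written message */
    uint32_t agent_msg_remaining;
    uint32_t agent_msg_pos;
    uint8_t  agent_msg_buf[VD_AGENT_MAX_DATA_SIZE];

    /* smartcard/usb: nothing beyond buffered chardev data, which can go
     * through spice-qemu-char.c vmstate as noted in the mail above */
} SpiceSessionMigData;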
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 9:46 ` Gerd Hoffmann 2012-03-12 10:03 ` Alon Levy 2012-03-12 11:23 ` Hans de Goede @ 2012-03-12 12:47 ` Yonit Halperin 2012-03-12 13:50 ` Gerd Hoffmann 2 siblings, 1 reply; 26+ messages in thread From: Yonit Halperin @ 2012-03-12 12:47 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org On 03/12/2012 11:46 AM, Gerd Hoffmann wrote: > Hi, > >> The problem with (b) is, that iirc the way b was implemented in the past >> was still the big blob approach, but then pass the blob through the client, >> which means an evil client could modify it, causing all sorts of >> "interesting" >> behavior inside spice-server. Since we're re-implementing this to me the >> send a blob through the client approach is simply not acceptable from a >> security pov, also see my previous mail in this thread. > > Agree. It should be a normal spice message which goes through the spice > marshaller for parsing& sanity checking. > >> I disagree. Note that there is more info to send over then just which >> surfaces / images are cached by the client. There also is things like >> partial complete agent channel messages, including how much bytes must >> be read >> to complete the command, etc. > > Is there a complete list of the session state we need to save? > >> IMHO (b) would only be acceptable if the data send through the client stops >> becoming a blob. > > Using (a) to send a blob isn't better. > Gerd/Hans, Can you explain/exemplify, why sending data as a blob (either by (a) or (b)), that is verified only by the two ends that actually use it, is a problem? Lets say the client/qemu are completely aware of the migration data, what prevent it from harming it then? >> Instead the client could simply send a list of all >> surface ids, >> etc. which it has cached after it connects to / starts using the new >> host. Note >> that the old hosts needs to send nothing for this, this is info the >> client already >> has, also removing the need for synchronization. > > Yes, some session state is known to the client anyway so we don't need a > source<-> target channel for them. > >> As for certain other >> data, such >> as (but not limited to) partially parsed agent messages, these should be >> send through the regular vmstate methods IMHO. > > That isn't easy to handle via vmstate, at least as soon as this goes > beyond a fixed number of fields aka 'migrate over this struct for me'. > Think multiple spice clients connected at the same time. > >> 1) Do (a), sending everything that way >> 2) Do (a) sending non client state that way; and >> let the client send state like which surfaces it has cached >> when the new session starts. > > I think we have to look at each piece of state information needed by the > target and look how to handle this best. > > cheers, > Gerd > > _______________________________________________ > Spice-devel mailing list > Spice-devel@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/spice-devel ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 12:47 ` Yonit Halperin @ 2012-03-12 13:50 ` Gerd Hoffmann 2012-03-12 18:45 ` Yonit Halperin 0 siblings, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-12 13:50 UTC (permalink / raw) To: Yonit Halperin Cc: Anthony Liguori, Hans de Goede, qemu-devel, spice-devel@freedesktop.org Hi, > Can you explain/exemplify, why sending data as a blob (either by (a) or > (b)), that is verified only by the two ends that actually use it, is a > problem? It tends to be not very robust. Especially when the creating/parsing is done ad-hoc and the format changes now and then due to more info needing to be stored later on. The qemu migration format which has almost no structure breaks now and then because of that. Thus I'd prefer to not go down this route when creating something new. cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
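One concrete way to read "not very robust": a raw blob gives the receiver no way to tell an old layout from a new one. Even a minimal self-describing header, like the hypothetical one below, lets the receiver reject or adapt instead of silently misparsing; the struct name and fields here are invented for illustration only.

/* Hypothetical header; the point is that magic/version/size make format
 * changes detectable instead of silently misparsed. */
#include <stdint.h>

typedef struct SpiceMigrateBlobHeader {
    uint32_t magic;      /* identifies the payload as spice migration data */
    uint32_t version;    /* bumped on every layout change                  */
    uint32_t size;       /* total payload size, for bounds checking        */
} SpiceMigrateBlobHeader;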
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 13:50 ` Gerd Hoffmann @ 2012-03-12 18:45 ` Yonit Halperin 2012-03-13 6:40 ` Gerd Hoffmann 0 siblings, 1 reply; 26+ messages in thread From: Yonit Halperin @ 2012-03-12 18:45 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, Alon Levy, qemu-devel, spice-devel@freedesktop.org Hi, On 03/12/2012 03:50 PM, Gerd Hoffmann wrote: > Hi, > >> Can you explain/exemplify, why sending data as a blob (either by (a) or >> (b)), that is verified only by the two ends that actually use it, is a >> problem? > > It tends to be not very robust. Especially when the creating/parsing is > done ad-hoc and the format changes now and then due to more info needing > to be stored later on. The qemu migration format which has almost no > structure breaks now and then because of that. Thus I'd prefer to not > go down this route when creating something new. > > cheers, > Gerd Exposing spice server internals to the client/qemu seems to me more vulnerable than sending them as a blob. Nonetheless, it introduces more complexity to backward compatibility support, and it will need to involve not only the capabilities/versions of the server but also those of qemu/the client. Which reminds me that we also need capabilities negotiation for the migration protocol between the src and the destination. Regards, Yonit. ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 18:45 ` Yonit Halperin @ 2012-03-13 6:40 ` Gerd Hoffmann 2012-03-13 6:52 ` Yonit Halperin 0 siblings, 1 reply; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-13 6:40 UTC (permalink / raw) To: Yonit Halperin Cc: Anthony Liguori, Hans de Goede, Alon Levy, qemu-devel, spice-devel@freedesktop.org On 03/12/12 19:45, Yonit Halperin wrote: > Hi, > On 03/12/2012 03:50 PM, Gerd Hoffmann wrote: >> Hi, >> >>> Can you explain/exemplify, why sending data as a blob (either by (a) or >>> (b)), that is verified only by the two ends that actually use it, is a >>> problem? >> >> It tends to be not very robust. Especially when the creating/parsing is >> done ad-hoc and the format changes now and then due to more info needing >> to be stored later on. The qemu migration format which has almost no >> structure breaks now and then because of that. Thus I'd prefer to not >> go down this route when creating something new. >> >> cheers, >> Gerd > > Exposing spice server internals to the client/qemu seems to me more > vulnerable then sending it as a blob. That also depends on what and how much we need to transfer. > Nonetheless, it introduces more > complexity to backward compatibility support and it will need to involve > not only the capabilities/versions of the server but also those of the > qemu/client Backward compatibility isn't that easy both ways. >.Which reminds me, that we also need capabilities > negotiation for the migration protocol between the src and the destination. If this is a hard requirement then using the vmstate channel isn't going to work. The vmstate is a one-way channel, no way to negotiate anything between source and target. cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-13 6:40 ` Gerd Hoffmann @ 2012-03-13 6:52 ` Yonit Halperin 2012-03-13 7:40 ` Gerd Hoffmann 0 siblings, 1 reply; 26+ messages in thread From: Yonit Halperin @ 2012-03-13 6:52 UTC (permalink / raw) To: Gerd Hoffmann Cc: Anthony Liguori, Hans de Goede, Alon Levy, qemu-devel, spice-devel@freedesktop.org Hi, On 03/13/2012 08:40 AM, Gerd Hoffmann wrote: > On 03/12/12 19:45, Yonit Halperin wrote: >> Hi, >> On 03/12/2012 03:50 PM, Gerd Hoffmann wrote: >>> Hi, >>> >>>> Can you explain/exemplify, why sending data as a blob (either by (a) or >>>> (b)), that is verified only by the two ends that actually use it, is a >>>> problem? >>> >>> It tends to be not very robust. Especially when the creating/parsing is >>> done ad-hoc and the format changes now and then due to more info needing >>> to be stored later on. The qemu migration format which has almost no >>> structure breaks now and then because of that. Thus I'd prefer to not >>> go down this route when creating something new. >>> >>> cheers, >>> Gerd >> >> Exposing spice server internals to the client/qemu seems to me more >> vulnerable then sending it as a blob. > > That also depends on what and how much we need to transfer. > >> Nonetheless, it introduces more >> complexity to backward compatibility support and it will need to involve >> not only the capabilities/versions of the server but also those of the >> qemu/client > > Backward compatibility isn't that easy both ways. > It is not easy when you have 2 components, and it is much less easy when you have 3 or 4 components. So why make it more complicated if you can avoid it. Especially since there is no functional reason for making the qemu/client capabilities/versions dependent on the server internal data. >> .Which reminds me, that we also need capabilities >> negotiation for the migration protocol between the src and the destination. > > If this is a hard requirement then using the vmstate channel isn't going > to work. The vmstate is a one-way channel, no way to negotiate anything > between source and target. > We can do this via the client. Regards, Yonit. > cheers, > Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-13 6:52 ` Yonit Halperin @ 2012-03-13 7:40 ` Gerd Hoffmann 0 siblings, 0 replies; 26+ messages in thread From: Gerd Hoffmann @ 2012-03-13 7:40 UTC (permalink / raw) To: Yonit Halperin Cc: Anthony Liguori, Hans de Goede, Alon Levy, qemu-devel, spice-devel@freedesktop.org Hi, > It is not easy when you have 2 components, and it is much less easy when > you have 3 or 4 components. So why make it more complicated if you can > avoid it. Especially since there is no functional reason for making the > qemu/client capabilities/versions dependent on the server internal data. qemu has ways to handle compatibility in the vmstate format. We can use those capabilities. That of course requires exposing the structs to be saved to qemu and adds some complexity to the qemu <-> spice interface. What session state is needed by the target? What of this can be negotiated between client and target host without bothering the source? What needs to be transferred from source to target, either directly or via the client? >> If this is a hard requirement then using the vmstate channel isn't going >> to work. The vmstate is a one-way channel, no way to negotiate anything >> between source and target. >> > We can do this via the client. Then you can send the actual state via the client too. Out-of-band negotiation for the blob sent via vmstate scares me. Can we please start with a look at which state we actually have to send over? cheers, Gerd ^ permalink raw reply [flat|nested] 26+ messages in thread
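The compatibility handling Gerd alludes to is qemu's vmstate versioning: fields can be gated on a section version, so a stream from an older layout is still accepted or is rejected cleanly. A rough example with an invented struct; VMSTATE_UINT32 / VMSTATE_UINT32_V are real qemu macros, but exact usage varies by qemu version.

/* Invented struct; shows version-gated fields only. */
#include "hw/hw.h"

typedef struct SpiceMigCompatExample {
    uint32_t num_surfaces;
    uint32_t glz_window;      /* field added later, only sent for version >= 2 */
} SpiceMigCompatExample;

static const VMStateDescription vmstate_spice_mig_example = {
    .name = "spice/mig-example",
    .version_id = 2,
    .minimum_version_id = 1,      /* still accepts version 1 streams */
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(num_surfaces, SpiceMigCompatExample),
        VMSTATE_UINT32_V(glz_window, SpiceMigCompatExample, 2),
        VMSTATE_END_OF_LIST()
    }
};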
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-12 8:51 ` [Qemu-devel] [Spice-devel] " Hans de Goede 2012-03-12 9:46 ` Gerd Hoffmann @ 2012-03-12 11:39 ` David Jaša 1 sibling, 0 replies; 26+ messages in thread From: David Jaša @ 2012-03-12 11:39 UTC (permalink / raw) To: spice-devel@freedesktop.org, qemu-devel; +Cc: Anthony Liguori Hans de Goede píše v Po 12. 03. 2012 v 09:51 +0100: > Hi, > > On 03/12/2012 08:57 AM, Gerd Hoffmann wrote: > > Hi, > > > >>> What about the second part? it's independant of the async issue. > >> > >> Isn't this a client problem? The client has this state, no? > > > > It is state of the client<-> server session. Today spice client > > creates a new session on migration, so there is simply no need to > > maintain any state. Drawback is that everything needs to be resent from > > the server to the client. Thats why we want be able to continue the > > spice session, so the client caches will stay valid. > > > > Of course the spice-server on the migration target needs the session > > state for that, i.e. know for example which bits the client has cached > > which it hasn't. > > > >> If the state is stored in the server, wouldn't it be marshaled as part > >> of the server's migration state? > > > > spice-server is stateless today when it comes to migration. QXL handles > > all (device) state, by keeping track of some commands (such as > > create/destroy surface) which it needs to transfer on migration, and by > > asking spice-server to render all surfaces on migration, which > > effectively flushes the spice server state to qxl device memory. > > > > To transfer the client session state there are basically two options: > > > > (a) transfer it as part of the qemu migration data stream. I don't > > want have any details about the qemu migration implementation > > and/or protocol in the spice-server library api, which basically > > leaves a ugly "transfer-this-blob-for-me-please" style interface > > as only option. > > > > (b) transfer it as part of the spice protocol. As the spice > > client has a connection to both source and target while the > > migration runs we can send session state from the source host via > > spice client to the target host. This needs some form of > > synchronization, to make sure both vmstate and spice migration > > are completed when qemu on the source machine quits. > > The problem with (b) is, that iirc the way b was implemented in the past > was still the big blob approach, but then pass the blob through the client, > which means an evil client could modify it, causing all sorts of "interesting" > behavior inside spice-server. Since we're re-implementing this to me the > send a blob through the client approach is simply not acceptable from a > security pov, also see my previous mail in this thread. > In addition to security POV, it's also bad from network usage POV - while network connection among hosts is gigabit or better, client may be connected over high-latency low-bandwidth WAN. Sending any data through client makes absolutely no sense in such cases. David > > I think (b) is the better approach. > > I disagree. Note that there is more info to send over then just which > surfaces / images are cached by the client. There also is things like > partial complete agent channel messages, including how much bytes must be read > to complete the command, etc. > > IMHO (b) would only be acceptable if the data send through the client stops > becoming a blob. 
Instead the client could simply send a list of all surface ids, > etc. which it has cached after it connects to / starts using the new host. Note > that the old hosts needs to send nothing for this, this is info the client already > has, also removing the need for synchronization. As for certain other data, such > as (but not limited to) partially parsed agent messages, these should be > send through the regular vmstate methods IMHO. > > So I see 2 options > > 1) Do (a), sending everything that way > 2) Do (a) sending non client state that way; and > let the client send state like which surfaces it has cached > when the new session starts. > > Regards, > > Hans > _______________________________________________ > Spice-devel mailing list > Spice-devel@lists.freedesktop.org > http://lists.freedesktop.org/mailman/listinfo/spice-devel -- David Jaša, RHCE SPICE QE based in Brno GPG Key: 22C33E24 Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24 ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Qemu-devel] [Spice-devel] seamless migration with spice 2012-03-11 13:16 [Qemu-devel] seamless migration with spice Yonit Halperin 2012-03-11 14:18 ` Anthony Liguori @ 2012-03-12 8:42 ` Hans de Goede 1 sibling, 0 replies; 26+ messages in thread From: Hans de Goede @ 2012-03-12 8:42 UTC (permalink / raw) To: Yonit Halperin; +Cc: Anthony Liguori, qemu-devel, spice-devel@freedesktop.org Hi, On 03/11/2012 02:16 PM, Yonit Halperin wrote: > Hi, > > We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. > Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is being reset. > > We face 2 main challenges when coming to implement seamless migration: > <snip (1)> > (2) In order to restore the source-client spice session in the destination, we need to pass data from the source to the destination. > Example for such data: in flight copy paste data, in flight usb data > We want to pass the data from the source spice server to the destination, via Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice-server completes transferring the migration data to the client. I don't understand why we must transfer this via the client; we should transfer this info using established qemu migration technologies, and we should transfer it directly from the source to the dest. Passing this through the client means trusting the client, which is crazy from a security pov; the data passed is not always just data buffers, it often contains state info. And transferring this through the client means opening a whole window of injection vulnerabilities, which can simply be avoided by sending the data directly. I know this has been discussed before and I was not involved in that discussion due to -ENOTIME, sorry about that. But just as the solution proposed then for sending the data directly from source to dest was nacked by various qemu people, I nack the send-the-data-through-the-client solution. That one simply is not acceptable from a security pov. So we must re-think how we can send this data directly from source to dest, in a way which is acceptable in upstream qemu. Regards, Hans ^ permalink raw reply [flat|nested] 26+ messages in thread