* [Qemu-devel] [PATCH 0/2] Suspend (S3) support
@ 2011-06-20 11:11 Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 1/2] qxl: interface_get_command: fix reported mode Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support Alon Levy
0 siblings, 2 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 11:11 UTC (permalink / raw)
To: qemu-devel; +Cc: yhalperi, kraxel
The first patch is a slightly revised version of a patch sent before,
introducing a print helper (qxl_mode_to_string) that is also used by the
second patch, hence I'm sending them together. I've looked for additional
places to use qxl_mode_to_string, as Gerd asked before, and found just one.
The second patch is the one adding support for QXL_IO_UPDATE_MEM.
This is just part of the suspend support. It requires a revised spice-server
(to implement the update_mem io) and a revised driver (patches for the
Windows driver are on spice-devel; the Linux driver is still to be done).
Alon Levy (2):
qxl: interface_get_command: fix reported mode
qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
hw/qxl.c | 47 ++++++++++++++++++++++++++++++++++++++++++++---
1 files changed, 44 insertions(+), 3 deletions(-)
--
1.7.5.2
* [Qemu-devel] [PATCH 1/2] qxl: interface_get_command: fix reported mode
2011-06-20 11:11 [Qemu-devel] [PATCH 0/2] Suspend (S3) support Alon Levy
@ 2011-06-20 11:11 ` Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support Alon Levy
1 sibling, 0 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 11:11 UTC (permalink / raw)
To: qemu-devel; +Cc: yhalperi, kraxel
Report the correct mode when in undefined mode.
Introduces qxl_mode_to_string, and uses it in another place where it looks
helpful (qxl_post_load).
---
hw/qxl.c | 21 ++++++++++++++++++---
1 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/hw/qxl.c b/hw/qxl.c
index 1906e84..ca5c8b3 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -336,6 +336,21 @@ static void interface_get_init_info(QXLInstance *sin, QXLDevInitInfo *info)
info->n_surfaces = NUM_SURFACES;
}
+static const char *qxl_mode_to_string(int mode)
+{
+ switch (mode) {
+ case QXL_MODE_COMPAT:
+ return "compat";
+ case QXL_MODE_NATIVE:
+ return "native";
+ case QXL_MODE_UNDEFINED:
+ return "undefined";
+ case QXL_MODE_VGA:
+ return "vga";
+ }
+ return "unknown";
+}
+
/* called from spice server thread context only */
static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext)
{
@@ -364,8 +379,7 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext)
case QXL_MODE_COMPAT:
case QXL_MODE_NATIVE:
case QXL_MODE_UNDEFINED:
- dprint(qxl, 2, "%s: %s\n", __FUNCTION__,
- qxl->cmdflags ? "compat" : "native");
+ dprint(qxl, 2, "%s: %s\n", __FUNCTION__, qxl_mode_to_string(qxl->mode));
ring = &qxl->ram->cmd_ring;
if (SPICE_RING_IS_EMPTY(ring)) {
return false;
@@ -1378,7 +1392,8 @@ static int qxl_post_load(void *opaque, int version)
d->modes = (QXLModes*)((uint8_t*)d->rom + d->rom->modes_offset);
- dprint(d, 1, "%s: restore mode\n", __FUNCTION__);
+ dprint(d, 1, "%s: restore mode (%s)\n", __FUNCTION__,
+ qxl_mode_to_string(d->mode));
newmode = d->mode;
d->mode = QXL_MODE_UNDEFINED;
switch (newmode) {
--
1.7.5.2
* [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 11:11 [Qemu-devel] [PATCH 0/2] Suspend (S3) support Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 1/2] qxl: interface_get_command: fix reported mode Alon Levy
@ 2011-06-20 11:11 ` Alon Levy
2011-06-20 12:13 ` Gerd Hoffmann
1 sibling, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-20 11:11 UTC (permalink / raw)
To: qemu-devel; +Cc: yhalperi, kraxel
Add QXL_IO_UPDATE_MEM. It is used to reduce the number of vmexits from the
guest when it resets the spice server state before going to sleep.
The implementation requires an updated spice-server (0.8.2) with the new
worker function update_mem.
Cc: Yonit Halperin <yhalperi@redhat.com>
---
hw/qxl.c | 26 ++++++++++++++++++++++++++
1 files changed, 26 insertions(+), 0 deletions(-)
diff --git a/hw/qxl.c b/hw/qxl.c
index ca5c8b3..4945d95 100644
--- a/hw/qxl.c
+++ b/hw/qxl.c
@@ -1016,6 +1016,32 @@ static void ioport_write(void *opaque, uint32_t addr, uint32_t val)
case QXL_IO_DESTROY_ALL_SURFACES:
d->ssd.worker->destroy_surfaces(d->ssd.worker);
break;
+ case QXL_IO_UPDATE_MEM:
+ dprint(d, 1, "QXL_IO_UPDATE_MEM (%d) entry (%s, s#=%d, res#=%d)\n",
+ val, qxl_mode_to_string(d->mode), d->guest_surfaces.count,
+ d->num_free_res);
+ switch (val) {
+ case (QXL_UPDATE_MEM_RENDER_ALL):
+ d->ssd.worker->update_mem(d->ssd.worker);
+ break;
+ case (QXL_UPDATE_MEM_FLUSH): {
+ QXLReleaseRing *ring = &d->ram->release_ring;
+ if (ring->prod - ring->cons + 1 == ring->num_items) {
+ // TODO - "return" a value to the guest and let it loop?
+ fprintf(stderr,
+ "ERROR: no flush, full release ring [p%d,%dc]\n",
+ ring->prod, ring->cons);
+ }
+ qxl_push_free_res(d, 1 /* flush */);
+ dprint(d, 1, "QXL_IO_UPDATE_MEM exit (%s, s#=%d, res#=%d,%p)\n",
+ qxl_mode_to_string(d->mode), d->guest_surfaces.count,
+ d->num_free_res, d->last_release);
+ break;
+ }
+ default:
+ fprintf(stderr, "ERROR: unexpected value to QXL_IO_UPDATE_MEM\n");
+ }
+ break;
default:
fprintf(stderr, "%s: ioport=0x%x, abort()\n", __FUNCTION__, io_port);
abort();
--
1.7.5.2
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 11:11 ` [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support Alon Levy
@ 2011-06-20 12:13 ` Gerd Hoffmann
2011-06-20 12:57 ` Alon Levy
2011-06-20 12:58 ` Alon Levy
0 siblings, 2 replies; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-20 12:13 UTC (permalink / raw)
To: Alon Levy; +Cc: yhalperi, qemu-devel
Hi,
> + case QXL_IO_UPDATE_MEM:
> + switch (val) {
> + case (QXL_UPDATE_MEM_RENDER_ALL):
> + d->ssd.worker->update_mem(d->ssd.worker);
> + break;
What is the difference to one worker->stop() + worker->start() cycle?
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 12:13 ` Gerd Hoffmann
@ 2011-06-20 12:57 ` Alon Levy
2011-06-20 12:58 ` Alon Levy
1 sibling, 0 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 12:57 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Mon, Jun 20, 2011 at 02:13:36PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >+ case QXL_IO_UPDATE_MEM:
> >+ switch (val) {
> >+ case (QXL_UPDATE_MEM_RENDER_ALL):
> >+ d->ssd.worker->update_mem(d->ssd.worker);
> >+ break;
>
> What is the difference to one worker->stop() + worker->start() cycle?
This won't disconnect any clients.
>
> cheers,
> Gerd
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 12:13 ` Gerd Hoffmann
2011-06-20 12:57 ` Alon Levy
@ 2011-06-20 12:58 ` Alon Levy
2011-06-20 14:07 ` Gerd Hoffmann
1 sibling, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-20 12:58 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Mon, Jun 20, 2011 at 02:13:36PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >+ case QXL_IO_UPDATE_MEM:
> >+ switch (val) {
> >+ case (QXL_UPDATE_MEM_RENDER_ALL):
> >+ d->ssd.worker->update_mem(d->ssd.worker);
> >+ break;
>
> What is the difference to one worker->stop() + worker->start() cycle?
>
OK, stop+start won't disconnect any clients either. But does stop render all waiting commands?
I'll have to look; I don't know if it does.
> cheers,
> Gerd
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 12:58 ` Alon Levy
@ 2011-06-20 14:07 ` Gerd Hoffmann
2011-06-20 15:11 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-20 14:07 UTC (permalink / raw)
To: qemu-devel, yhalperi
>> What is the difference to one worker->stop() + worker->start() cycle?
>>
>
> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> I'll have to look, I don't know if it does.
It does. This is what qemu uses to flush all spice server state to
device memory on migration.
What is the reason for deleting all surfaces?
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 14:07 ` Gerd Hoffmann
@ 2011-06-20 15:11 ` Alon Levy
2011-06-20 15:48 ` Alon Levy
2011-06-20 15:50 ` Gerd Hoffmann
0 siblings, 2 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 15:11 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> >>What is the difference to one worker->stop() + worker->start() cycle?
> >>
> >
> >ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> >I'll have to look, I don't know if it does.
>
> It does. This is what qemu uses to flush all spice server state to
> device memory on migration.
>
> What is the reason for deleting all surfaces?
Making sure all references to PCI memory in devram are dropped. We would need
to recreate all the surfaces after reset anyway.
>
> cheers,
> Gerd
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 15:11 ` Alon Levy
@ 2011-06-20 15:48 ` Alon Levy
2011-06-20 15:50 ` Gerd Hoffmann
1 sibling, 0 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 15:48 UTC (permalink / raw)
To: Gerd Hoffmann, qemu-devel, yhalperi
On Mon, Jun 20, 2011 at 05:11:07PM +0200, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> > >>What is the difference to one worker->stop() + worker->start() cycle?
> > >>
> > >
> > >ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> > >I'll have to look, I don't know if it does.
> >
> > It does. This is what qemu uses to flush all spice server state to
> > device memory on migration.
> >
> > What is the reason for deleting all surfaces?
>
> Making sure all references are dropped to pci memory in devram. We would need to recreate all
> the surfaces after reset anyway.
That's not right. The reason is that in the Windows driver I don't know
whether this is a resolution change or a suspend, so it was easier to destroy
all the surfaces and make the two cases equal: before going to sleep / leaving
the current resolution I destroy all the surfaces, and when coming back I
recreate them. If it's a resolution change there is no coming-back stage, but
since all surfaces were destroyed there is no error when the same surface ids
are reused.
>
> >
> > cheers,
> > Gerd
> >
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 15:11 ` Alon Levy
2011-06-20 15:48 ` Alon Levy
@ 2011-06-20 15:50 ` Gerd Hoffmann
2011-06-20 16:32 ` Alon Levy
1 sibling, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-20 15:50 UTC (permalink / raw)
To: qemu-devel, yhalperi
On 06/20/11 17:11, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
>>>> What is the difference to one worker->stop() + worker->start() cycle?
>>>>
>>>
>>> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
>>> I'll have to look, I don't know if it does.
>>
>> It does. This is what qemu uses to flush all spice server state to
>> device memory on migration.
>>
>> What is the reason for deleting all surfaces?
>
> Making sure all references are dropped to pci memory in devram.
Ah, because the spice server keeps a reference to the create command
until the surface is destroyed, right?
There is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
I also think we don't need to extend the libspice-server API.
We can add an I/O command which renders everything to device memory via
stop+start. We can zap all surfaces with the existing command + worker
call. We can add an I/O command to ask qxl to push the release queue
head to the release ring.
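For illustration only, the two new commands could be wired into hw/qxl.c's
ioport_write roughly like this (QXL_IO_FLUSH_SURFACES and QXL_IO_FLUSH_RELEASE
are placeholder names, not existing definitions; the stop+start flush is just
a sketch):

    case QXL_IO_FLUSH_SURFACES:
        /* stop() renders all pending commands and surfaces into device
         * memory, start() resumes the worker afterwards */
        d->ssd.worker->stop(d->ssd.worker);
        d->ssd.worker->start(d->ssd.worker);
        break;
    case QXL_IO_FLUSH_RELEASE:
        /* push the current release queue head onto the release ring,
         * reusing the helper the UPDATE_MEM patch already calls */
        qxl_push_free_res(d, 1 /* flush */);
        break;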
Comments?
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 15:50 ` Gerd Hoffmann
@ 2011-06-20 16:32 ` Alon Levy
2011-06-20 20:53 ` Alon Levy
` (2 more replies)
0 siblings, 3 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 16:32 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Mon, Jun 20, 2011 at 05:50:32PM +0200, Gerd Hoffmann wrote:
> On 06/20/11 17:11, Alon Levy wrote:
> >On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> >>>>What is the difference to one worker->stop() + worker->start() cycle?
> >>>>
> >>>
> >>>ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> >>>I'll have to look, I don't know if it does.
> >>
> >>It does. This is what qemu uses to flush all spice server state to
> >>device memory on migration.
> >>
> >>What is the reason for deleting all surfaces?
> >
> >Making sure all references are dropped to pci memory in devram.
>
> Ah, because the spice server keeps a reference to the create command
> until the surface is destroyed, right?
Actually right, so my correction stands corrected.
>
> There is is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
>
Regarding QXL_IO_DESTROY_ALL_SURFACES, it destroys the primary surface too,
which is a little special; that's another difference - update_mem destroys
everything except the primary. I know I tried to destroy the primary but it
didn't work right; I don't recall why right now, so I guess I'll have to retry.
> The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
> I also think we don't need to extend the libspice-server API.
>
> We can add a I/O command which renders everything to device memory
> via stop+start. We can zap all surfaces with the existing command +
Yes, stop+start works nicely; I didn't realize that (I saw it before but
assumed it wouldn't be good enough). We just need to destroy the surfaces too.
> worker call. We can add a I/O command to ask qxl to push the
> release queue head to the release ring.
So you suggest replacing QXL_IO_UPDATE_MEM with what - two I/O commands
instead of using the val parameter?
QXL_IO_UPDATE_MEM
QXL_IO_FLUSH_RELEASE
?
>
> Comments?
>
> cheers,
> Gerd
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 16:32 ` Alon Levy
@ 2011-06-20 20:53 ` Alon Levy
2011-06-21 6:29 ` Yonit Halperin
2011-06-22 9:13 ` Gerd Hoffmann
2 siblings, 0 replies; 31+ messages in thread
From: Alon Levy @ 2011-06-20 20:53 UTC (permalink / raw)
To: Gerd Hoffmann, qemu-devel, yhalperi
On Mon, Jun 20, 2011 at 06:32:30PM +0200, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 05:50:32PM +0200, Gerd Hoffmann wrote:
> > On 06/20/11 17:11, Alon Levy wrote:
> > >On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
> > >>>>What is the difference to one worker->stop() + worker->start() cycle?
> > >>>>
> > >>>
> > >>>ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
> > >>>I'll have to look, I don't know if it does.
> > >>
> > >>It does. This is what qemu uses to flush all spice server state to
> > >>device memory on migration.
> > >>
> > >>What is the reason for deleting all surfaces?
> > >
> > >Making sure all references are dropped to pci memory in devram.
> >
> > Ah, because the spice server keeps a reference to the create command
> > until the surface is destroyed, right?
>
> Actually right, so my correction stands corrected.
>
> >
> > There is is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
> >
>
> Regarding QXL_IO_DESTROY_ALL_SURFACES, it destroys the primary surface too,
> which is a little special, that's another difference - update_mem destroys
> everything except the primary. I know I tried to destroy the primary but it
> didn't work right, don't recall why right now, so I guess I'll have to retry.
>
> > The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
> > I also think we don't need to extend the libspice-server API.
> >
> > We can add a I/O command which renders everything to device memory
> > via stop+start. We can zap all surfaces with the existing command +
> Yes, start+stop work nicely, didn't realize (saw it before, assumed
> it wouldn't be good enough), just need to destroy the surfaces too.
>
OK, it all works nicely, except that with the current driver patches I get a
double destroy of the primary surface. Removing it with the following patch
makes everything (resolution change/suspend/hibernate) work. I would really
suggest we remove that PANIC_ON, besides of course fixing the driver patches
(I'll do a v2 for the affected patch, the last qxl series; I didn't cc you
since I didn't assume you'd want to review it, but you probably saw it).
Something like:
diff --git a/server/red_worker.c b/server/red_worker.c
index f0a8dfc..3b53a3f 100644
--- a/server/red_worker.c
+++ b/server/red_worker.c
@@ -9684,7 +9684,11 @@ static inline void handle_dev_destroy_primary_surface(RedWorker *worker)
receive_data(worker->channel, &surface_id, sizeof(uint32_t));
PANIC_ON(surface_id != 0);
- PANIC_ON(!worker->surfaces[surface_id].context.canvas);
+
+ if (!worker->surfaces[surface_id].context.canvas) {
+ red_printf("warning: double destroy of primary surface\n");
+ goto end;
+ }
if (worker->cursor) {
red_release_cursor(worker, worker->cursor);
@@ -9711,6 +9715,7 @@ static inline void handle_dev_destroy_primary_surface(RedWorker *worker)
worker->cursor_position.x = worker->cursor_position.y = 0;
worker->cursor_trail_length = worker->cursor_trail_frequency = 0;
+end:
message = RED_WORKER_MESSAGE_READY;
write_message(worker->channel, &message);
}
> > worker call. We can add a I/O command to ask qxl to push the
> > release queue head to the release ring.
>
> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> of using the val parameter?
> QXL_IO_UPDATE_MEM
> QXL_IO_FLUSH_RELEASE
> ?
>
> >
> > Comments?
> >
> > cheers,
> > Gerd
> >
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 16:32 ` Alon Levy
2011-06-20 20:53 ` Alon Levy
@ 2011-06-21 6:29 ` Yonit Halperin
2011-06-22 9:13 ` Gerd Hoffmann
2 siblings, 0 replies; 31+ messages in thread
From: Yonit Halperin @ 2011-06-21 6:29 UTC (permalink / raw)
To: Gerd Hoffmann, qemu-devel, Alon Levy
On 06/20/2011 07:32 PM, Alon Levy wrote:
> On Mon, Jun 20, 2011 at 05:50:32PM +0200, Gerd Hoffmann wrote:
>> On 06/20/11 17:11, Alon Levy wrote:
>>> On Mon, Jun 20, 2011 at 04:07:59PM +0200, Gerd Hoffmann wrote:
>>>>>> What is the difference to one worker->stop() + worker->start() cycle?
>>>>>>
>>>>>
>>>>> ok, stop+start won't disconnect any clients either. But does stop render all waiting commands?
>>>>> I'll have to look, I don't know if it does.
>>>>
>>>> It does. This is what qemu uses to flush all spice server state to
>>>> device memory on migration.
I don't see that STOP flushes the command rings. We must read and empty
the command rings before going to S3.
>>>>
>>>> What is the reason for deleting all surfaces?
>>>
>>> Making sure all references are dropped to pci memory in devram.
>>
>> Ah, because the spice server keeps a reference to the create command
>> until the surface is destroyed, right?
>
> Actually right, so my correction stands corrected.
>
>>
>> There is is QXL_IO_DESTROY_ALL_SURFACES + worker->destroy_surfaces() ...
>>
>
> Regarding QXL_IO_DESTROY_ALL_SURFACES, it destroys the primary surface too,
> which is a little special, that's another difference - update_mem destroys
> everything except the primary. I know I tried to destroy the primary but it
> didn't work right, don't recall why right now, so I guess I'll have to retry.
>
I guess it is because DrvAssertMode(disable) destroyed the primary
surface in a separate call. I don't think we need to separate the calls
any more (we probably needed it when we thought S3 and resolution
changes would have different paths).
>> The QXL_IO_UPDATE_MEM command does too much special stuff IMHO.
>> I also think we don't need to extend the libspice-server API.
>>
>> We can add a I/O command which renders everything to device memory
>> via stop+start. We can zap all surfaces with the existing command +
> Yes, start+stop work nicely, didn't realize (saw it before, assumed
> it wouldn't be good enough), just need to destroy the surfaces too.
>
>> worker call. We can add a I/O command to ask qxl to push the
>> release queue head to the release ring.
>
> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> of using the val parameter?
> QXL_IO_UPDATE_MEM
> QXL_IO_FLUSH_RELEASE
> ?
>
>>
>> Comments?
>>
>> cheers,
>> Gerd
>>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-20 16:32 ` Alon Levy
2011-06-20 20:53 ` Alon Levy
2011-06-21 6:29 ` Yonit Halperin
@ 2011-06-22 9:13 ` Gerd Hoffmann
2011-06-22 9:57 ` Alon Levy
2 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-22 9:13 UTC (permalink / raw)
To: qemu-devel, yhalperi
Hi,
>> worker call. We can add a I/O command to ask qxl to push the
>> release queue head to the release ring.
>
> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> of using the val parameter?
I'd like to (a) avoid updating the libspice-server API if possible and
(b) have one I/O command for each logical step. So going into S3 could
look like this:
(0) stop putting new commands into the rings
(1) QXL_IO_NOTIFY_CMD
(2) QXL_IO_NOTIFY_CURSOR
qxl calls notify(), to make the worker thread empty the command
rings before it processes the next dispatcher request.
(3) QXL_IO_FLUSH_SURFACES (to be implemented)
qxl calls stop()+start(), spice-server renders all surfaces,
thereby flushing state to device memory.
(4) QXL_IO_DESTROY_ALL_SURFACES
zap surfaces
(5) QXL_IO_FLUSH_RELEASE (to be implemented)
push release queue head into the release ring, so the guest
will see it and can release everything.
(1)+(2)+(4) exist already.
(3)+(5) can be done without libspice-server changes.
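From the guest side the whole sequence is then a handful of port writes.
A hypothetical sketch (qxl_io_write and QXLDriverState are made-up stand-ins
for the driver's port-write primitive and device state; the two FLUSH ports
are the not-yet-implemented ones from above):

    /* hypothetical driver-side suspend path following steps (0)-(5) */
    static void qxl_prepare_sleep(QXLDriverState *qxl)
    {
        qxl->commands_stopped = 1;                       /* (0) */
        qxl_io_write(qxl, QXL_IO_NOTIFY_CMD);            /* (1) */
        qxl_io_write(qxl, QXL_IO_NOTIFY_CURSOR);         /* (2) */
        qxl_io_write(qxl, QXL_IO_FLUSH_SURFACES);        /* (3) */
        qxl_io_write(qxl, QXL_IO_DESTROY_ALL_SURFACES);  /* (4) */
        qxl_io_write(qxl, QXL_IO_FLUSH_RELEASE);         /* (5) */
        /* the guest can now walk the release ring and free everything */
    }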
Looks workable?
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-22 9:13 ` Gerd Hoffmann
@ 2011-06-22 9:57 ` Alon Levy
2011-06-26 16:59 ` Yonit Halperin
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-22 9:57 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >>worker call. We can add a I/O command to ask qxl to push the
> >>release queue head to the release ring.
> >
> >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >of using the val parameter?
>
> I'd like to (a) avoid updating the libspice-server API if possible
> and (b) have one I/O command for each logical step. So going into
> S3 could look like this:
>
> (0) stop putting new commands into the rings
> (1) QXL_IO_NOTIFY_CMD
> (2) QXL_IO_NOTIFY_CURSOR
> qxl calls notify(), to make the worker thread empty the command
> rings before it processes the next dispatcher request.
> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> qxl calls stop()+start(), spice-server renders all surfaces,
> thereby flushing state to device memory.
> (4) QXL_IO_DESTROY_ALL_SURFACES
> zap surfaces
> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> push release queue head into the release ring, so the guest
> will see it and can release everything.
>
> (1)+(2)+(4) exist already.
> (3)+(5) can be done without libspice-server changes.
>
> Looks workable?
yeah. Yonit?
>
> cheers,
> Gerd
>
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-22 9:57 ` Alon Levy
@ 2011-06-26 16:59 ` Yonit Halperin
2011-06-26 17:47 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Yonit Halperin @ 2011-06-26 16:59 UTC (permalink / raw)
To: Alon Levy; +Cc: qemu-devel, Gerd Hoffmann
Sorry for the late response; I wasn't available.
I'm afraid that (1) and (2) will indeed wake up the worker, but will not ensure the command ring is emptied, as that depends on the client pipe size.
----- Original Message -----
From: "Alon Levy" <alevy@redhat.com>
To: "Gerd Hoffmann" <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
Sent: Wednesday, June 22, 2011 11:57:54 AM
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >>worker call. We can add a I/O command to ask qxl to push the
> >>release queue head to the release ring.
> >
> >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >of using the val parameter?
>
> I'd like to (a) avoid updating the libspice-server API if possible
> and (b) have one I/O command for each logical step. So going into
> S3 could look like this:
>
> (0) stop putting new commands into the rings
> (1) QXL_IO_NOTIFY_CMD
> (2) QXL_IO_NOTIFY_CURSOR
> qxl calls notify(), to make the worker thread empty the command
> rings before it processes the next dispatcher request.
> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> qxl calls stop()+start(), spice-server renders all surfaces,
> thereby flushing state to device memory.
> (4) QXL_IO_DESTROY_ALL_SURFACES
> zap surfaces
> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> push release queue head into the release ring, so the guest
> will see it and can release everything.
>
> (1)+(2)+(4) exist already.
> (3)+(5) can be done without libspice-server changes.
>
> Looks workable?
yeah. Yonit?
>
> cheers,
> Gerd
>
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-26 16:59 ` Yonit Halperin
@ 2011-06-26 17:47 ` Alon Levy
2011-06-27 6:28 ` yhalperi
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-26 17:47 UTC (permalink / raw)
To: Yonit Halperin; +Cc: qemu-devel, Gerd Hoffmann
On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> Sorry for the late response, wasn't available.
> I'm afraid that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
>
I actually can't figure out what wakeup does (it's what both NOTIFY and
NOTIFY_CURSOR do, see hw/qxl.c). What we did in prepare_to_sleep before was
call flush_all_qxl_commands, which reads the command and cursor rings until
they are empty (calling flush_cursor_commands and flush_display_commands),
waiting whenever the client pipe is too large. (Avoiding this wait still
needs to be done, but only after ensuring suspend is correct.)
More to the point, this flush is done from handle_dev_destroy_surfaces, but
that is not good enough, since it also destroys the surfaces before we have
a chance to actually render them.
Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
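Roughly, that flush has the following shape (stand-in names, not the literal
red_worker.c source):

    /* sketch: drain one command ring, throttling on the client pipe */
    static void flush_commands(RedWorker *worker, Ring *ring)
    {
        for (;;) {
            /* render ring items until the ring is empty or the client
             * pipe grows past the allowed size */
            process_commands(worker, ring, MAX_PIPE_SIZE);
            if (ring_is_empty(ring)) {
                break;
            }
            /* pipe too large: wait for the client to drain it, retry */
            wait_for_outgoing_pipe(worker);
        }
    }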
>
> ----- Original Message -----
> From: "Alon Levy" <alevy@redhat.com>
> To: "Gerd Hoffmann" <kraxel@redhat.com>
> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> Sent: Wednesday, June 22, 2011 11:57:54 AM
> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
>
> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> > Hi,
> >
> > >>worker call. We can add a I/O command to ask qxl to push the
> > >>release queue head to the release ring.
> > >
> > >So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> > >of using the val parameter?
> >
> > I'd like to (a) avoid updating the libspice-server API if possible
> > and (b) have one I/O command for each logical step. So going into
> > S3 could look like this:
> >
> > (0) stop putting new commands into the rings
> > (1) QXL_IO_NOTIFY_CMD
> > (2) QXL_IO_NOTIFY_CURSOR
> > qxl calls notify(), to make the worker thread empty the command
> > rings before it processes the next dispatcher request.
> > (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> > qxl calls stop()+start(), spice-server renders all surfaces,
> > thereby flushing state to device memory.
> > (4) QXL_IO_DESTROY_ALL_SURFACES
> > zap surfaces
> > (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> > push release queue head into the release ring, so the guest
> > will see it and can release everything.
> >
> > (1)+(2)+(4) exist already.
> > (3)+(5) can be done without libspice-server changes.
> >
> > Looks workable?
>
> yeah. Yonit?
>
> >
> > cheers,
> > Gerd
> >
> >
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-26 17:47 ` Alon Levy
@ 2011-06-27 6:28 ` yhalperi
2011-06-27 8:16 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: yhalperi @ 2011-06-27 6:28 UTC (permalink / raw)
To: qemu-devel, Gerd Hoffmann
On 06/26/2011 08:47 PM, Alon Levy wrote:
> On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
>> Sorry for the late response, wasn't available.
>> I'm afraid that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
>>
>
> I actually can't figure out what wakeup does (that's what both NOTIFY and
> NOTIFY_CURSOR do, see hw/qxl.c).
It turns on an event the worker is waiting for on epoll_wait.
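(Generically the mechanism looks like this; the fd names are illustrative,
not the actual red_worker.c variables:)

    /* dispatcher side: wake the worker out of epoll_wait */
    uint8_t one = 1;
    write(worker_wakeup_fd, &one, sizeof(one));

    /* worker side: the loop the wakeup lands in */
    struct epoll_event ev[16];
    int n = epoll_wait(worker_epoll_fd, ev, 16, -1 /* block */);
    /* the worker then reads commands, but only up to its client-pipe
     * budget - which is why NOTIFY alone does not guarantee an empty
     * ring */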
> What we did in prepare_to_sleep before was
> call flush_all_qxl_commands, which reads the command and cursor rings until
> they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> whenever the pipe is too large. (avoiding this wait still needs to be done, but
> after ensuring suspend is correct).
>
> More to the point this flush is done from handle_dev_destroy_surfaces, but
> this is not good enough since this also destroys the surfaces, before we have
> a chance to actually render the surfaces.
>
> Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
>
We can do it as long as it doesn't affect migration - does STOP happen
before or after savevm? If it happens after, we can't touch the command
ring, i.e., we can't call flush. And even if it happens before, do we
really want to call flush during migration and presumably slow it down?
Cheers,
Yonit.
>>
>> ----- Original Message -----
>> From: "Alon Levy"<alevy@redhat.com>
>> To: "Gerd Hoffmann"<kraxel@redhat.com>
>> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
>> Sent: Wednesday, June 22, 2011 11:57:54 AM
>> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
>>
>> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>>> Hi,
>>>
>>>>> worker call. We can add a I/O command to ask qxl to push the
>>>>> release queue head to the release ring.
>>>>
>>>> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
>>>> of using the val parameter?
>>>
>>> I'd like to (a) avoid updating the libspice-server API if possible
>>> and (b) have one I/O command for each logical step. So going into
>>> S3 could look like this:
>>>
>>> (0) stop putting new commands into the rings
>>> (1) QXL_IO_NOTIFY_CMD
>>> (2) QXL_IO_NOTIFY_CURSOR
>>> qxl calls notify(), to make the worker thread empty the command
>>> rings before it processes the next dispatcher request.
>>> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>>> qxl calls stop()+start(), spice-server renders all surfaces,
>>> thereby flushing state to device memory.
>>> (4) QXL_IO_DESTROY_ALL_SURFACES
>>> zap surfaces
>>> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>>> push release queue head into the release ring, so the guest
>>> will see it and can release everything.
>>>
>>> (1)+(2)+(4) exist already.
>>> (3)+(5) can be done without libspice-server changes.
>>>
>>> Looks workable?
>>
>> yeah. Yonit?
>>
>>>
>>> cheers,
>>> Gerd
>>>
>>>
>>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-27 6:28 ` yhalperi
@ 2011-06-27 8:16 ` Alon Levy
2011-06-27 8:25 ` yhalperi
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-27 8:16 UTC (permalink / raw)
To: yhalperi; +Cc: qemu-devel, Gerd Hoffmann
On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
> On 06/26/2011 08:47 PM, Alon Levy wrote:
> >On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> >>Sorry for the late response, wasn't available.
> >>I'm afraid that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
> >>
> >
> >I actually can't figure out what wakeup does (that's what both NOTIFY and
> >NOTIFY_CURSOR do, see hw/qxl.c).
> It turns on an event the worker is waiting for on epoll_wait.
Ah, ok. So it will only read up to the pipe size like you said.
>
> What we did in prepare_to_sleep before was
> >call flush_all_qxl_commands, which reads the command and cursor rings until
> >they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> >whenever the pipe is too large. (avoiding this wait still needs to be done, but
> >after ensuring suspend is correct).
> >
> >More to the point this flush is done from handle_dev_destroy_surfaces, but
> >this is not good enough since this also destroys the surfaces, before we have
> >a chance to actually render the surfaces.
> >
> >Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
> >
> We can do it as long as it doesn't affect migration - does STOP
> happens before or after savevm? If it happens after - we can't touch
> the command ring, i.e., we can't call flush. And even if it happens
> before - do we really want to call flush during migration and
> presumably slow it down?
But if we don't, then a client connected before migration doesn't get some
of the messages it was supposed to get.
stop is called before hw/qxl.c:qxl_pre_save, from
ui/spice-display.c:qemu_spice_vm_change_state_handler
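(That handler is essentially the following - quoted from memory, so treat
it as a sketch rather than the exact ui/spice-display.c code:)

    static void qemu_spice_vm_change_state_handler(void *opaque, int running,
                                                   int reason)
    {
        SimpleSpiceDisplay *ssd = opaque;

        if (running) {
            ssd->worker->start(ssd->worker);
        } else {
            /* stop() makes spice-server render everything to device
             * memory, so qxl_pre_save sees consistent state */
            ssd->worker->stop(ssd->worker);
        }
    }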
>
> Cheers,
> Yonit.
>
> >>
> >>----- Original Message -----
> >>From: "Alon Levy"<alevy@redhat.com>
> >>To: "Gerd Hoffmann"<kraxel@redhat.com>
> >>Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> >>Sent: Wednesday, June 22, 2011 11:57:54 AM
> >>Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> >>
> >>On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >>> Hi,
> >>>
> >>>>>worker call. We can add a I/O command to ask qxl to push the
> >>>>>release queue head to the release ring.
> >>>>
> >>>>So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >>>>of using the val parameter?
> >>>
> >>>I'd like to (a) avoid updating the libspice-server API if possible
> >>>and (b) have one I/O command for each logical step. So going into
> >>>S3 could look like this:
> >>>
> >>> (0) stop putting new commands into the rings
> >>> (1) QXL_IO_NOTIFY_CMD
> >>> (2) QXL_IO_NOTIFY_CURSOR
> >>> qxl calls notify(), to make the worker thread empty the command
> >>> rings before it processes the next dispatcher request.
> >>> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> >>> qxl calls stop()+start(), spice-server renders all surfaces,
> >>> thereby flushing state to device memory.
> >>> (4) QXL_IO_DESTROY_ALL_SURFACES
> >>> zap surfaces
> >>> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >>> push release queue head into the release ring, so the guest
> >>> will see it and can release everything.
> >>>
> >>>(1)+(2)+(4) exist already.
> >>>(3)+(5) can be done without libspice-server changes.
> >>>
> >>>Looks workable?
> >>
> >>yeah. Yonit?
> >>
> >>>
> >>>cheers,
> >>> Gerd
> >>>
> >>>
> >>
>
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-27 8:16 ` Alon Levy
@ 2011-06-27 8:25 ` yhalperi
2011-06-27 9:20 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: yhalperi @ 2011-06-27 8:25 UTC (permalink / raw)
To: qemu-devel, Gerd Hoffmann, Alon Levy
On 06/27/2011 11:16 AM, Alon Levy wrote:
> On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
>> On 06/26/2011 08:47 PM, Alon Levy wrote:
>>> On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
>>>> Sorry for the late response, wasn't available.
>>>> I'm afraid that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
>>>>
>>>
>>> I actually can't figure out what wakeup does (that's what both NOTIFY and
>>> NOTIFY_CURSOR do, see hw/qxl.c).
>> It turns on an event the worker is waiting for on epoll_wait.
> Ah, ok. So it will only read up to the pipe size like you said.
>
>>
>> What we did in prepare_to_sleep before was
>>> call flush_all_qxl_commands, which reads the command and cursor rings until
>>> they are empty (calling flush_cursor_commands and flush_display_commands), waiting
>>> whenever the pipe is too large. (avoiding this wait still needs to be done, but
>>> after ensuring suspend is correct).
>>>
>>> More to the point this flush is done from handle_dev_destroy_surfaces, but
>>> this is not good enough since this also destroys the surfaces, before we have
>>> a chance to actually render the surfaces.
>>>
>>> Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
>>>
>> We can do it as long as it doesn't affect migration - does STOP
>> happens before or after savevm? If it happens after - we can't touch
>> the command ring, i.e., we can't call flush. And even if it happens
>> before - do we really want to call flush during migration and
>> presumably slow it down?
> But if we don't then the client connected before migration doesn't get some of
> the messages it was supposed to get.
I think it will receive them after migration, since the command ring
was stored.
>
> stop is called before hw/qxl.c:qxl_pre_save, from
> ui/spice-display.c:qemu_spice_vm_change_state_handler
>
>
>>
>> Cheers,
>> Yonit.
>>
>>>>
>>>> ----- Original Message -----
>>>> From: "Alon Levy"<alevy@redhat.com>
>>>> To: "Gerd Hoffmann"<kraxel@redhat.com>
>>>> Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
>>>> Sent: Wednesday, June 22, 2011 11:57:54 AM
>>>> Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
>>>>
>>>> On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
>>>>> Hi,
>>>>>
>>>>>>> worker call. We can add a I/O command to ask qxl to push the
>>>>>>> release queue head to the release ring.
>>>>>>
>>>>>> So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
>>>>>> of using the val parameter?
>>>>>
>>>>> I'd like to (a) avoid updating the libspice-server API if possible
>>>>> and (b) have one I/O command for each logical step. So going into
>>>>> S3 could look like this:
>>>>>
>>>>> (0) stop putting new commands into the rings
>>>>> (1) QXL_IO_NOTIFY_CMD
>>>>> (2) QXL_IO_NOTIFY_CURSOR
>>>>> qxl calls notify(), to make the worker thread empty the command
>>>>> rings before it processes the next dispatcher request.
>>>>> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
>>>>> qxl calls stop()+start(), spice-server renders all surfaces,
>>>>> thereby flushing state to device memory.
>>>>> (4) QXL_IO_DESTROY_ALL_SURFACES
>>>>> zap surfaces
>>>>> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
>>>>> push release queue head into the release ring, so the guest
>>>>> will see it and can release everything.
>>>>>
>>>>> (1)+(2)+(4) exist already.
>>>>> (3)+(5) can be done without libspice-server changes.
>>>>>
>>>>> Looks workable?
>>>>
>>>> yeah. Yonit?
>>>>
>>>>>
>>>>> cheers,
>>>>> Gerd
>>>>>
>>>>>
>>>>
>>
>>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-27 8:25 ` yhalperi
@ 2011-06-27 9:20 ` Alon Levy
2011-06-29 9:01 ` Gerd Hoffmann
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-27 9:20 UTC (permalink / raw)
To: yhalperi; +Cc: qemu-devel, Gerd Hoffmann
On Mon, Jun 27, 2011 at 11:25:47AM +0300, yhalperi wrote:
> On 06/27/2011 11:16 AM, Alon Levy wrote:
> >On Mon, Jun 27, 2011 at 09:28:45AM +0300, yhalperi wrote:
> >>On 06/26/2011 08:47 PM, Alon Levy wrote:
> >>>On Sun, Jun 26, 2011 at 12:59:06PM -0400, Yonit Halperin wrote:
> >>>>Sorry for the late response, wasn't available.
> >>>>I'm afraid that (1) and (2) will indeed wakeup the worker, but will not assure emptying the command ring, as it depends on the client pipe size.
> >>>>
> >>>
> >>>I actually can't figure out what wakeup does (that's what both NOTIFY and
> >>>NOTIFY_CURSOR do, see hw/qxl.c).
> >>It turns on an event the worker is waiting for on epoll_wait.
> >Ah, ok. So it will only read up to the pipe size like you said.
> >
> >>
> >>What we did in prepare_to_sleep before was
> >>>call flush_all_qxl_commands, which reads the command and cursor rings until
> >>>they are empty (calling flush_cursor_commands and flush_display_commands), waiting
> >>>whenever the pipe is too large. (avoiding this wait still needs to be done, but
> >>>after ensuring suspend is correct).
> >>>
> >>>More to the point this flush is done from handle_dev_destroy_surfaces, but
> >>>this is not good enough since this also destroys the surfaces, before we have
> >>>a chance to actually render the surfaces.
> >>>
> >>>Perhaps RED_WORKER_MESSAGE_STOP should do a flush_all_qxl_commands?
> >>>
> >>We can do it as long as it doesn't affect migration - does STOP
> >>happens before or after savevm? If it happens after - we can't touch
> >>the command ring, i.e., we can't call flush. And even if it happens
> >>before - do we really want to call flush during migration and
> >>presumably slow it down?
> >But if we don't then the client connected before migration doesn't get some of
> >the messages it was supposed to get.
> I think it will receive them after migration, since the command ring
> was stored.
Our confusion here is because you think there is still seamless migration.
Unfortunately it doesn't work right now; unless you plan to fix it, the only
form of migration right now is switch-host, and there those commands will be
lost - the new connection will receive images for each surface. If you treat
the client as seamless you are completely right.
> >
> >stop is called before hw/qxl.c:qxl_pre_save, from
> >ui/spice-display.c:qemu_spice_vm_change_state_handler
> >
> >
> >>
> >>Cheers,
> >>Yonit.
> >>
> >>>>
> >>>>----- Original Message -----
> >>>>From: "Alon Levy"<alevy@redhat.com>
> >>>>To: "Gerd Hoffmann"<kraxel@redhat.com>
> >>>>Cc: qemu-devel@nongnu.org, yhalperi@redhat.com
> >>>>Sent: Wednesday, June 22, 2011 11:57:54 AM
> >>>>Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
> >>>>
> >>>>On Wed, Jun 22, 2011 at 11:13:19AM +0200, Gerd Hoffmann wrote:
> >>>>> Hi,
> >>>>>
> >>>>>>>worker call. We can add a I/O command to ask qxl to push the
> >>>>>>>release queue head to the release ring.
> >>>>>>
> >>>>>>So you suggest to replace QXL_IO_UPDATE_MEM with what, two io commands instead
> >>>>>>of using the val parameter?
> >>>>>
> >>>>>I'd like to (a) avoid updating the libspice-server API if possible
> >>>>>and (b) have one I/O command for each logical step. So going into
> >>>>>S3 could look like this:
> >>>>>
> >>>>> (0) stop putting new commands into the rings
> >>>>> (1) QXL_IO_NOTIFY_CMD
> >>>>> (2) QXL_IO_NOTIFY_CURSOR
> >>>>> qxl calls notify(), to make the worker thread empty the command
> >>>>> rings before it processes the next dispatcher request.
> >>>>> (3) QXl_IO_FLUSH_SURFACES (to be implemented)
> >>>>> qxl calls stop()+start(), spice-server renders all surfaces,
> >>>>> thereby flushing state to device memory.
> >>>>> (4) QXL_IO_DESTROY_ALL_SURFACES
> >>>>> zap surfaces
> >>>>> (5) QXL_IO_FLUSH_RELEASE (to be implemented)
> >>>>> push release queue head into the release ring, so the guest
> >>>>> will see it and can release everything.
> >>>>>
> >>>>>(1)+(2)+(4) exist already.
> >>>>>(3)+(5) can be done without libspice-server changes.
> >>>>>
> >>>>>Looks workable?
> >>>>
> >>>>yeah. Yonit?
> >>>>
> >>>>>
> >>>>>cheers,
> >>>>> Gerd
> >>>>>
> >>>>>
> >>>>
> >>
> >>
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-27 9:20 ` Alon Levy
@ 2011-06-29 9:01 ` Gerd Hoffmann
2011-06-29 9:21 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-29 9:01 UTC (permalink / raw)
To: yhalperi, qemu-devel
Hi,
>> I think it will receive them after migration, since the command ring
>> was stored.
> Our confusion here is because you think there is still seemless migration. Unfortunately
> it doesn't work right now, unless you plan to fix it the only form of migration right
> now is switch-host, and for that those commands will be lost, the new connection will receive
> images for each surface. If you treat the client as seemless you are completely right.
The spice server needs this too so it can render the surfaces correctly
before sending the surface images to the client (or send the old
surfaces and the commands on top of that).
That is one difference between qemu migration and S3 state: for qemu
migration it is no problem to have unprocessed commands in the rings;
they will simply be processed once the spice server state is restored.
When the guest driver restores its state after coming back from S3 it
needs the command rings to do so; that's why they must be flushed before
entering S3 ...
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-29 9:01 ` Gerd Hoffmann
@ 2011-06-29 9:21 ` Alon Levy
2011-06-29 10:25 ` Gerd Hoffmann
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-29 9:21 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >>I think it will receive them after migration, since the command ring
> >>was stored.
> >Our confusion here is because you think there is still seemless migration. Unfortunately
> >it doesn't work right now, unless you plan to fix it the only form of migration right
> >now is switch-host, and for that those commands will be lost, the new connection will receive
> >images for each surface. If you treat the client as seemless you are completely right.
>
> The spice server needs this too so it can render the surfaces
> correctly before sending the surface images to the client (or send
> the old surfaces and the commands on top of that).
>
> That is one difference between qemu migration and S3 state: For qemu
> migration it is no problem to have unprocessed commands in the
> rings, they will simply be processed once the spice server state is
> restored. When the guest driver restores the state when it comes
> back from S3 it needs the command rings to do so, thats why they
> must be flushed before entering S3 ...
You mean it needs the command rings to be empty before, since they are lost
during the reset, right?
>
> cheers,
> Gerd
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-29 9:21 ` Alon Levy
@ 2011-06-29 10:25 ` Gerd Hoffmann
2011-06-29 11:38 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-29 10:25 UTC (permalink / raw)
To: yhalperi, qemu-devel
On 06/29/11 11:21, Alon Levy wrote:
> On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>> Hi,
>>
>>>> I think it will receive them after migration, since the command ring
>>>> was stored.
>>> Our confusion here is because you think there is still seemless migration. Unfortunately
>>> it doesn't work right now, unless you plan to fix it the only form of migration right
>>> now is switch-host, and for that those commands will be lost, the new connection will receive
>>> images for each surface. If you treat the client as seemless you are completely right.
>>
>> The spice server needs this too so it can render the surfaces
>> correctly before sending the surface images to the client (or send
>> the old surfaces and the commands on top of that).
>>
>> That is one difference between qemu migration and S3 state: For qemu
>> migration it is no problem to have unprocessed commands in the
>> rings, they will simply be processed once the spice server state is
>> restored. When the guest driver restores the state when it comes
>> back from S3 it needs the command rings to do so, thats why they
>> must be flushed before entering S3 ...
>
> You mean it needs the command rings to be empty before, since they are lost
> during the reset, right?
One more reason. I wasn't aware there is a reset anyway; I was thinking
more about the command ordering. Without a reset spice-server would first
process the old commands (which may reference non-existing surfaces),
then the new commands which recreate all state, which is simply the
wrong order. With a reset the old commands just get lost, which causes
rendering bugs.
Is it an option to have the driver just remove the commands from the
ring (and resubmit after resume)? I suspect it isn't, as there is no
race-free way to do that, right?
cheers,
Gerd
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-29 10:25 ` Gerd Hoffmann
@ 2011-06-29 11:38 ` Alon Levy
2011-06-30 10:26 ` Yonit Halperin
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-29 11:38 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: yhalperi, qemu-devel
On Wed, Jun 29, 2011 at 12:25:00PM +0200, Gerd Hoffmann wrote:
> On 06/29/11 11:21, Alon Levy wrote:
> >On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
> >> Hi,
> >>
> >>>>I think it will receive them after migration, since the command ring
> >>>>was stored.
> >>>Our confusion here is because you think there is still seemless migration. Unfortunately
> >>>it doesn't work right now, unless you plan to fix it the only form of migration right
> >>>now is switch-host, and for that those commands will be lost, the new connection will receive
> >>>images for each surface. If you treat the client as seemless you are completely right.
> >>
> >>The spice server needs this too so it can render the surfaces
> >>correctly before sending the surface images to the client (or send
> >>the old surfaces and the commands on top of that).
> >>
> >>That is one difference between qemu migration and S3 state: For qemu
> >>migration it is no problem to have unprocessed commands in the
> >>rings, they will simply be processed once the spice server state is
> >>restored. When the guest driver restores the state when it comes
> >>back from S3 it needs the command rings to do so, thats why they
> >>must be flushed before entering S3 ...
> >
> >You mean it needs the command rings to be empty before, since they are lost
> >during the reset, right?
>
> One more reason. Wasn't aware there is a reset anyway, was thinking
Hah. The reset is the whole mess; otherwise S3 would have been trivial.
Disabling the reset was actually the first thing we did, but of course
it doesn't solve S4.
> more about the command ordering. Without reset spice-server would
> first process the old commands (which may reference non-existing
> surfaces), then the new commands which re-recreate all state, which
> is simply the wrong order. With reset the old commands just get
> lost which causes rendering bugs.
>
> Is it an option to have the driver just remove the commands from the
> ring (and resubmit after resume)? I suspect it isn't as there is no
> race-free way to do that, right?
Right - the whole ring assumes that the same side removes. Of course we
can add an IO for that (two, actually - FREEZE and UNFREEZE), but I think
this is the wrong approach. Instead, we should render all the commands and
drop the wait for the client. Right now, if we flush, we do actually wait
for the client, but I plan to remove that later. (We do this right now for
update_area as well, and that's much higher frequency.)
>
> cheers,
> Gerd
>
>
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-29 11:38 ` Alon Levy
@ 2011-06-30 10:26 ` Yonit Halperin
2011-06-30 10:46 ` Gerd Hoffmann
0 siblings, 1 reply; 31+ messages in thread
From: Yonit Halperin @ 2011-06-30 10:26 UTC (permalink / raw)
To: Gerd Hoffmann, qemu-devel, Alon Levy
On 06/29/2011 02:38 PM, Alon Levy wrote:
> On Wed, Jun 29, 2011 at 12:25:00PM +0200, Gerd Hoffmann wrote:
>> On 06/29/11 11:21, Alon Levy wrote:
>>> On Wed, Jun 29, 2011 at 11:01:11AM +0200, Gerd Hoffmann wrote:
>>>> Hi,
>>>>
>>>>>> I think it will receive them after migration, since the command ring
>>>>>> was stored.
>>>>> Our confusion here is because you think there is still seemless migration. Unfortunately
>>>>> it doesn't work right now, unless you plan to fix it the only form of migration right
>>>>> now is switch-host, and for that those commands will be lost, the new connection will receive
>>>>> images for each surface. If you treat the client as seemless you are completely right.
>>>>
>>>> The spice server needs this too so it can render the surfaces
>>>> correctly before sending the surface images to the client (or send
>>>> the old surfaces and the commands on top of that).
>>>>
>>>> That is one difference between qemu migration and S3 state: For qemu
>>>> migration it is no problem to have unprocessed commands in the
>>>> rings, they will simply be processed once the spice server state is
>>>> restored. When the guest driver restores the state when it comes
>>>> back from S3 it needs the command rings to do so, thats why they
>>>> must be flushed before entering S3 ...
>>>
>>> You mean it needs the command rings to be empty before, since they are lost
>>> during the reset, right?
>>
>> One more reason. Wasn't aware there is a reset anyway, was thinking
> hah. The reset is the whole mess.. otherwise S3 would have been trivial,
> and actually disabling the reset was the first thing we did, but of course
> it doesn't solve S4 in that case.
>
>> more about the command ordering. Without reset spice-server would
>> first process the old commands (which may reference non-existing
>> surfaces), then the new commands which re-recreate all state, which
>> is simply the wrong order. With reset the old commands just get
>> lost which causes rendering bugs.
>>
>> Is it an option to have the driver just remove the commands from the
>> ring (and resubmit after resume)? I suspect it isn't as there is no
>> race-free way to do that, right?
> Right - the whole ring assumes that the same side removes. of course we
> can add an IO for that (two - FREEZE and UNFREEZE). But I think this is
> the wrong approach. Instead, rendering all the commands, and dropping the
> wait for the client. Right now if we flush we do actually wait for the client,
> but I plan to remove this later. (we do this right now for update_area as
> well and that's much higher frequency).
>
>>
>> cheers,
>> Gerd
>>
>>
To conclude: we still need to flush the command ring before stop, and we
don't want to change migration. So we still need to change the spice
server API. Gerd?
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-30 10:26 ` Yonit Halperin
@ 2011-06-30 10:46 ` Gerd Hoffmann
2011-06-30 11:41 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-30 10:46 UTC (permalink / raw)
To: Yonit Halperin; +Cc: Alon Levy, qemu-devel
Hi,
>> Right - the whole ring design assumes that only one side removes entries.
>> Of course we can add an IO for that (two, actually - FREEZE and UNFREEZE),
>> but I think this is the wrong approach. Instead, render all the commands
>> and drop the wait for the client. Right now if we flush we do actually
>> wait for the client, but I plan to remove that later. (We do this right
>> now for update_area as well, and that's a much higher-frequency path.)
> To conclude: we still need to flush the command ring before stopping, and
> we don't want to change migration, so we still need to change the
> spice-server API. Gerd?
Yes, it looks like there is no way around flushing the command rings.
If we have to change the spice-server API anyway, then I think we should
support async I/O at the libspice-server API level: drop the qxl async
thread, have a way to submit async requests to the worker, and let
libspice call us back on completion.
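Rough shape of what I mean, purely for illustration (the names and
signatures below are invented, nothing of this exists yet):

/* qemu -> worker: queue the request and return immediately */
void spice_qxl_update_area_async(QXLInstance *instance,
                                 uint32_t surface_id,
                                 QXLRect *area, uint64_t cookie);

/* worker -> qemu: invoked once the request has been processed,
 * passing back the cookie from submission */
void (*async_complete)(QXLInstance *instance, uint64_t cookie);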
comments?
cheers,
Gerd
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-30 10:46 ` Gerd Hoffmann
@ 2011-06-30 11:41 ` Alon Levy
2011-06-30 12:12 ` Gerd Hoffmann
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-30 11:41 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Yonit Halperin, qemu-devel
On Thu, Jun 30, 2011 at 12:46:59PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >>Right - the whole ring design assumes that only one side removes entries.
> >>Of course we can add an IO for that (two, actually - FREEZE and UNFREEZE),
> >>but I think this is the wrong approach. Instead, render all the commands
> >>and drop the wait for the client. Right now if we flush we do actually
> >>wait for the client, but I plan to remove that later. (We do this right
> >>now for update_area as well, and that's a much higher-frequency path.)
>
> >To conclude: we still need to flush the command ring before stopping, and
> >we don't want to change migration, so we still need to change the
> >spice-server API. Gerd?
>
> Yes, it looks like there is no way around flushing the command rings.
>
> If we have to change the spice-server API anyway, then I think we
> should support async I/O at the libspice-server API level: drop the
> qxl async thread, have a way to submit async requests to the worker,
> and let libspice call us back on completion.
>
> comments?
My thoughts exactly. Any reason to support the old non-async messages if we
do this? I.e., do we add _ASYNC versions or just change the meaning of the
existing ones? As long as we bump the major version of the server (it will
be 0.10) I don't think that's problematic, right?
The only difference with this approach is that we will have to do the reads
from the io thread of qemu, which means a single thread reading for multiple
qxl devices; but since it will just be doing very minimal work (setting the
irq) I don't think it should be a problem.
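For what it's worth, the io-thread side really would be tiny; a sketch
(async_pipe is an invented field, and qxl_set_irq stands in for whatever
helper raises the device interrupt):

static void qxl_async_complete_read(void *opaque)
{
    PCIQXLDevice *qxl = opaque;
    uint8_t id;

    /* one byte written by the worker per completed async request */
    if (read(qxl->async_pipe[0], &id, sizeof(id)) != sizeof(id)) {
        return;
    }
    qxl_set_irq(qxl);
}

registered once per device with something like
qemu_set_fd_handler(qxl->async_pipe[0], qxl_async_complete_read, NULL, qxl).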
>
> cheers,
> Gerd
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-30 11:41 ` Alon Levy
@ 2011-06-30 12:12 ` Gerd Hoffmann
2011-06-30 12:50 ` Alon Levy
0 siblings, 1 reply; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-30 12:12 UTC (permalink / raw)
To: Yonit Halperin, qemu-devel
Hi,
> My thoughts exactly. Any reason to support the old non-async messages if we
> do this?
Yes. Backward compatibility.
> The only difference with this approach is that we will have to do the reads from the
> io thread of qemu,
Hmm? Which reads?
I'd add a completion callback to QXLInterface.
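I.e. roughly this (a sketch, not a final interface):

struct QXLInterface {
    SpiceBaseInterface base;
    /* ... the existing callbacks (attache_worker, get_command, ...) ... */
    /* new: called by the worker once an async request has finished */
    void (*async_complete)(QXLInstance *qin, uint64_t cookie);
};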
cheers,
Gerd
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-30 12:12 ` Gerd Hoffmann
@ 2011-06-30 12:50 ` Alon Levy
2011-06-30 13:17 ` Gerd Hoffmann
0 siblings, 1 reply; 31+ messages in thread
From: Alon Levy @ 2011-06-30 12:50 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Yonit Halperin, qemu-devel
On Thu, Jun 30, 2011 at 02:12:52PM +0200, Gerd Hoffmann wrote:
> Hi,
>
> >My thoughts exactly. Any reason to support the old non-async messages if we
> >do this?
>
> Yes. Backward compatibility.
So can we at least deprecate them, to be dropped later? I don't like that the
code just keeps getting bigger and bigger.
>
> >The only difference with this approach is that we will have to do the reads from the
> >io thread of qemu,
>
> Hmm? Which reads?
I was thinking of a different solution - one in which the same "READY"
messages are written, but read from a different place. That would not
actually have required any changes to the spice-server API. But if you say
you prefer to add a completion callback, that's cool.
Just to answer, I was thinking of this flow for the async commands:
vcpu thread -> pipe_to_red_worker : update_area_async
red_worker thread -> pipe_to_io_thread : update_area_async complete
but that wouldn't have worked, would it? Unless we made sure to prevent
attempts to issue a sync or async command while an async one is still in
progress.
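(That guard itself would be trivial - something like this at the top of
the io handler, where async_pending is an invented field:

    if (qxl->async_pending) {
        /* a second request while an async one is in flight - reject */
        dprint(qxl, 1, "%s: io while async pending\n", __FUNCTION__);
        return;
    }
    qxl->async_pending = true;

and the completion path would clear the flag again.)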
>
> I'd add a completion callback to QXLInterface.
>
> cheers,
> Gerd
>
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
2011-06-30 12:50 ` Alon Levy
@ 2011-06-30 13:17 ` Gerd Hoffmann
0 siblings, 0 replies; 31+ messages in thread
From: Gerd Hoffmann @ 2011-06-30 13:17 UTC (permalink / raw)
To: Yonit Halperin, qemu-devel
Hi,
>> Yes. Backward compatibility.
>
> So can we at least deprecate them, to be dropped later? I don't like that the
> code just keeps getting bigger and bigger.
Deprecating them is fine.
> I was thinking of a different solution - one in which the same "READY"
> messages are written, but read from a different place. That would not
> actually have required any changes to the spice-server API. But if you say
> you prefer to add a completion callback, that's cool.
>
> Just to answer, I was thinking of this flow for the async commands:
>
> vcpu thread -> pipe_to_red_worker : update_area_async
> red_worker thread -> pipe_to_io_thread : update_area_async complete
>
> but that wouldn't have worked, would it? Unless we made sure to prevent
> attempts to issue a sync or async command while an async one is still in
> progress.
The pipe is a libspice-server-internal thing and it should stay that
way. libspice-server should be able to use something completely
different for dispatcher <-> worker communication (say, a linked job
list with mutex locking and condition-variable wakeup) and everything
should continue to work.
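For illustration only, the alternative could be as simple as this (not a
proposal for actual code, just the shape of it):

#include <pthread.h>
#include <stddef.h>

typedef struct Job {
    struct Job *next;
    void (*run)(void *opaque);
    void *opaque;
} Job;

typedef struct JobQueue {
    Job *head, *tail;
    pthread_mutex_t lock;
    pthread_cond_t wake;
} JobQueue;

/* dispatcher side: append a job and wake the worker */
static void job_queue_push(JobQueue *q, Job *job)
{
    pthread_mutex_lock(&q->lock);
    job->next = NULL;
    if (q->tail) {
        q->tail->next = job;
    } else {
        q->head = job;
    }
    q->tail = job;
    pthread_cond_signal(&q->wake);
    pthread_mutex_unlock(&q->lock);
}

/* worker side: sleep until a job shows up, then take it */
static Job *job_queue_pop(JobQueue *q)
{
    Job *job;

    pthread_mutex_lock(&q->lock);
    while (q->head == NULL) {
        pthread_cond_wait(&q->wake, &q->lock);
    }
    job = q->head;
    q->head = job->next;
    if (q->head == NULL) {
        q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return job;
}

As long as qemu only goes through the public entry points and the
completion callback, either mechanism works.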
cheers,
Gerd
^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread (newest message: 2011-06-30 13:17 UTC)
Thread overview: 31+ messages
2011-06-20 11:11 [Qemu-devel] [PATCH 0/2] Suspend (S3) support Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 1/2] qxl: interface_get_command: fix reported mode Alon Levy
2011-06-20 11:11 ` [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support Alon Levy
2011-06-20 12:13 ` Gerd Hoffmann
2011-06-20 12:57 ` Alon Levy
2011-06-20 12:58 ` Alon Levy
2011-06-20 14:07 ` Gerd Hoffmann
2011-06-20 15:11 ` Alon Levy
2011-06-20 15:48 ` Alon Levy
2011-06-20 15:50 ` Gerd Hoffmann
2011-06-20 16:32 ` Alon Levy
2011-06-20 20:53 ` Alon Levy
2011-06-21 6:29 ` Yonit Halperin
2011-06-22 9:13 ` Gerd Hoffmann
2011-06-22 9:57 ` Alon Levy
2011-06-26 16:59 ` Yonit Halperin
2011-06-26 17:47 ` Alon Levy
2011-06-27 6:28 ` yhalperi
2011-06-27 8:16 ` Alon Levy
2011-06-27 8:25 ` yhalperi
2011-06-27 9:20 ` Alon Levy
2011-06-29 9:01 ` Gerd Hoffmann
2011-06-29 9:21 ` Alon Levy
2011-06-29 10:25 ` Gerd Hoffmann
2011-06-29 11:38 ` Alon Levy
2011-06-30 10:26 ` Yonit Halperin
2011-06-30 10:46 ` Gerd Hoffmann
2011-06-30 11:41 ` Alon Levy
2011-06-30 12:12 ` Gerd Hoffmann
2011-06-30 12:50 ` Alon Levy
2011-06-30 13:17 ` Gerd Hoffmann