qemu-devel.nongnu.org archive mirror
From: Juan Quintela <quintela@redhat.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: "Søren Sandmann" <sandmann@cs.au.dk>,
	qemu-devel@nongnu.org, "Søren Sandmann Pedersen" <ssp@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 1/2] spice: Change NUM_SURFACES to 4096
Date: Tue, 28 Aug 2012 14:20:19 +0200	[thread overview]
Message-ID: <87y5kzfbss.fsf@elfo.mitica> (raw)
In-Reply-To: <503C65F1.5040704@redhat.com> (Gerd Hoffmann's message of "Tue, 28 Aug 2012 08:32:17 +0200")

Gerd Hoffmann <kraxel@redhat.com> wrote:
> On 08/27/12 18:21, Søren Sandmann wrote:
>> From: Søren Sandmann Pedersen <ssp@redhat.com>
>> 
>> It's not uncommon for an X workload to have more than 1024 pixmaps
>> live at the same time. Ideally, there wouldn't be any fixed limit like
>> this, but since we have one, increase it to 4096.
>> ---
>>  ui/spice-display.h |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>> 
>> diff --git a/ui/spice-display.h b/ui/spice-display.h
>> index 12e50b6..e8d01a5 100644
>> --- a/ui/spice-display.h
>> +++ b/ui/spice-display.h
>> @@ -32,7 +32,7 @@
>>  #define MEMSLOT_GROUP_GUEST 1
>>  #define NUM_MEMSLOTS_GROUPS 2
>>  
>> -#define NUM_SURFACES 1024
>> +#define NUM_SURFACES 4096
>
> Breaks live migration.

Live migration always gets in the middle :-(

> Second, the vmstate must be adapted to handle this.  The number of
> surfaces is in the migration data stream, so this should be doable
> without too much trouble.  Right now it looks like this:
>
>         [ ... ]
>         VMSTATE_INT32_EQUAL(num_surfaces, PCIQXLDevice),
>         VMSTATE_ARRAY(guest_surfaces.cmds, PCIQXLDevice, NUM_SURFACES, 0,
>                       vmstate_info_uint64, uint64_t),
>         [ ... ]
>
> Juan?  Suggestions how to handle this?  There seems to be no direct way
> to make the array size depend on num_surfaces.  I think we could have
> two VMSTATE_ARRAY_TEST() entries, one for 1024 and one for 4096.

I would leave things as they are, and just add a new section for the
rest of the surfaces.  If we are always going to have _more_ than 1024
surfaces, the easiest solution I can think of is:

       * move guest_surfaces.cmds to a pointer (so its size becomes
         runtime configurable)

        /* notice removal of _EQUAL */
        VMSTATE_INT32(num_surfaces, PCIQXLDevice),
        /* move from ARRAY to VARRAY with size taken from num_surfaces */
        VMSTATE_VARRAY_INT32(guest_surfaces.cmds, PCIQXLDevice, num_surfaces, 0,
                       vmstate_info_uint64, uint64_t),

And thinking about it, no subsection is needed.  If num_surfaces is
1024, things can migrate to old qemu.  If it is bigger, migration would
break with good reason (num_surfaces has changed).

The VMSTATE_INT32_EQUAL() will break migration (on the incoming side)
if we are migrating with a number of surfaces != 1024.

What do you think?

Later, Juan.


Thread overview: 4+ messages
2012-08-27 16:21 [Qemu-devel] [PATCH 1/2] spice: Change NUM_SURFACES to 4096 Søren Sandmann
2012-08-28  6:32 ` Gerd Hoffmann
2012-08-28 12:20   ` Juan Quintela [this message]
2012-08-28 12:37     ` Gerd Hoffmann
