From: Juan Quintela
Reply-To: quintela@redhat.com
To: Gerd Hoffmann
Cc: Søren Sandmann, qemu-devel@nongnu.org, Søren Sandmann Pedersen
Subject: Re: [Qemu-devel] [PATCH 1/2] spice: Change NUM_SURFACES to 4096
Date: Tue, 28 Aug 2012 14:20:19 +0200
Message-ID: <87y5kzfbss.fsf@elfo.mitica>
In-Reply-To: <503C65F1.5040704@redhat.com> (Gerd Hoffmann's message of "Tue, 28 Aug 2012 08:32:17 +0200")
References: <1346084480-7994-1-git-send-email-sandmann@cs.au.dk> <503C65F1.5040704@redhat.com>

Gerd Hoffmann wrote:

> On 08/27/12 18:21, Søren Sandmann wrote:
>> From: Søren Sandmann Pedersen
>>
>> It's not uncommon for an X workload to have more than 1024 pixmaps
>> live at the same time. Ideally, there wouldn't be any fixed limit like
>> this, but since we have one, increase it to 4096.
>> ---
>>  ui/spice-display.h | 2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/ui/spice-display.h b/ui/spice-display.h
>> index 12e50b6..e8d01a5 100644
>> --- a/ui/spice-display.h
>> +++ b/ui/spice-display.h
>> @@ -32,7 +32,7 @@
>>  #define MEMSLOT_GROUP_GUEST 1
>>  #define NUM_MEMSLOTS_GROUPS 2
>>
>> -#define NUM_SURFACES 1024
>> +#define NUM_SURFACES 4096
>
> Breaks live migration.

Live migration is always in the middle :-(

> Second the vmstate must be adapted to handle this.  The number of
> surfaces is in the migration data stream, so this should be doable
> without too much trouble.  Right now it looks like this:
>
> [ ... ]
>     VMSTATE_INT32_EQUAL(num_surfaces, PCIQXLDevice),
>     VMSTATE_ARRAY(guest_surfaces.cmds, PCIQXLDevice, NUM_SURFACES, 0,
>                   vmstate_info_uint64, uint64_t),
> [ ... ]
>
> Juan?  Suggestions how to handle this?  There seems to be no direct way
> to make the array size depend on num_surfaces.  I think we could have
> two VMSTATE_ARRAY_TEST() entries, one for 1024 and one for 4096.

I would leave things as they are, and just add a new section for the
rest of the surfaces.

If we are always going to have _more_ than 1024 surfaces, the easiest
solution I can think of is:

* Move guest_surfaces.cmds to a pointer (then its size becomes runtime
  configurable):

      /* notice removal of _EQUAL */
      VMSTATE_INT32(num_surfaces, PCIQXLDevice),
      /* move from ARRAY to VARRAY, with the size taken from num_surfaces */
      VMSTATE_VARRAY_INT32(guest_surfaces.cmds, PCIQXLDevice, num_surfaces, 0,
                           vmstate_info_uint64, uint64_t),

And thinking about it, no subsection is needed.  If num_surfaces is
1024, things can migrate to an old qemu.  If it is bigger, migration
breaks with good reason (num_surfaces has changed): the
VMSTATE_INT32_EQUAL() will break migration on the incoming side if we
are migrating with a number of surfaces != 1024.

What do you think?

Later, Juan.
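[Editor's note: for concreteness, a sketch of how Juan's VARRAY suggestion could fit into the qxl device's migration description. The field names num_surfaces and guest_surfaces.cmds and the macros are as quoted in the thread; the descriptor name, the surrounding fields, and the placement are assumptions, not the actual hw/qxl.c code.]

```c
/* Hypothetical fragment of a qxl VMStateDescription, assuming
 * guest_surfaces.cmds has first been converted from a fixed-size
 * array into a heap-allocated pointer so its length can vary. */
static const VMStateDescription vmstate_qxl_sketch = {
    .name = "qxl",
    .fields = (VMStateField[]) {
        /* _EQUAL dropped: num_surfaces is transmitted in the stream
         * instead of being checked against a compile-time constant. */
        VMSTATE_INT32(num_surfaces, PCIQXLDevice),
        /* VARRAY: the element count is read from num_surfaces at load
         * time, so 1024- and 4096-surface builds share one entry. */
        VMSTATE_VARRAY_INT32(guest_surfaces.cmds, PCIQXLDevice,
                             num_surfaces, 0,
                             vmstate_info_uint64, uint64_t),
        VMSTATE_END_OF_LIST()
    }
};
```

As Juan notes, the on-the-wire layout for num_surfaces == 1024 stays byte-identical to the old VMSTATE_ARRAY form, so migration to an old qemu still works in that case, while an old receiver's VMSTATE_INT32_EQUAL() rejects any other value.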