Date: Thu, 13 Nov 2014 14:10:39 +1100
From: David Gibson
Subject: Re: [Qemu-devel] [PATCH v4 43/47] Host page!=target page: Cleanup bitmaps
To: "Dr. David Alan Gilbert (git)"
Cc: aarcange@redhat.com, yamahata@private.email.ne.jp, lilei@linux.vnet.ibm.com, quintela@redhat.com, cristian.klein@cs.umu.se, qemu-devel@nongnu.org, amit.shah@redhat.com, yanghy@cn.fujitsu.com

On Fri, Oct 03, 2014 at 06:47:49PM +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert"
> 
> Prior to the start of postcopy, ensure that everything that will
> be transferred later is a whole host-page in size.
> 
> This is accomplished by discarding partially transferred host pages
> and marking any that are partially dirty as fully dirty.
> 
> Signed-off-by: Dr. David Alan Gilbert
> ---
>  arch_init.c | 112 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 111 insertions(+), 1 deletion(-)
> 
> diff --git a/arch_init.c b/arch_init.c
> index 1fe4fab..aac250c 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -1024,7 +1024,6 @@ static uint32_t get_32bits_map(unsigned long *map, int64_t start)
>   * A helper to put 32 bits into a bit map; trivial for HOST_LONG_BITS=32
>   * messier for 64; the bitmaps are actually long's that are 32 or 64bit
>   */
> -__attribute__ (( unused )) /* Until later in patch series */
>  static void put_32bits_map(unsigned long *map, int64_t start,
>                             uint32_t v)
>  {
> @@ -1153,15 +1152,126 @@ static int pc_each_ram_discard(MigrationState *ms)
>  }
>  
>  /*
> + * Utility for the outgoing postcopy code.
> + *
> + * Discard any partially sent host-page size chunks, mark any partially
> + * dirty host-page size chunks as all dirty.
> + *
> + * Returns: 0 on success
> + */
> +static int postcopy_chunk_hostpages(MigrationState *ms)
> +{
> +    struct RAMBlock *block;
> +    unsigned int host_bits = sysconf(_SC_PAGESIZE) / TARGET_PAGE_SIZE;
> +    uint32_t host_mask;
> +
> +    /* Should be a power of 2 */
> +    assert(host_bits && !(host_bits & (host_bits - 1)));
> +    /*
> +     * If the host_bits isn't a divisor of 32 (the minimum long size)
> +     * then the code gets a lot more complex; disallow for now
> +     * (I'm not aware of a system where it's true anyway)
> +     */
> +    assert((32 % host_bits) == 0);

This assert makes the first one redundant.
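(As an aside, a tiny standalone sketch, not part of the patch, of why that is: every host_bits value accepted by the divisibility assert is already a power of two, so the power-of-two assert can never be the one that fires.)

    #include <assert.h>
    #include <stdio.h>

    /*
     * Hypothetical check: the divisors of 32 are 1, 2, 4, 8, 16 and 32,
     * all powers of two, so (32 % host_bits) == 0 implies
     * host_bits && !(host_bits & (host_bits - 1)).
     */
    int main(void)
    {
        unsigned int host_bits;

        for (host_bits = 1; host_bits <= 32; host_bits++) {
            if ((32 % host_bits) == 0) {
                assert(host_bits && !(host_bits & (host_bits - 1)));
                printf("host_bits=%u divides 32 and is a power of two\n",
                       host_bits);
            }
        }
        return 0;
    }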
> +
> +    /* A mask, starting at bit 0, containing host_bits continuous set bits */
> +    host_mask = (1u << host_bits) - 1;
> +
> +
> +    if (host_bits == 1) {
> +        /* Easy case - TPS==HPS - nothing to be done */
> +        return 0;
> +    }
> +
> +    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> +        unsigned long first32, last32, cur32;
> +        unsigned long first = block->offset >> TARGET_PAGE_BITS;
> +        unsigned long last = (block->offset + (block->length-1))
> +                                >> TARGET_PAGE_BITS;
> +        PostcopyDiscardState *pds = postcopy_discard_send_init(ms,
> +                                                               first & 31,
> +                                                               block->idstr);
> +
> +        first32 = first / 32;
> +        last32 = last / 32;
> +        for (cur32 = first32; cur32 <= last32; cur32++) {
> +            unsigned int current_hp;
> +            /* Deal with start/end not on alignment */
> +            uint32_t mask = make_32bit_mask(first, last, cur32);
> +
> +            /* a chunk of sent pages */
> +            uint32_t sdata = get_32bits_map(ms->sentmap, cur32 * 32);
> +            /* a chunk of dirty pages */
> +            uint32_t ddata = get_32bits_map(migration_bitmap, cur32 * 32);
> +            uint32_t discard = 0;
> +            uint32_t redirty = 0;
> +            sdata &= mask;
> +            ddata &= mask;
> +
> +            for (current_hp = 0; current_hp < 32; current_hp += host_bits) {
> +                uint32_t host_sent = (sdata >> current_hp) & host_mask;
> +                uint32_t host_dirty = (ddata >> current_hp) & host_mask;
> +
> +                if (host_sent && (host_sent != host_mask)) {
> +                    /* Partially sent host page */
> +                    redirty |= host_mask << current_hp;
> +                    discard |= host_mask << current_hp;
> +
> +                } else if (host_dirty && (host_dirty != host_mask)) {
> +                    /* Partially dirty host page */
> +                    redirty |= host_mask << current_hp;
> +                }
> +            }
> +            if (discard) {
> +                /* Tell the destination to discard these pages */
> +                postcopy_discard_send_chunk(ms, pds, (cur32-first32) * 32,
> +                                            discard);
> +                /* And clear them in the sent data structure */
> +                sdata = get_32bits_map(ms->sentmap, cur32 * 32);
> +                put_32bits_map(ms->sentmap, cur32 * 32, sdata & ~discard);
> +            }
> +            if (redirty) {
> +                /*
> +                 * Reread original dirty bits and OR in ones we clear; we
> +                 * must reread since we might be at the start or end of
> +                 * a RAMBlock that the original 'mask' discarded some
> +                 * bits from
> +                 */
> +                ddata = get_32bits_map(migration_bitmap, cur32 * 32);
> +                put_32bits_map(migration_bitmap, cur32 * 32,
> +                               ddata | redirty);
> +                /* Inc the count of dirty pages */
> +                migration_dirty_pages += ctpop32(redirty - (ddata & redirty));
> +            }
> +        }
> +
> +        postcopy_discard_send_finish(ms, pds);
> +    }
> +    /* Easiest way to make sure we don't resume in the middle of a host-page */
> +    last_seen_block = NULL;
> +    last_sent_block = NULL;
> +
> +    return 0;
> +}
> +
> +/*
>   * Transmit the set of pages to be discarded after precopy to the target
>   * these are pages that have been sent previously but have been dirtied
>   * Hopefully this is pretty sparse
>   */
>  int ram_postcopy_send_discard_bitmap(MigrationState *ms)
>  {
> +    int ret;
> +
>      /* This should be our last sync, the src is now paused */
>      migration_bitmap_sync();
>  
> +    /* Deal with TPS != HPS */
> +    ret = postcopy_chunk_hostpages(ms);
> +    if (ret) {
> +        return ret;
> +    }

This really seems like a bogus thing to be doing on the outgoing
migration side.  Doesn't the host page size constraint come from the
destination (due to the need to atomically instate pages)?
Source host page size == destination host page size doesn't seem like
it should be an inherent constraint, and it's not clear why you can't
do this rounding out to host page sized chunks on the receive end.

>      /*
>       * Update the sentmap to be sentmap&=dirty
>       */

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson