Date: Mon, 19 Aug 2013 19:37:44 +0200
From: Laszlo Ersek
To: "Michael S. Tsirkin"
Cc: Peter Maydell, pbonzini@redhat.com, kraxel@redhat.com, qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [Qemu-devel] [PATCH v4 1/2] arch_init: align MR size to target page size
Message-ID: <521257E8.6050800@redhat.com>
In-Reply-To: <1376922370-5681-2-git-send-email-mst@redhat.com>

On 08/19/13 16:26, Michael S. Tsirkin wrote:
> Migration code assumes that each MR is a multiple of TARGET_PAGE_SIZE:
> the MR size is divided by TARGET_PAGE_SIZE, so if it isn't, migration
> never completes.
> But this isn't really required for regions set up with
> memory_region_init_ram, since that calls qemu_ram_alloc,
> which aligns the size up using TARGET_PAGE_ALIGN.
>
> Align the MR size up to a full target page size; this way migration
> completes even if we create a RAM MR whose size is not a multiple of
> the target page size.
>
> Signed-off-by: Michael S. Tsirkin
> ---
>  arch_init.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 68a7ab7..ac8eb59 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -342,7 +342,8 @@ ram_addr_t migration_bitmap_find_and_reset_dirty(MemoryRegion *mr,
>  {
>      unsigned long base = mr->ram_addr >> TARGET_PAGE_BITS;
>      unsigned long nr = base + (start >> TARGET_PAGE_BITS);
> -    unsigned long size = base + (int128_get64(mr->size) >> TARGET_PAGE_BITS);
> +    uint64_t mr_size = TARGET_PAGE_ALIGN(memory_region_size(mr));
> +    unsigned long size = base + (mr_size >> TARGET_PAGE_BITS);
>
>      unsigned long next;
>

Short version:

(1) The patch (and the update to 2/2) seems correct to me.

(2) But is this patch complete?

Long version:

(1) The "only" danger in migration_bitmap_find_and_reset_dirty(), AFAICS,
is over-subscripting "migration_bitmap" with find_next_bit(). However,
ram_save_setup() seems to initialize "migration_bitmap" for "ram_pages"
bits, and "ram_pages" comes from last_ram_offset(), which in turn finds
the highest offset reached by any RAMBlock. The RAMBlock backing the
fw_cfg file already has a rounded-up size, so I think "migration_bitmap"
will have a bit allocated for the last (possibly not fully populated)
page of any fw_cfg RAMBlock. So this patch should be correct.

(2) Regarding completeness, are we sure that nothing else depends on
mr->size being an integer multiple of TARGET_PAGE_SIZE?

I think v3 is perhaps less intrusive (as in, it doesn't raise question (2)).

((3) memory_region_size() is slightly different from
int128_get64(mr->size); it has a special case for int128_2_64() -- and I
don't understand that. int128_2_64() represents 2 raised to the power of
64; the special case seems to be the replacement for UINT64_MAX.)
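For reference, the special case in question looks roughly like the sketch
below -- reconstructed from memory rather than copied from the tree, and
relying on QEMU's internal MemoryRegion and Int128 helpers:

/* Rough sketch of memory_region_size() in memory.c, from memory --
 * not quoted verbatim from the tree. */
uint64_t memory_region_size(MemoryRegion *mr)
{
    /* mr->size is an Int128, so it can describe a region covering the
     * whole 2^64-byte address space; that value does not fit in a
     * uint64_t, so it is reported as UINT64_MAX instead. */
    if (int128_eq(mr->size, int128_2_64())) {
        return UINT64_MAX;
    }
    return int128_get64(mr->size);
}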
Thanks,
Laszlo