From: Igor Mammedov
Date: Mon, 8 Jun 2015 17:19:17 +0200
Message-Id: <1433776757-61958-7-git-send-email-imammedo@redhat.com>
In-Reply-To: <1433776757-61958-1-git-send-email-imammedo@redhat.com>
References: <1433776757-61958-1-git-send-email-imammedo@redhat.com>
Subject: [Qemu-devel] [RFC v2 6/6] pc: fix QEMU crash when more than ~50 memory DIMMs are hotplugged
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, mst@redhat.com

QEMU asserts in vhost due to hitting the vhost backend limit on the
number of supported memory regions.

Instead of increasing the limit in the backends, describe all
hotplugged memory to vhost as one contiguous range with a linear
1:1 HVA->GPA mapping in the backend.

This avoids both increasing the VHOST_MEMORY_MAX_NREGIONS limit in
the kernel and refactoring the current region lookup algorithm into
a faster/more scalable data structure. The same applies to vhost-user,
which has an even lower limit.
Signed-off-by: Igor Mammedov
---
 hw/virtio/vhost.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 01f1e04..49a7b2e 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -419,6 +419,7 @@ static void vhost_set_memory(MemoryListener *listener,
     bool log_dirty = memory_region_is_logging(section->mr);
     int s = offsetof(struct vhost_memory, regions) +
         (dev->mem->nregions + 1) * sizeof dev->mem->regions[0];
+    MemoryRegionSection rsvd_hva;
     void *ram;
 
     dev->mem = g_realloc(dev->mem, s);
@@ -427,17 +428,25 @@ static void vhost_set_memory(MemoryListener *listener,
         add = false;
     }
 
+    rsvd_hva = memory_region_find_hva_range(section->mr);
+    if (rsvd_hva.mr) {
+        start_addr = rsvd_hva.offset_within_address_space;
+        size = int128_get64(rsvd_hva.size);
+        ram = memory_region_get_ram_ptr(rsvd_hva.mr);
+    } else {
+        ram = memory_region_get_ram_ptr(section->mr) + section->offset_within_region;
+    }
+
     assert(size);
     /* Optimize no-change case. At least cirrus_vga does this a lot at this
        time. */
-    ram = memory_region_get_ram_ptr(section->mr) + section->offset_within_region;
     if (add) {
-        if (!vhost_dev_cmp_memory(dev, start_addr, size, (uintptr_t)ram)) {
+        if (!rsvd_hva.mr && !vhost_dev_cmp_memory(dev, start_addr, size, (uintptr_t)ram)) {
            /* Region exists with same address. Nothing to do. */
            return;
         }
     } else {
-        if (!vhost_dev_find_reg(dev, start_addr, size)) {
+        if (!rsvd_hva.mr && !vhost_dev_find_reg(dev, start_addr, size)) {
            /* Removing region that we don't access. Nothing to do. */
            return;
         }
-- 
1.8.3.1