Subject: Re: [Qemu-devel] [RFC v2 3/6] memory: support unmapping of MemoryRegion mapped into HVA parent
From: Paolo Bonzini
To: Michael S. Tsirkin, Igor Mammedov
Cc: qemu-devel@nongnu.org
Date: Mon, 08 Jun 2015 19:06:39 +0200
Message-ID: <5575CB9F.4060807@redhat.com>
In-Reply-To: <20150608182206-mutt-send-email-mst@redhat.com>

On 08/06/2015 18:25, Michael S. Tsirkin wrote:
> > The issue is that we have to re-reserve the HVA region first so no
> > other allocation would claim the gap, and the only way I found was
> > just to call mmap() on it, which as a side effect invalidates the
> > MemoryRegion's backing RAM.
>
> Well, the only point where we need to mmap is where we'd unmap
> normally; if that's not safe, then unmapping wouldn't be safe either?

I think it is possible to map slot 2 at address 0x12340000 right after
unmapping slot 1 at the same address, but before an RCU grace period
has expired. If this is possible, then you can have two DIMMs trying
to mmap themselves at the same address.

You probably need to stop using object_child_foreach in
hw/mem/pc-dimm.c and instead build your own list. An object can keep a
"weak" reference to itself in the list, and remove itself from the list
at instance_finalize time.

Paolo
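
For concreteness, an untested sketch of the re-reservation Igor
describes above: re-mmap the hole with an anonymous PROT_NONE mapping
so no other allocation can claim it. This is plain POSIX mmap(), not
the helper from the series, and reserve_hva is a made-up name:

#include <sys/mman.h>
#include <stddef.h>
#include <stdio.h>

/* Reserve an address range with an inaccessible anonymous mapping.
 * MAP_FIXED atomically replaces whatever was mapped there before,
 * which is exactly the side effect that invalidates the
 * MemoryRegion's backing RAM. */
static void *reserve_hva(void *addr, size_t size)
{
    void *p = mmap(addr, size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE,
                   -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return p;
}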
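
And an untested sketch of the list I mean for pc-dimm. Only the QTAILQ
macros, PC_DIMM() and the QOM instance_init/instance_finalize hooks are
real; dimm_list, the "next" field and the function names are made up:

#include "qemu/queue.h"
#include "hw/mem/pc-dimm.h"

/* Each DIMM keeps a weak (non-refcounted) link to itself on a global
 * list and drops it in instance_finalize, so iterating the list never
 * touches an object that is already being destroyed. */
static QTAILQ_HEAD(, PCDIMMDevice) dimm_list =
    QTAILQ_HEAD_INITIALIZER(dimm_list);

/* Field added to PCDIMMDevice:
 *     QTAILQ_ENTRY(PCDIMMDevice) next;
 */

static void pc_dimm_instance_init(Object *obj)
{
    /* Weak reference: no object_ref(); the object stays owned by its
     * parent in the composition tree. */
    QTAILQ_INSERT_TAIL(&dimm_list, PC_DIMM(obj), next);
}

static void pc_dimm_instance_finalize(Object *obj)
{
    /* Remove the weak reference before the object is freed. */
    QTAILQ_REMOVE(&dimm_list, PC_DIMM(obj), next);
}

The two callbacks would be wired up through .instance_init and
.instance_finalize in the pc-dimm TypeInfo.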