From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([140.186.70.92]:39927)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1RJN7w-0005aD-Bw for qemu-devel@nongnu.org;
	Thu, 27 Oct 2011 06:24:01 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1RJN7v-0001aP-3g for qemu-devel@nongnu.org;
	Thu, 27 Oct 2011 06:24:00 -0400
Received: from mx1.redhat.com ([209.132.183.28]:6128)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1RJN7u-0001aE-Oj for qemu-devel@nongnu.org;
	Thu, 27 Oct 2011 06:23:59 -0400
Message-ID: <4EA9313A.9060104@redhat.com>
Date: Thu, 27 Oct 2011 12:23:54 +0200
From: Avi Kivity
MIME-Version: 1.0
References: <4EA81094.5050400@virtualcomputer.com>
In-Reply-To: <4EA81094.5050400@virtualcomputer.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] New Memory API Question
To: John Baboval
Cc: qemu-devel@nongnu.org

On 10/26/2011 03:52 PM, John Baboval wrote:
> Sorry for coming late to the party on this... I only read qemu-devel
> through a filter, so I missed the discussions on the new memory API.
> I have a question about how it works, and how it is supposed to work,
> in certain scenarios.
>
> It's a question of flow. I'm following the code path through the
> creation of a new memory subregion. If I'm reading this properly, a
> MemoryRegion - for example, one of those used by VGA - goes through
> the following flow:
>
> memory_region_init_ram() - (mr->destructor is set to
>                             memory_region_destructor_ram)
> memory_region_add_subregion(system_memory, ...) ->
>   memory_region_update_topology() ->
>     address_space_update_topology() ->
>       address_space_update_topology_part() ->
>         as_memory_range_add() - through the ops vector ->
>           memory_region_prepare_ram_addr()
>
> At this point it seems that the destructor is overwritten with
> memory_region_destructor_iomem(), and the region loses track of the
> proper way to ever free its memory. Is this correct, or am I missing
> something?

It's correct; this is a bug.

> Or does it not matter, because nobody ever calls
> memory_region_destroy() for system memory regions?

It can still happen via hot-unplug of an ivshmem device, or via memory
hot-unplug (when that is eventually implemented).

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
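
For anyone tracing this without the tree at hand, here is a minimal,
self-contained C sketch of the clobbering pattern the thread describes.
This is not QEMU's actual code: MemoryRegion is reduced to the single
field at issue, and the destructors and helpers are stand-in stubs that
only mimic the assignments discussed above.

#include <stdio.h>

/* Stand-in for QEMU's MemoryRegion, reduced to the one field at issue. */
typedef struct MemoryRegion MemoryRegion;
struct MemoryRegion {
    void (*destructor)(MemoryRegion *mr);
};

/* Stub: in QEMU, this would free the guest RAM backing the region. */
static void memory_region_destructor_ram(MemoryRegion *mr)
{
    printf("ram destructor: backing RAM freed\n");
}

/* Stub: in QEMU, this tears down only the I/O registration. */
static void memory_region_destructor_iomem(MemoryRegion *mr)
{
    printf("iomem destructor: backing RAM leaked\n");
}

/* Analogue of memory_region_init_ram(): installs the RAM destructor. */
static void memory_region_init_ram(MemoryRegion *mr)
{
    mr->destructor = memory_region_destructor_ram;
}

/* Analogue of memory_region_prepare_ram_addr(): unconditionally
 * installs the iomem destructor, clobbering the one set at init. */
static void memory_region_prepare_ram_addr(MemoryRegion *mr)
{
    mr->destructor = memory_region_destructor_iomem;
}

static void memory_region_destroy(MemoryRegion *mr)
{
    mr->destructor(mr);
}

int main(void)
{
    MemoryRegion vga;

    memory_region_init_ram(&vga);
    memory_region_prepare_ram_addr(&vga); /* runs during topology update */
    memory_region_destroy(&vga);          /* invokes the iomem destructor,
                                             so the RAM is never freed */
    return 0;
}

In this reduced model the obvious repair is for
memory_region_prepare_ram_addr() to leave mr->destructor alone once a
destructor has been installed; whether that matches the eventual QEMU
fix is left open here.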