Date: Wed, 25 Jul 2012 15:20:48 -0500
From: Michael Roth
Subject: Re: [Qemu-devel] [PATCH 02/27] split MRU ram list
To: Juan Quintela
Cc: Paolo Bonzini, ryanh@us.ibm.com, qemu-devel@nongnu.org

On Tue, Jul 24, 2012 at 08:36:27PM +0200, Juan Quintela wrote:
> From: Paolo Bonzini
>
> Outside the execution threads the normal, non-MRU-ized order of
> the RAM blocks should always be enough. So manage two separate
> lists, which will have separate locking rules.

One thing I'm noticing is that, prior to this series, we're traversing the
blocks in MRU order for migration. This seems counter-intuitive: those are
the blocks most likely to get re-dirtied and re-sent, so it makes sense to
hold off on sending them until last, to reduce the window in which the
running guest can invalidate the target's copy of them.

This isn't as bad as it could be, since we at least don't restart the loop
on every iteration, but it might still make sense to come up with a way to
keep RAMList.blocks roughly in sync with RAMList.blocks_mru, and then
traverse that in reverse order in ram_save_iterate().

The fact that this series switches migration away from the MRU ordering
might be obscuring performance issues as well, which is probably worth
keeping in mind.
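To make the suggestion above a bit more concrete, here's a rough standalone
sketch. It's purely illustrative: it doesn't use QEMU's queue macros or the
real RAMBlock/ram_save_iterate code, and all the names in it are made up.
It just models an MRU list with head = most recently used and shows that
walking it from the tail gives a least-recently-used-first send order, so
the hottest blocks go out last:

/*
 * Toy model only -- standalone C, not QEMU code.  A doubly-linked MRU
 * list with head = most recently used and tail = least recently used.
 * mru_touch() mimics what an MRU-ized lookup does to blocks_mru; the
 * "send" loop walks from the tail so the hottest blocks (most likely to
 * be re-dirtied) are transmitted last.
 */
#include <stdio.h>

typedef struct Block {
    const char *idstr;
    struct Block *mru_next, *mru_prev;
} Block;

static Block *mru_head, *mru_tail;

static void mru_insert(Block *b)
{
    /* Push onto the head of the MRU list. */
    b->mru_prev = NULL;
    b->mru_next = mru_head;
    if (mru_head) {
        mru_head->mru_prev = b;
    } else {
        mru_tail = b;
    }
    mru_head = b;
}

static void mru_touch(Block *b)
{
    if (b == mru_head) {
        return;
    }
    /* Unlink from the current position... */
    b->mru_prev->mru_next = b->mru_next;
    if (b->mru_next) {
        b->mru_next->mru_prev = b->mru_prev;
    } else {
        mru_tail = b->mru_prev;
    }
    /* ...and relink at the head. */
    b->mru_prev = NULL;
    b->mru_next = mru_head;
    mru_head->mru_prev = b;
    mru_head = b;
}

int main(void)
{
    Block rom = { .idstr = "pc.rom" };
    Block vram = { .idstr = "vga.vram" };
    Block ram = { .idstr = "pc.ram" };

    mru_insert(&rom);
    mru_insert(&vram);
    mru_insert(&ram);

    /* Guest activity: pc.ram is touched last, so it is the hottest. */
    mru_touch(&vram);
    mru_touch(&ram);

    printf("lookup (MRU) order:      ");
    for (Block *b = mru_head; b; b = b->mru_next) {
        printf("%s ", b->idstr);
    }
    printf("\nsend (LRU-first) order:  ");
    for (Block *b = mru_tail; b; b = b->mru_prev) {
        printf("%s ", b->idstr);
    }
    printf("\n");
    return 0;
}

Obviously this hand-waves the locking: presumably the migration path would
walk RAMList.blocks (kept roughly in sync with the MRU list) rather than
touch blocks_mru directly, which is the point of the suggestion above.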
> Signed-off-by: Paolo Bonzini
> Signed-off-by: Juan Quintela
> ---
>  cpu-all.h |  4 +++-
>  exec.c    | 16 +++++++++++-----
>  2 files changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/cpu-all.h b/cpu-all.h
> index 82ba1d7..ca3bb24 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -476,8 +476,9 @@ typedef struct RAMBlock {
>      ram_addr_t offset;
>      ram_addr_t length;
>      uint32_t flags;
> -    char idstr[256];
> +    QLIST_ENTRY(RAMBlock) next_mru;
>      QLIST_ENTRY(RAMBlock) next;
> +    char idstr[256];
>  #if defined(__linux__) && !defined(TARGET_S390X)
>      int fd;
>  #endif
> @@ -485,6 +486,7 @@ typedef struct RAMBlock {
> 
>  typedef struct RAMList {
>      uint8_t *phys_dirty;
> +    QLIST_HEAD(, RAMBlock) blocks_mru;
>      QLIST_HEAD(, RAMBlock) blocks;
>      uint64_t dirty_pages;
>  } RAMList;
> diff --git a/exec.c b/exec.c
> index feb4795..afc472f 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -112,7 +112,10 @@ static uint8_t *code_gen_ptr;
>  int phys_ram_fd;
>  static int in_migration;
> 
> -RAMList ram_list = { .blocks = QLIST_HEAD_INITIALIZER(ram_list.blocks) };
> +RAMList ram_list = {
> +    .blocks = QLIST_HEAD_INITIALIZER(ram_list.blocks),
> +    .blocks_mru = QLIST_HEAD_INITIALIZER(ram_list.blocks_mru)
> +};
> 
>  static MemoryRegion *system_memory;
>  static MemoryRegion *system_io;
> @@ -2550,6 +2553,7 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>      new_block->length = size;
> 
>      QLIST_INSERT_HEAD(&ram_list.blocks, new_block, next);
> +    QLIST_INSERT_HEAD(&ram_list.blocks_mru, new_block, next_mru);
> 
>      ram_list.phys_dirty = g_realloc(ram_list.phys_dirty,
>                                      last_ram_offset() >> TARGET_PAGE_BITS);
> @@ -2573,6 +2577,7 @@ void qemu_ram_free_from_ptr(ram_addr_t addr)
>      QLIST_FOREACH(block, &ram_list.blocks, next) {
>          if (addr == block->offset) {
>              QLIST_REMOVE(block, next);
> +            QLIST_REMOVE(block, next_mru);
>              g_free(block);
>              return;
>          }
> @@ -2586,6 +2591,7 @@ void qemu_ram_free(ram_addr_t addr)
>      QLIST_FOREACH(block, &ram_list.blocks, next) {
>          if (addr == block->offset) {
>              QLIST_REMOVE(block, next);
> +            QLIST_REMOVE(block, next_mru);
>              if (block->flags & RAM_PREALLOC_MASK) {
>                  ;
>              } else if (mem_path) {
> @@ -2690,12 +2696,12 @@ void *qemu_get_ram_ptr(ram_addr_t addr)
>  {
>      RAMBlock *block;
> 
> -    QLIST_FOREACH(block, &ram_list.blocks, next) {
> +    QLIST_FOREACH(block, &ram_list.blocks_mru, next_mru) {
>          if (addr - block->offset < block->length) {
>              /* Move this entry to to start of the list.  */
>              if (block != QLIST_FIRST(&ram_list.blocks)) {
> -                QLIST_REMOVE(block, next);
> -                QLIST_INSERT_HEAD(&ram_list.blocks, block, next);
> +                QLIST_REMOVE(block, next_mru);
> +                QLIST_INSERT_HEAD(&ram_list.blocks_mru, block, next_mru);
>              }
>              if (xen_enabled()) {
>                  /* We need to check if the requested address is in the RAM
> @@ -2790,7 +2796,7 @@ int qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
>          return 0;
>      }
> 
> -    QLIST_FOREACH(block, &ram_list.blocks, next) {
> +    QLIST_FOREACH(block, &ram_list.blocks_mru, next_mru) {
>          /* This case append when the block is not mapped. */
>          if (block->host == NULL) {
>              continue;
> -- 
> 1.7.10.4
> 
> 