Date: Tue, 19 May 2015 19:55:04 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Michael R. Hines" <mrhines@linux.vnet.ibm.com>
Cc: amit.shah@redhat.com, quintela@redhat.com, arei.gonglei@huawei.com,
 qemu-devel@nongnu.org, mrhines@us.ibm.com
Subject: Re: [Qemu-devel] [PATCH 08/10] Rework ram block hash
Message-ID: <20150519185503.GI2127@work-vm>
References: <1429545445-28216-1-git-send-email-dgilbert@redhat.com>
 <1429545445-28216-9-git-send-email-dgilbert@redhat.com>
 <555B85B4.6070903@linux.vnet.ibm.com>
In-Reply-To: <555B85B4.6070903@linux.vnet.ibm.com>

* Michael R. Hines (mrhines@linux.vnet.ibm.com) wrote:
> On 04/20/2015 10:57 AM, Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> >
> > RDMA uses a hash from block offset->RAMBlock; this isn't needed
> > on the destination, and now that the destination sorts the ramblock
> > list, it is harder to maintain.
>
> Destination sorts the ramblock list? Is the patchset out of order?
> I didn't see that yet... Why is it sorted?

It's the next patch in the list - please see that one.
Since I use an index into the list, that made the list the easiest
thing to index on.

> I would like to keep the ramblock list directly addressable by hash
> on both sides, because, as I mentioned earlier, we want as much
> flexibility in registering RAMBlock memory as possible by being
> able to add or delete arbitrary blocks in the list at any time during
> a migration.
>
> I will try to send out the patchset that allows anyone to register
> memory for transfer as soon as I can.

Hmm OK, I think I can rework that to regenerate the hash; it's a little
difficult without knowing how you're intending to use it.
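Roughly what I have in mind is to throw the map away and rebuild it from
the block array whenever the set of blocks changes; an untested sketch
(qemu_rdma_rebuild_blockmap is just a made-up name, the rest uses the
structures this patch already touches):

/* Hypothetical helper, not part of this patch: rebuild the
 * offset -> RDMALocalBlock map from the current block array, so the
 * hash stays valid if blocks are added or deleted mid-migration.
 */
static void qemu_rdma_rebuild_blockmap(RDMAContext *rdma)
{
    RDMALocalBlocks *local = &rdma->local_ram_blocks;
    int i;

    if (rdma->blockmap) {
        g_hash_table_destroy(rdma->blockmap);
    }
    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);

    for (i = 0; i < local->nb_blocks; i++) {
        g_hash_table_insert(rdma->blockmap,
                            (void *)(uintptr_t)local->block[i].offset,
                            &local->block[i]);
    }
}

Callers would invoke that after adding or deleting a block instead of
patching individual entries; it's an O(n) rebuild, but the number of
RAMBlocks is small.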
Dave

> >
> > Split the hash so that it's only generated on the source.
> >
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> >  migration/rdma.c | 32 ++++++++++++++++++++------------
> >  1 file changed, 20 insertions(+), 12 deletions(-)
> >
> > diff --git a/migration/rdma.c b/migration/rdma.c
> > index fe3b76e..185817e 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -533,23 +533,22 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
> >                            ram_addr_t block_offset, uint64_t length)
> >  {
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> > -    RDMALocalBlock *block = g_hash_table_lookup(rdma->blockmap,
> > -        (void *)(uintptr_t)block_offset);
> > +    RDMALocalBlock *block;
> >      RDMALocalBlock *old = local->block;
> >
> > -    assert(block == NULL);
> > -
> >      local->block = g_malloc0(sizeof(RDMALocalBlock) * (local->nb_blocks + 1));
> >
> >      if (local->nb_blocks) {
> >          int x;
> >
> > -        for (x = 0; x < local->nb_blocks; x++) {
> > -            g_hash_table_remove(rdma->blockmap,
> > -                                (void *)(uintptr_t)old[x].offset);
> > -            g_hash_table_insert(rdma->blockmap,
> > -                                (void *)(uintptr_t)old[x].offset,
> > -                                &local->block[x]);
> > +        if (rdma->blockmap) {
> > +            for (x = 0; x < local->nb_blocks; x++) {
> > +                g_hash_table_remove(rdma->blockmap,
> > +                                    (void *)(uintptr_t)old[x].offset);
> > +                g_hash_table_insert(rdma->blockmap,
> > +                                    (void *)(uintptr_t)old[x].offset,
> > +                                    &local->block[x]);
> > +            }
> >          }
> >          memcpy(local->block, old, sizeof(RDMALocalBlock) * local->nb_blocks);
> >          g_free(old);
> > @@ -571,7 +570,9 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
> >
> >      block->is_ram_block = local->init ? false : true;
> >
> > -    g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> > +    if (rdma->blockmap) {
> > +        g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> > +    }
> >
> >      trace_rdma_add_block(block_name, local->nb_blocks,
> >                           (uintptr_t) block->local_host_addr,
> > @@ -607,7 +608,6 @@ static int qemu_rdma_init_ram_blocks(RDMAContext *rdma)
> >      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >
> >      assert(rdma->blockmap == NULL);
> > -    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
> >      memset(local, 0, sizeof *local);
> >      qemu_ram_foreach_block(qemu_rdma_init_one_block, rdma);
> >      trace_qemu_rdma_init_ram_blocks(local->nb_blocks);
> > @@ -2248,6 +2248,14 @@ static int qemu_rdma_source_init(RDMAContext *rdma, Error **errp, bool pin_all)
> >          goto err_rdma_source_init;
> >      }
> >
> > +    /* Build the hash that maps from offset to RAMBlock */
> > +    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
> > +    for (idx = 0; idx < rdma->local_ram_blocks.nb_blocks; idx++) {
> > +        g_hash_table_insert(rdma->blockmap,
> > +            (void *)(uintptr_t)rdma->local_ram_blocks.block[idx].offset,
> > +            &rdma->local_ram_blocks.block[idx]);
> > +    }
> > +
> >      for (idx = 0; idx < RDMA_WRID_MAX; idx++) {
> >          ret = qemu_rdma_reg_control(rdma, idx);
> >          if (ret) {

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK