From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <555B85B4.6070903@linux.vnet.ibm.com>
Date: Tue, 19 May 2015 13:49:24 -0500
From: "Michael R. Hines"
References: <1429545445-28216-1-git-send-email-dgilbert@redhat.com> <1429545445-28216-9-git-send-email-dgilbert@redhat.com>
In-Reply-To: <1429545445-28216-9-git-send-email-dgilbert@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 08/10] Rework ram block hash
To: "Dr.
David Alan Gilbert (git)", qemu-devel@nongnu.org
Cc: amit.shah@redhat.com, arei.gonglei@huawei.com, mrhines@us.ibm.com, quintela@redhat.com

On 04/20/2015 10:57 AM, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert"
>
> RDMA uses a hash from block offset->RAM Block; this isn't needed
> on the destination, and now that the destination sorts the ramblock
> list, is harder to maintain.

Destination sorts the ramblock list? Is the patchset out-of-order? I
didn't see that yet..... Why is it sorted?

I would like to keep the ramblock list directly addressable by hash on
both sides because, as I mentioned earlier, we want as much flexibility
in registering RAMBlock memory as possible by being able to add or
delete arbitrary blocks in the list at any time during a migration.

I will try to get the patchset that allows anyone to register memory
for transfer out as soon as I can.

> Split the hash so that it's only generated on the source.
>
> Signed-off-by: Dr. David Alan Gilbert
> ---
>  migration/rdma.c | 32 ++++++++++++++++++++------------
>  1 file changed, 20 insertions(+), 12 deletions(-)
>
> diff --git a/migration/rdma.c b/migration/rdma.c
> index fe3b76e..185817e 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -533,23 +533,22 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
>                            ram_addr_t block_offset, uint64_t length)
>  {
>      RDMALocalBlocks *local = &rdma->local_ram_blocks;
> -    RDMALocalBlock *block = g_hash_table_lookup(rdma->blockmap,
> -        (void *)(uintptr_t)block_offset);
> +    RDMALocalBlock *block;
>      RDMALocalBlock *old = local->block;
>
> -    assert(block == NULL);
> -
>      local->block = g_malloc0(sizeof(RDMALocalBlock) * (local->nb_blocks + 1));
>
>      if (local->nb_blocks) {
>          int x;
>
> -        for (x = 0; x < local->nb_blocks; x++) {
> -            g_hash_table_remove(rdma->blockmap,
> -                                (void *)(uintptr_t)old[x].offset);
> -            g_hash_table_insert(rdma->blockmap,
> -                                (void *)(uintptr_t)old[x].offset,
> -                                &local->block[x]);
> +        if
(rdma->blockmap) {
> +            for (x = 0; x < local->nb_blocks; x++) {
> +                g_hash_table_remove(rdma->blockmap,
> +                                    (void *)(uintptr_t)old[x].offset);
> +                g_hash_table_insert(rdma->blockmap,
> +                                    (void *)(uintptr_t)old[x].offset,
> +                                    &local->block[x]);
> +            }
>          }
>          memcpy(local->block, old, sizeof(RDMALocalBlock) * local->nb_blocks);
>          g_free(old);
> @@ -571,7 +570,9 @@ static int rdma_add_block(RDMAContext *rdma, const char *block_name,
>
>      block->is_ram_block = local->init ? false : true;
>
> -    g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> +    if (rdma->blockmap) {
> +        g_hash_table_insert(rdma->blockmap, (void *) block_offset, block);
> +    }
>
>      trace_rdma_add_block(block_name, local->nb_blocks,
>                           (uintptr_t) block->local_host_addr,
> @@ -607,7 +608,6 @@ static int qemu_rdma_init_ram_blocks(RDMAContext *rdma)
>      RDMALocalBlocks *local = &rdma->local_ram_blocks;
>
>      assert(rdma->blockmap == NULL);
> -    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
>      memset(local, 0, sizeof *local);
>      qemu_ram_foreach_block(qemu_rdma_init_one_block, rdma);
>      trace_qemu_rdma_init_ram_blocks(local->nb_blocks);
> @@ -2248,6 +2248,14 @@ static int qemu_rdma_source_init(RDMAContext *rdma, Error **errp, bool pin_all)
>          goto err_rdma_source_init;
>      }
>
> +    /* Build the hash that maps from offset to RAMBlock */
> +    rdma->blockmap = g_hash_table_new(g_direct_hash, g_direct_equal);
> +    for (idx = 0; idx < rdma->local_ram_blocks.nb_blocks; idx++) {
> +        g_hash_table_insert(rdma->blockmap,
> +            (void *)(uintptr_t)rdma->local_ram_blocks.block[idx].offset,
> +            &rdma->local_ram_blocks.block[idx]);
> +    }
> +
>      for (idx = 0; idx < RDMA_WRID_MAX; idx++) {
>          ret = qemu_rdma_reg_control(rdma, idx);
>          if (ret) {