Date: Thu, 18 Jul 2013 08:07:13 -0400
From: "Michael R. Hines"
Subject: Re: [Qemu-devel] [PATCH v3 resend 4/8] rdma: core logic
Message-ID: <51E7DA71.2040400@linux.vnet.ibm.com>
In-Reply-To: <1374132600.2000.12.camel@localhost.localdomain>
References: <1373993306-25764-1-git-send-email-mrhines@linux.vnet.ibm.com> <1373993306-25764-5-git-send-email-mrhines@linux.vnet.ibm.com> <1374132600.2000.12.camel@localhost.localdomain>
To: Marcel Apfelbaum
Cc: aliguori@us.ibm.com, quintela@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com, pbonzini@redhat.com, chegu_vinod@hp.com, knoel@redhat.com

On 07/18/2013 03:30 AM, Marcel Apfelbaum wrote:
> On Tue, 2013-07-16 at 12:48 -0400, mrhines@linux.vnet.ibm.com wrote:
>> From: "Michael R. Hines"
>>
>> Code that does need to be visible is kept
>> well contained inside this file and this is the only
>> new additional file to the entire patch.
>>
>> This file includes the entire protocol and interfaces
>> required to perform RDMA migration.
>>
>> Also, the configure and Makefile modifications to link
>> this file are included.
>>
>> Full documentation is in docs/rdma.txt
>>
> This patch is too big (in my opinion).
> I would split it into at least 3 patches:
> 1. Generic RDMA code (this part can be reused by everyone who will need RDMA in the future)
> 2. RDMA transfer protocol (separating this will give us the possibility of optimizing it without touching the rest of the code)
> 3. Migration related code

Don't let the "v3" mislead you =). The patch actually *used* to look just
like what you described (3 different ones), but after more than a dozen
reviews since January, the reviewers asked me to join all the code into a
single file.

>
>> + */
>> +#define RDMA_WRID_TYPE_SHIFT 0UL
>> +#define RDMA_WRID_BLOCK_SHIFT 16UL
>> +#define RDMA_WRID_CHUNK_SHIFT 30UL
> If I understand correctly, each RDMA write is exactly 1MB.
> I think that it is crucial from the performance point of view to make this configurable.
> Sometimes the time to register (and unregister) the MR on both machines may be the same as the transmission of 1MB.
> Bottom line, this cannot be hard-coded because it depends on the connection speed.

Yes, that's correct, but you probably missed reading the whole file
(it's big, I know). Search for "mergable" in the patch, and you'll see
the function.

Chunks are only 1MB *contiguous*. That means that if ram_save_block()
does not give us contiguous pages up to 1MB, then we do not send 1MB -
we only send what we were given.

For example: if we were given 0.5MB of contiguous transfers to perform
and then the next page address is far, far away, then we transmit that
"chunk" immediately without worrying about waiting for anything else.
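To make that concrete, here is a small sketch (not the patch's code; the
helper names and the use of the 1MB constant are assumptions for
illustration) of how the three shift constants quoted above can pack a
64-bit work-request id, and of the kind of contiguity test behind the
"mergable" logic:

/*
 * Illustrative sketch only -- not the code from the patch. It shows how
 * the shift constants quoted above can pack the transfer type, RAM block
 * index and chunk index into a single wr_id, and how a contiguity check
 * decides whether a page extends the pending write or the pending write
 * is flushed as-is. Helper names are assumptions made for the example.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RDMA_WRID_TYPE_SHIFT   0UL
#define RDMA_WRID_BLOCK_SHIFT 16UL
#define RDMA_WRID_CHUNK_SHIFT 30UL

/* Pack type/block/chunk so a completion handler can recover them from
 * the wr_id it gets back from the completion queue. */
static uint64_t make_wrid(uint64_t type, uint64_t block, uint64_t chunk)
{
    return (chunk << RDMA_WRID_CHUNK_SHIFT) |
           (block << RDMA_WRID_BLOCK_SHIFT) |
           (type  << RDMA_WRID_TYPE_SHIFT);
}

/* A new page only extends the pending write if it is contiguous with it
 * and keeps the write within one chunk (1MB here); otherwise the pending
 * write goes out immediately, however small it is. */
static bool write_is_mergeable(uintptr_t pending_end, size_t pending_len,
                               uintptr_t next_page, size_t page_len)
{
    const size_t chunk_size = 1UL << 20;   /* 1MB chunks */

    return next_page == pending_end &&
           pending_len + page_len <= chunk_size;
}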
>
>> +/*
>> + * RDMA requires memory registration (mlock/pinning), but this is not good for
>> + * overcommitment.
>> + *
>> + * In preparation for the future where LRU information or workload-specific
>> + * writable working set memory access behavior is available to QEMU,
>> + * it would be nice to have in place the ability to UN-register/UN-pin
>> + * particular memory regions from the RDMA hardware when it is determined that
>> + * those regions of memory will likely not be accessed again in the near future.
>> + *
>> + * While we do not yet have such information right now, the following
>> + * compile-time option allows us to perform a non-optimized version of this
>> + * behavior.
>> + *
>> + * By uncommenting this option, you will cause *all* RDMA transfers to be
>> + * unregistered immediately after the transfer completes on both sides of the
>> + * connection. This has no effect in 'rdma-pin-all' mode, only regular mode.
>> + *
>> + * This will have a terrible impact on migration performance, so until future
>> + * workload information or LRU information is available, do not attempt to use
>> + * this feature except for basic testing.
>> + */
>> +//#define RDMA_UNREGISTRATION_EXAMPLE
>> +
>> +/*
>> + * Perform a non-optimized memory unregistration after every transfer
>> + * for demonstration purposes, only if pin-all is not requested.
>> + *
>> + * Potential optimizations:
>> + * 1. Start a new thread to run this function continuously
>> +        - for bit clearing
>> +        - and for receipt of unregister messages
>> + * 2. Use an LRU.
>> + * 3. Use workload hints.
>> + */
> I think that if we work with chunks large enough, in such a way that the time to pass
> the pages between hosts is much bigger than the MR registration/unregistration, it may solve
> the issue at least partially (it will not be such an impact).
> After that, optimization is a good idea.
>
> Nice work!
> Marcel

Agreed. Thank you.

- Michael
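A minimal sketch of the unregister-after-transfer behavior that the
RDMA_UNREGISTRATION_EXAMPLE comment quoted above describes; the struct
and function names are assumptions for illustration, not the patch's own
code:

/*
 * Illustrative sketch only. Once the RDMA write against a chunk has
 * completed, immediately drop that chunk's memory registration instead
 * of leaving its pages pinned. This is the deliberately non-optimized
 * behavior the comment warns about.
 */
#include <infiniband/verbs.h>
#include <stdio.h>

struct chunk_registration {
    struct ibv_mr *mr;        /* registration covering one chunk */
    int            in_flight; /* an RDMA write against it was posted */
};

/*
 * Call after the completion for this chunk has been reaped from the
 * completion queue (via ibv_poll_cq() elsewhere in the send path).
 */
static void unregister_after_transfer(struct chunk_registration *reg)
{
    if (!reg->in_flight || !reg->mr) {
        return;
    }

    /*
     * Unpins the chunk. The next write that touches it must pay for a
     * fresh ibv_reg_mr(), which is exactly the cost that the thread /
     * LRU / workload-hint ideas above are meant to hide.
     */
    if (ibv_dereg_mr(reg->mr)) {
        fprintf(stderr, "failed to unregister chunk\n");
        return;
    }

    reg->mr = NULL;
    reg->in_flight = 0;
}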