From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5148AE2F.2030206@linux.vnet.ibm.com>
Date: Tue, 19 Mar 2013 14:27:59 -0400
From: "Michael R. Hines"
MIME-Version: 1.0
References: <1363576743-6146-1-git-send-email-mrhines@linux.vnet.ibm.com> <1363576743-6146-9-git-send-email-mrhines@linux.vnet.ibm.com> <5146D9BF.3030407@redhat.com> <51477A26.8090600@linux.vnet.ibm.com> <51482D78.3010301@redhat.com> <5148643F.2070401@linux.vnet.ibm.com> <51486733.7060207@redhat.com> <51486AD0.80309@linux.vnet.ibm.com> <51486C0D.2040609@redhat.com> <514871C2.5020108@linux.vnet.ibm.com> <5148748C.5050509@redhat.com>
In-Reply-To: <5148748C.5050509@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA
To: Paolo Bonzini
Cc: aliguori@us.ibm.com, mst@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com

On 03/19/2013 10:22 AM, Paolo Bonzini wrote:
> On 19/03/2013 15:10, Michael R. Hines wrote:
>> On 03/19/2013 09:45 AM, Paolo Bonzini wrote:
>>> This is because of downtime: You have to drain the queue anyway at the
>>> very end, and if you don't drain it in advance after each iteration,
>>> then the queue will have lots of bytes in it waiting for transmission
>>> and the Virtual Machine will be stopped for a much longer period of
>>> time during the last iteration, waiting for the RDMA card to finish
>>> transmission of all those bytes.
>>>
>>> Shouldn't the "current chunk full" case take care of it too?
>>>
>>> Of course if you disable chunking you have to add a different
>>> condition, perhaps directly into save_rdma_page.
>> No, we don't want to flush on "chunk full" - that has a different
>> meaning. We want to have as many chunks submitted to the hardware for
>> transmission as possible to keep the bytes moving.
> That however gives me an idea...
> Instead of the full drain at the end of an iteration, does it make
> sense to do a "partial" drain at every chunk full, so that you don't
> have > N bytes pending and the downtime is correspondingly limited?

Sure, you could do that, but it seems overly complex just to avoid a
single flush() call at the end of each iteration, right?

> If there is no RAM migration in flight. So you have
>
> migrate RAM
> ...
> RAM migration finished, device migration start
> put_buffer   <<<<< QEMUFileRDMA triggers drain
> put_buffer
> put_buffer
> put_buffer
> ...

Ah, yes, ok. Very simple modification...