From: Paolo Bonzini
Date: Tue, 19 Mar 2013 14:25:07 +0100
To: "Michael R. Hines"
Cc: aliguori@us.ibm.com, mst@redhat.com, qemu-devel@nongnu.org,
    owasserm@redhat.com, abali@us.ibm.com, mrhines@us.ibm.com, gokul@us.ibm.com
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA
Message-ID: <51486733.7060207@redhat.com>
In-Reply-To: <5148643F.2070401@linux.vnet.ibm.com>

On 19/03/2013 14:12, Michael R. Hines wrote:
> On 03/19/2013 05:18 AM, Paolo Bonzini wrote:
>> On 18/03/2013 21:33, Michael R. Hines wrote:
>>>> +int qemu_drain(QEMUFile *f)
>>>> +{
>>>> +    return f->ops->drain ? f->ops->drain(f->opaque) : 0;
>>>> +}
>>>>
>>>> Hmm, this is very similar to qemu_fflush, but not quite. :/
>>>>
>>>> Why exactly is this needed?
>>> Good idea - I'll replace drain with flush once I've added
>>> the "qemu_file_ops_are(const QEMUFile *, const QEMUFileOps *)"
>>> that you recommended.
>> If I understand correctly, the problem is that save_rdma_page is
>> asynchronous and you have to wait for pending operations to do the
>> put_buffer protocol correctly.
>>
>> Would it work to just do the "drain" in the put_buffer operation, if and
>> only if it was preceded by a save_rdma_page operation?
>
> Yes, the drain needs to happen in a few places already:
>
> 1. During save_rdma_page (if the current "chunk" is full of pages)

Ok, this is internal to RDMA, so no problem.

> 2. During the end of each iteration (now using qemu_fflush in my current
> patch)

Why?

> 3. And also during qemu_savevm_state_complete(), also using qemu_fflush.

This would be caught by put_buffer, but (2) would not.
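To make that idea concrete, here is a rough sketch of a put_buffer that drains only when an asynchronous save_rdma_page preceded it; all type and function names below are hypothetical, not taken from the patch:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical state, for illustration only. */
typedef struct RDMAState {
    bool pending_rdma_writes;          /* set by save_rdma_page() */
    int (*drain)(struct RDMAState *s); /* waits for outstanding chunks */
} RDMAState;

static int rdma_put_buffer(RDMAState *s, const uint8_t *buf, size_t size)
{
    /* Drain lazily: only a buffered write that follows asynchronous
     * RDMA page writes has to wait for them to complete. */
    if (s->pending_rdma_writes) {
        int ret = s->drain(s);
        if (ret < 0) {
            return ret;
        }
        s->pending_rdma_writes = false;
    }

    /* ... the ordinary put_buffer protocol continues here ... */
    (void)buf;
    (void)size;
    return 0;
}

With that, cases (1) and (3) above are covered without a separate qemu_drain entry point; only case (2) would still need an explicit flush.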
>>>>>  /** Flushes QEMUFile buffer
>>>>>   *
>>>>>   */
>>>>> @@ -723,6 +867,8 @@ int qemu_get_byte(QEMUFile *f)
>>>>>  int64_t qemu_ftell(QEMUFile *f)
>>>>>  {
>>>>>      qemu_fflush(f);
>>>>> +    if(migrate_use_rdma(f))
>>>>> +        return delta_norm_mig_bytes_transferred();
>>>> Not needed, and another undesirable dependency (savevm.c ->
>>>> arch_init.c).  Just update f->pos in save_rdma_page.
>>> f->pos isn't good enough because save_rdma_page does not
>>> go through QEMUFile directly - only non-live state goes
>>> through QEMUFile; pc.ram uses direct RDMA writes.
>>>
>>> As a result, the position pointer does not get updated
>>> and the accounting is missed.
>> Yes, I am suggesting to modify f->pos in save_rdma_page instead.
>>
>> Paolo
>
> Would that not confuse the other QEMUFile users?
> If I change that pointer (without actually putting bytes
> into QEMUFile), won't the f->pos pointer be
> incorrectly updated?

f->pos is never used directly by QEMUFile; it is almost an opaque
value.  It is accumulated on every qemu_fflush (so that it can be
passed to the ->put_buffer function) and returned by qemu_ftell;
nothing else.

If you somehow make save_rdma_page a new op, returning a value from
that op and adding it to f->pos would be a good way to achieve this.

Paolo
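A minimal sketch of such an op follows; the names and signatures are hypothetical, and the real hook may well look different:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: save_rdma_page becomes a QEMUFile op that reports
 * how many bytes it transferred, and the caller adds that to f->pos so
 * qemu_ftell() stays accurate without reaching into arch_init.c. */

typedef struct QEMUFileOps {
    /* Returns bytes written asynchronously, or a negative error. */
    int64_t (*save_page)(void *opaque, uint64_t block_offset,
                         uint64_t offset, size_t size);
} QEMUFileOps;

typedef struct QEMUFile {
    const QEMUFileOps *ops;
    void *opaque;
    int64_t pos;    /* accumulated by qemu_fflush() and here; read by qemu_ftell() */
} QEMUFile;

int64_t qemu_save_page(QEMUFile *f, uint64_t block_offset,
                       uint64_t offset, size_t size)
{
    int64_t ret;

    if (!f->ops->save_page) {
        return -1;      /* caller falls back to the buffered path */
    }
    ret = f->ops->save_page(f->opaque, block_offset, offset, size);
    if (ret > 0) {
        f->pos += ret;  /* keeps the qemu_ftell() accounting correct */
    }
    return ret;
}

qemu_ftell() would then need no RDMA-specific branch: it just flushes and returns f->pos as before.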