From: "Michael R. Hines"
Date: Fri, 21 Feb 2014 12:54:16 +0800
Subject: Re: [Qemu-devel] [RFC PATCH v2 01/12] mc: add documentation for micro-checkpointing
To: "Dr. David Alan Gilbert"
Cc: SADEKJ@il.ibm.com, quintela@redhat.com, hinesmr@cn.ibm.com, qemu-devel@nongnu.org, EREZH@il.ibm.com, owasserm@redhat.com, junqing.wang@cs2c.com.cn, onom@us.ibm.com, abali@us.ibm.com, lig.fnst@cn.fujitsu.com, gokul@us.ibm.com, dbulkow@gmail.com, pbonzini@redhat.com, BIRAN@il.ibm.com, isaku.yamahata@gmail.com
Message-ID: <5306DBF8.3060204@linux.vnet.ibm.com>
In-Reply-To: <20140220163255.GC2437@work-vm>

On 02/21/2014 12:32 AM, Dr. David Alan Gilbert wrote:
>
> I'm happy to use more memory to get FT, all I'm trying to do is see
> if it's possible to put a lower bound than 2x on it while still
> maintaining full FT, at the expense of performance in the case where
> it uses a lot of memory.
>
>> The bottom line is: if you put a *hard* constraint on memory usage,
>> what will happen to the guest when that garbage collection you mentioned
>> shows up later and runs for several minutes? How about an hour?
>> Are we just going to block the guest from being allowed to start a
>> checkpoint until the memory usage goes down, just for the sake of
>> avoiding the 2x memory usage?
> Yes, or move to the next checkpoint sooner than the N milliseconds when
> we see the buffer is getting full.
OK, I see there is definitely some common ground there. To be more
specific, what we really need is two things (I've learned that the
reviewers are very cautious about adding too much policy into QEMU
itself, but let's iron this out anyway):

1. First, we need to throttle down the guest (QEMU can already do this
   using the recently introduced "auto-converge" feature). This means
   that the guest is still making forward progress, albeit slow progress.

2. Then we need some kind of policy, or better yet a trigger, that does
   something to the effect of "we're about to use a whole lot of
   checkpoint memory soon - can we afford this much memory usage?"

Such a trigger would be conditional on the current policy of the
administrator or management software: we would have a QMP command with
a boolean flag that says "Yes" or "No", it's tolerable or not to use
that much memory in the next checkpoint. If the answer is "Yes", then
nothing changes. If the answer is "No", then we should either:

a) throttle down the guest,
b) adjust the checkpoint frequency, or
c) pause it altogether while we migrate some other VMs off the host,
   such that we can complete the next checkpoint in its entirety.

It's not clear to me how much (if any) of this control loop should be
in QEMU rather than in the management software, but I would definitely
agree that, at a minimum, the ability to detect the situation and
remedy it should be in QEMU. I'm not entirely convinced that the
ability to *decide* to remedy the situation should be in QEMU, though.

>
>> If you block the guest from being checkpointed,
>> then what happens if there is a failure during that extended period?
>> We will have saved memory at the expense of availability.
> If the active machine fails during this time then the secondary carries
> on from its last good snapshot in the knowledge that the active
> never finished the new snapshot and so never uncorked its previous
> packets.
>
> If the secondary machine fails during this time then the active drops
> its nascent snapshot and carries on.

Yes, that makes sense. Where would that policy go, though, continuing
the above concern?

> However, what you have made me realise is that I don't have an answer
> for the memory usage on the secondary; while the primary can pause
> its guest until the secondary acks the checkpoint, the secondary has
> to rely on the primary not to send it huge checkpoints.

Good question: there are a lot of ideas out there in the academic
community to compress the secondary, or push the secondary to a
flash-based device, or de-duplicate the secondary. I'm sure any of them
would put a dent in the problem, but I'm not seeing a smoking-gun
solution that would absolutely save all that memory completely.
(Personally, I don't believe in swap. I wouldn't even consider swap or
any kind of traditional disk-based remedy to be a viable solution.)

>> The customer that is expecting 100% fault tolerance and the provider
>> who is supporting it need to have an understanding that fault tolerance
>> is not free and that constraining memory usage will adversely affect
>> the VM's ability to be protected.
>>
>> Do I understand your expectations correctly? Is fault tolerance
>> something you're willing to sacrifice?
> As above, no: I'm willing to sacrifice performance but not fault
> tolerance. (It is entirely possible that others would want the other
> trade-off, i.e. some minimum performance is worse than useless, so if
> we can't maintain that performance then dropping FT leaves us in a
> more-working position.)

Agreed - I think a "proactive" failover in this case would solve the
problem. If we observed that availability/fault tolerance was going to
be at risk soon (which is relatively easy to detect), we could just
*force* a failover to the secondary host and restart the protection
from scratch.
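To make the shape of this concrete, here is a minimal sketch of the
escalation policy discussed above (management veto, then throttle /
adjust checkpoint frequency / pause, with proactive failover when fault
tolerance itself is at risk). Every name in it is hypothetical - none
of this is an existing QEMU or QMP interface; it only illustrates how
the decision could be structured, and the 2x/4x escalation thresholds
are arbitrary placeholders:

```python
# Hypothetical sketch of the checkpoint-memory control loop discussed
# in this thread. No QEMU/QMP code is reused; all names are invented.

from enum import Enum

class Action(Enum):
    PROCEED = "proceed"            # checkpoint as normal
    THROTTLE = "throttle"          # (a) slow the guest (cf. auto-converge)
    LOWER_FREQ = "lower-frequency" # (b) checkpoint less often
    PAUSE = "pause"                # (c) pause while VMs are migrated away
    FAILOVER = "failover"          # proactive failover, restart protection

def next_action(est_ckpt_bytes, budget_bytes, mgmt_approves, ft_at_risk):
    """Decide what to do before starting the next checkpoint.

    mgmt_approves stands in for the proposed QMP yes/no query to the
    management layer; ft_at_risk models the "availability is about to
    be lost" condition that justifies a proactive failover.
    """
    if ft_at_risk:
        return Action.FAILOVER
    if est_ckpt_bytes <= budget_bytes or mgmt_approves:
        return Action.PROCEED
    # Escalate gently: throttle while the overshoot is small, back off
    # the checkpoint interval next, and pause only as a last resort.
    if est_ckpt_bytes <= 2 * budget_bytes:
        return Action.THROTTLE
    if est_ckpt_bytes <= 4 * budget_bytes:
        return Action.LOWER_FREQ
    return Action.PAUSE
```

Whether `next_action` runs inside QEMU or in the management layer is
exactly the open question above; the function boundary is the same
either way.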
>>
>> Well, that's simple: if there is a failure of the source, the
>> destination will simply revert to the previous checkpoint using the
>> same mode of operation. The lost ACKs that you're curious about only
>> apply to the checkpoint that is in progress. Just because a
>> checkpoint is in progress does not mean that the previous checkpoint
>> is thrown away - it is already loaded into the destination's memory
>> and ready to be activated.
> I still don't see why, if the link between them fails, the destination
> doesn't fall back to its previous checkpoint AND the source carries
> on running - I don't see how they can differentiate which of them has
> failed.

I think you're forgetting that the source I/O is buffered - it doesn't
matter that the source VM is still running. As long as its output is
buffered, it cannot have any non-fault-tolerant effect on the outside
world. In the future, if a technician accesses the machine or the
network is restored, the management software can terminate the stale
source virtual machine.

>> We have a script architecture (not on github) which runs MC in a
>> tight loop hundreds of times, kills the source QEMU, and timestamps
>> how quickly the destination QEMU loses the TCP socket connection and
>> receives an error code from the kernel - every single time, the
>> destination resumes nearly instantaneously. I've not empirically seen
>> a case where the socket just hangs or doesn't change state.
>>
>> I'm not very familiar with the internal linux TCP/IP stack
>> implementation itself, but I have not had a problem with the
>> dependability of the linux socket not being able to shut down the
>> socket as soon as possible.
> OK, that only covers a very small range of normal failures.
> When you kill the destination QEMU the host OS knows that QEMU is dead
> and sends a packet back closing the socket, hence the source knows
> the destination is dead very quickly.
> If:
> a) The destination machine was to lose power or hang
> b) Or a network link fail (other than the one attached to the source,
>    possibly)
>
> the source would have to do a full TCP timeout.
>
> To test a, b I'd use an iptables rule somewhere to cause the packets
> to be dropped (not rejected). Stopping the qemu in gdb might be good
> enough.

Very good idea - I'll add that to the "todo" list of things to do in my
test infrastructure. It may indeed turn out to be necessary to add a
formal keepalive between the source and destination.

- Michael
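P.S. A self-contained sketch of the two failure modes above (this is
not the test harness described earlier in the thread, just an
illustration, and the keepalive socket options are Linux-specific): it
times how quickly a peer's abrupt death is observed via the kernel's
close of the dead process's socket, and shows the TCP keepalive knobs
that would catch the silent power-loss/dropped-packet case without
waiting for the full multi-minute TCP timeout.

```python
# Sketch only: fast detection of a killed peer, plus TCP keepalive for
# the silent-failure case (a/b above). Unix/Linux only (fork, TCP_KEEP*).
import os
import signal
import socket
import time

def enable_keepalive(sock, idle=5, interval=2, probes=3):
    """Linux-specific: after `idle`s of silence send a probe every
    `interval`s; the connection errors out after `probes` missed probes,
    standing in for the "formal keepalive" mentioned above."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

def time_abrupt_close():
    """Kill a child process holding the other end of a TCP connection
    and time how long recv() takes to report the peer's death."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    pid = os.fork()
    if pid == 0:                          # child: plays the source QEMU
        socket.create_connection(("127.0.0.1", port))
        signal.pause()                    # hold the socket open until killed
        os._exit(0)
    conn, _ = srv.accept()
    enable_keepalive(conn)                # covers the silent-failure case
    time.sleep(0.1)                       # let the child settle
    start = time.monotonic()
    os.kill(pid, signal.SIGKILL)          # "kill the source QEMU"
    try:
        peer_gone = (conn.recv(1) == b"")   # kernel's FIN arrives promptly
    except ConnectionResetError:
        peer_gone = True
    elapsed = time.monotonic() - start
    os.waitpid(pid, 0)
    conn.close()
    srv.close()
    return peer_gone, elapsed
```

The abrupt-kill path resolves almost instantly, matching the tight-loop
measurements described above; it is only the power-loss / dropped-packet
cases that need the keepalive (or an iptables DROP rule to reproduce
them in testing).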