Date: Mon, 03 Mar 2014 14:08:47 +0800
From: "Michael R. Hines"
To: "Dr. David Alan Gilbert"
Cc: SADEKJ@il.ibm.com, pbonzini@redhat.com, quintela@redhat.com,
    BIRAN@il.ibm.com, qemu-devel@nongnu.org, EREZH@il.ibm.com,
    owasserm@redhat.com, onom@us.ibm.com, hinesmr@cn.ibm.com,
    isaku.yamahata@gmail.com, gokul@us.ibm.com, dbulkow@gmail.com,
    junqing.wang@cs2c.com.cn, abali@us.ibm.com, lig.fnst@cn.fujitsu.com
Subject: Re: [Qemu-devel] [RFC PATCH v2 01/12] mc: add documentation for
 micro-checkpointing
Message-ID: <53141C6F.6050603@linux.vnet.ibm.com>
In-Reply-To: <20140221094433.GA2483@work-vm>

On 02/21/2014 05:44 PM, Dr. David Alan Gilbert wrote:
>> It's not clear to me how much (or any) of this control loop should
>> be in QEMU or in the management software, but I would definitely
>> agree that a minimum of at least the ability to detect the situation
>> and remedy the situation should be in QEMU. I'm not entirely
>> convinced that the ability to *decide* to remedy the situation
>> should be in QEMU, though.
> The management software access is low frequency, high latency; it
> should be setting general parameters (max memory allowed, desired
> checkpoint frequency, etc.), but I don't see that we can use it to do
> anything on sooner than a few-second basis; so yes, it can monitor
> things and tweak the knobs if it sees the host as a whole is getting
> tight on RAM, etc. - but we can't rely on it to throw on the brakes
> if this guest suddenly decides to take bucketloads of RAM; something
> has to react quickly in relation to previously set limits.

I agree - the boolean flag I mentioned previously would do just that:
setting the flag (or a state, perhaps, instead of a boolean) would
indicate to QEMU which kind of sacrifice to make:

A flag of "0" might mean "Throttle the guest in an emergency."
A flag of "1" might mean "Throttling is not acceptable; just let the
guest use the extra memory."
A flag of "2" might mean "Neither one is acceptable; fail now and
inform the management software to restart somewhere else."

Or something to that effect.
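To make that concrete, here is a minimal sketch of how QEMU might
consume such a knob when the checkpoint memory limit is exceeded. All
the names, and the mechanism itself, are hypothetical - nothing below
is taken from the patch series:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical policy values, matching the flags described above. */
    typedef enum {
        MC_POLICY_THROTTLE = 0, /* throttle the guest in an emergency */
        MC_POLICY_GROW     = 1, /* let the guest use the extra memory */
        MC_POLICY_FAIL     = 2, /* fail now; management restarts elsewhere */
    } MCMemPolicy;

    /* Invoked when the checkpoint buffer exceeds the limit previously
     * set by the management software. */
    static void mc_memory_limit_exceeded(MCMemPolicy policy)
    {
        switch (policy) {
        case MC_POLICY_THROTTLE:
            printf("throttling guest vCPUs until the buffer drains\n");
            break;
        case MC_POLICY_GROW:
            printf("letting the checkpoint buffer grow past the limit\n");
            break;
        case MC_POLICY_FAIL:
            printf("stopping checkpoints; asking management to restart\n");
            break;
        default:
            abort();
        }
    }

    int main(void)
    {
        mc_memory_limit_exceeded(MC_POLICY_THROTTLE);
        return 0;
    }

The point is that management sets the policy once, up front, and QEMU
acts on it at pressure time with no management round trip.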
>>>> If you block the guest from being checkpointed,
>>>> then what happens if there is a failure during that extended
>>>> period? We will have saved memory at the expense of availability.
>>> If the active machine fails during this time, then the secondary
>>> carries on from its last good snapshot, in the knowledge that the
>>> active never finished the new snapshot and so never uncorked its
>>> previous packets.
>>>
>>> If the secondary machine fails during this time, then the active
>>> drops its nascent snapshot and carries on.
>> Yes, that makes sense. Where would that policy go, though,
>> continuing the above concern?
> I think there has to be some input from the management layer for
> failover, because (as per my split-brain concerns) something has to
> make the decision about which of the source/destination is to take
> over, and I don't believe individual instances have that information.

Agreed - so the "ability" (as hinted at above) should be in QEMU, but
the decision to recover from the situation probably should not be,
where "recover" means that the VM is back in a fully running, fully
fault-tolerant protected state (potentially with the source VM on a
different machine than it was before).

>>>> Well, that's simple: If there is a failure of the source, the
>>>> destination will simply revert to the previous checkpoint using
>>>> the same mode of operation. The lost ACKs that you're curious
>>>> about only apply to the checkpoint that is in progress. Just
>>>> because a checkpoint is in progress does not mean that the
>>>> previous checkpoint is thrown away - it is already loaded into the
>>>> destination's memory and ready to be activated.
>>> I still don't see why, if the link between them fails, the
>>> destination doesn't fall back to its previous checkpoint, AND the
>>> source carries on running - I don't see how they can differentiate
>>> which of them has failed.
>> I think you're forgetting that the source I/O is buffered - it
>> doesn't matter that the source VM is still running. As long as its
>> output is buffered, it cannot have any non-fault-tolerant effect on
>> the outside world.
>>
>> In the future, if a technician accesses the machine or the network
>> is restored, the management software can terminate the stale source
>> virtual machine.
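The output-commit rule that makes this safe boils down to a few lines.
This is only an illustrative sketch (every name below is invented, not
taken from the patches):

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        CHECKPOINT_COMMITTED, /* destination ACKed the in-flight checkpoint */
        DEST_FAILED,          /* link/destination lost, as seen by the source */
        SOURCE_FAILED,        /* link/source lost, as seen by the destination */
    } MCEvent;

    /* On the source: output queued since the last committed checkpoint. */
    static bool output_corked = true;

    static void mc_handle_event(MCEvent ev)
    {
        switch (ev) {
        case CHECKPOINT_COMMITTED:
            /* It is now safe to release the output buffered while the
             * checkpoint was in flight. */
            output_corked = false;
            printf("checkpoint committed; uncorking buffered I/O\n");
            break;
        case DEST_FAILED:
            /* Drop the nascent snapshot. Nothing buffered during it
             * ever reached the outside world, so the source can carry
             * on and wait for management to pick a new destination. */
            output_corked = false;
            printf("destination lost; dropping in-flight snapshot\n");
            break;
        case SOURCE_FAILED:
            /* On the destination: activate the last *complete*
             * checkpoint; the source never uncorked the packets that
             * belonged to the unfinished one. */
            printf("source lost; resuming from last good checkpoint\n");
            break;
        }
    }

    int main(void)
    {
        mc_handle_event(DEST_FAILED);
        return 0;
    }

The invariant is that nothing buffered during an uncommitted
checkpoint ever reaches the outside world, so either side can be
discarded without external observers seeing an inconsistency.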
> I think, going with my comment above, I'm working on the basis that
> it's just as likely for the destination to fail as it is for the
> source to fail, and a destination failure shouldn't kill the source;
> and in the case of a destination failure the source is going to have
> to let its buffered I/Os start going again.

Yes, that's correct, but only after the management software knows
about the failure. If we're on a tightly coupled, fast LAN, there's no
reason to believe that libvirt, for example, would be so slow that we
cannot wait a few extra (tens of?) milliseconds after destination
failure to choose a new destination and restart the previous
checkpoint.

But if management *is* too slow, which is not unlikely, then I think
we should just tell the source to migrate entirely and get out of that
environment. Either way, this isn't something QEMU itself necessarily
needs to worry about - it just needs to know not to explode if the
destination fails, and to wait for instructions on what to do next.

Alternatively, if the administrator "prefers" restarting the fault
tolerance instead of migrating, we could have a QMP command that
specifies a "backup" destination (or even a "duplicate" destination)
that QEMU would automatically know about in the case of destination
failure. But I wouldn't implement something like that until at least a
first version was accepted by the community.
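Purely as a thought experiment - neither the command nor any of these
names exist in the current series - that fallback might amount to
something like:

    #include <stdio.h>

    static char backup_uri[256];

    /* Imagined handler for a QMP command along the lines of
     * "migrate-set-mc-backup" (name invented for this sketch). */
    static void mc_set_backup_destination(const char *uri)
    {
        snprintf(backup_uri, sizeof(backup_uri), "%s", uri);
    }

    /* Called when the current checkpoint destination fails. */
    static void mc_destination_failed(void)
    {
        if (backup_uri[0] != '\0') {
            printf("restarting micro-checkpointing to %s\n", backup_uri);
        } else {
            printf("no backup destination; waiting for management\n");
        }
    }

    int main(void)
    {
        mc_set_backup_destination("tcp:backup-host:6262");
        mc_destination_failed();
        return 0;
    }

- Michael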