From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4A379504.4080100@us.ibm.com>
Date: Tue, 16 Jun 2009 07:50:12 -0500
From: Anthony Liguori
MIME-Version: 1.0
References: <4A36B025.2080602@us.ibm.com> <4A37618E.6040606@redhat.com>
In-Reply-To: <4A37618E.6040606@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: Live migration broken when under heavy IO
List-Id: qemu-devel.nongnu.org
To: Avi Kivity
Cc: qemu-devel@nongnu.org, kvm-devel

Avi Kivity wrote:
>> Does anyone have a clever idea how to fix this without just waiting
>> for all IO requests to complete?
>
> What's wrong with waiting for requests to complete? It should take a
> few tens of milliseconds.

An alternative would be to attempt to cancel the outstanding requests
(rough sketch at the end of this mail). Cancellation incurs no
non-deterministic latency. The tricky bit is that it has to happen at
the device layer, because the callback opaques cannot be saved in any
meaningful way.

> We could start throttling requests late in the live stage, but I don't
> really see the point.
>
> Isn't virtio migration currently broken due to the qdev changes?

Regards,

Anthony Liguori
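
P.S. For concreteness, here is a rough, untested sketch of the kind of
device-level cancellation I mean. All of the request bookkeeping here
(MyDevReq, in_flight, mydev_cancel_in_flight) is made up for
illustration; bdrv_aio_cancel() is the only real block-layer call:

#include "qemu-common.h"
#include "block.h"

typedef struct MyDevReq {
    BlockDriverAIOCB *acb;      /* handle from bdrv_aio_readv/writev */
    struct MyDevReq *next;
    /* guest-visible state needed to resubmit the request lives here */
} MyDevReq;

static MyDevReq *in_flight;     /* per-device list of pending requests */

/* Run from the device's save path: cancel (or let complete) every
 * request that is still outstanding, so that no callback opaques have
 * to be serialized. The destination rebuilds its queue from
 * guest-visible state instead of migrating requests in flight. */
static void mydev_cancel_in_flight(void)
{
    while (in_flight) {
        MyDevReq *req = in_flight;
        in_flight = req->next;
        /* bdrv_aio_cancel() may synchronously complete the request
         * instead of cancelling it; this sketch assumes the
         * completion callback does not free req itself. */
        bdrv_aio_cancel(req->acb);
        qemu_free(req);
    }
}

The point is that nothing about an in-flight request needs to go over
the wire; whatever the guest can still see gets resubmitted on the
destination side.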