From: Anthony Liguori
Date: Tue, 07 Sep 2010 10:00:44 -0500
Subject: Re: [Qemu-devel] QEMU interfaces for image streaming and post-copy block migration
To: Stefan Hajnoczi
Cc: "libvir-list@redhat.com", qemu-devel, Stefan Hajnoczi
Message-ID: <4C86539C.6020302@linux.vnet.ibm.com>
References: <4C864118.7070206@linux.vnet.ibm.com> <4C865160.5030600@linux.vnet.ibm.com>
List-Id: qemu-devel.nongnu.org

On 09/07/2010 09:55 AM, Stefan Hajnoczi wrote:
> On Tue, Sep 7, 2010 at 3:51 PM, Anthony Liguori wrote:
>
>> On 09/07/2010 09:33 AM, Stefan Hajnoczi wrote:
>>
>>> On Tue, Sep 7, 2010 at 2:41 PM, Anthony Liguori wrote:
>>>
>>>> The interface for copy-on-read is just an option within qemu-img
>>>> create.  Streaming, on the other hand, requires a bit more thought.
>>>> Today, I have a monitor command that does the following:
>>>>
>>>>    stream <device> <offset>
>>>>
>>>> which will try to stream the minimal amount of data for a single I/O
>>>> operation and then return how many sectors were successfully streamed.
>>>>
>>>> The idea about how to drive this interface is a loop like:
>>>>
>>>>    offset = 0
>>>>    while offset < image_size:
>>>>        wait_for_idle_time()
>>>>        count = stream(device, offset)
>>>>        offset += count
>>>>
>>>> Obviously, the "wait_for_idle_time()" requires wide system awareness.
>>>> The thing I'm not sure about is whether 1) libvirt would want to
>>>> expose a similar stream interface and let management software
>>>> determine idle time, or 2) it should attempt to detect idle time on
>>>> its own and provide a higher-level interface.  If (2), the question
>>>> then becomes whether we should try to do this within qemu and provide
>>>> libvirt a higher-level interface.
>>>>
>>> A self-tuning solution is attractive because it reduces the need for
>>> other components (management stack) or the user to get involved.  In
>>> this case self-tuning should be possible.  We need to detect periods
>>> of I/O inactivity, for example by tracking the number of in-flight
>>> requests and setting a grace timer when it reaches zero.  When the
>>> grace timer expires, we start streaming until the guest initiates
>>> I/O again.
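A minimal sketch of that grace-timer scheme, in the same pseudocode spirit
as the loop above (the StreamDriver class, the request_started and
request_completed hooks, and the 0.05s threshold are illustrative
assumptions, not existing QEMU code):

    import threading

    class StreamDriver:
        GRACE_SECS = 0.05  # assumed idle threshold before streaming resumes

        def __init__(self, device, image_size, stream):
            self.device = device
            self.image_size = image_size
            self.stream = stream      # the incremental stream(device, offset) op
            self.offset = 0
            self.in_flight = 0
            self.timer = None
            self.lock = threading.Lock()

        def request_started(self):    # hook: guest submitted an I/O request
            with self.lock:
                self.in_flight += 1
                if self.timer is not None:
                    self.timer.cancel()   # guest is busy again; hold off
                    self.timer = None

        def request_completed(self):  # hook: a guest I/O request finished
            with self.lock:
                self.in_flight -= 1
                if self.in_flight == 0:   # idle: arm the grace timer
                    self.timer = threading.Timer(self.GRACE_SECS,
                                                 self._stream_while_idle)
                    self.timer.start()

        def _stream_while_idle(self):
            # Stream until the guest initiates I/O again or the image is done.
            while self.in_flight == 0 and self.offset < self.image_size:
                self.offset += self.stream(self.device, self.offset)

The point of the scheme is that streaming is scheduled entirely off the
guest's own I/O completions, so no outside component has to be consulted.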
>> That detects idle I/O within a single QEMU guest, but you might have
>> another guest running that's I/O bound, which means that from an
>> overall system throughput perspective, you really don't want to stream.
>>
>> I think libvirt might be able to do a better job here by looking at
>> overall system I/O usage.  But I'm not sure, hence this RFC :-)
>>
> Isn't this what the block I/O controller cgroup is meant to solve?  If
> you give vm-1 50% of the block bandwidth and vm-2 50% of the block
> bandwidth, then vm-1 can do streaming without eating into vm-2's
> guaranteed bandwidth.
>

That assumes you're capping I/O.  But sometimes you care about overall
system throughput more than you care about any individual VM.

Another way to look at it: a user has a cron job that runs at midnight
and starts streaming as necessary.  However, the user wants to be able to
interrupt the streaming should there be a sudden demand.  If the user
drives the streaming through an interface like the one I've specified,
they're in full control.  It's pretty simple to build interfaces on top
of this that implement streaming as an aggressive or conservative
background task, too.

> Also, I'm not sure we should worry about the priority of the I/O too
> much: perhaps the user wants their vm to stream more than they want an
> unimportant local vm that is currently I/O bound to have all resources
> to itself.  So I think it makes sense to defer this and not try for
> system-wide knowledge inside a QEMU process.
>

Right, so that argues for an incremental interface like the one I started
with :-)

BTW, this whole discussion is also relevant for other background tasks
like online defragmentation, so keep that use-case in mind too.

Regards,

Anthony Liguori

> Stefan
>
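For the midnight-cron scenario above, a management-side driver of the
incremental interface might look like the following sketch (stream stands
in for the monitor command; should_stop and is_idle are hypothetical
policy hooks left to the caller, since the thread deliberately leaves
that policy outside QEMU):

    import time

    def stream_job(stream, device, image_size, should_stop, is_idle):
        # Drive the incremental stream command to completion, yielding
        # to guest demand.  Returns the offset reached so a later run
        # can pick up where this one stopped.
        offset = 0
        while offset < image_size:
            if should_stop():        # sudden demand: stop streaming
                break
            if not is_idle():        # system busy: back off, then re-check
                time.sleep(1)
                continue
            offset += stream(device, offset)
        return offset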