From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Jun 2011 16:18:56 +0800
From: Zhi Yong Wu
Message-ID: <20110602081856.GM18832@f12.cn.ibm.com>
References: <20110530050923.GF18832@f12.cn.ibm.com> <1306995426.2785.6.camel@lappy> <20110602062928.GL18832@f12.cn.ibm.com> <1306998902.2785.20.camel@lappy>
In-Reply-To: <1306998902.2785.20.camel@lappy>
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
To: Sasha Levin
Cc: kwolf@redhat.com, aliguori@us.ibm.com, stefanha@linux.vnet.ibm.com,
    kvm@vger.kernel.org, guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org,
    wuzhy@cn.ibm.com, luowenj@cn.ibm.com

On Thu, Jun 02, 2011 at 10:15:02AM +0300, Sasha Levin wrote:
>On Thu, 2011-06-02 at 14:29 +0800, Zhi Yong Wu wrote:
>> On Thu, Jun 02, 2011 at 09:17:06AM +0300, Sasha Levin wrote:
>> >Hi,
>> >
>> >On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
>> >> Hello, all,
>> >>
>> >> I am preparing to work on a feature called "Disk I/O limits" for the qemu-kvm project.
>> >> This feature will enable the user to cap the disk I/O amount performed by a VM. It is important when some storage resource is shared among multiple VMs: as you know, if some VMs are doing excessive disk I/O, they will hurt the performance of the other VMs.
>> >>
>> >> More detail is available here:
>> >> http://wiki.qemu.org/Features/DiskIOLimits
>> >>
>> >> 1.) Why we need per-drive disk I/O limits
>> >> On Linux, the cgroup blkio-controller already supports I/O throttling on block devices. However, there is no single mechanism for disk I/O throttling across all underlying storage types (image file, LVM, NFS, Ceph), and for some types there is no way to throttle at all.
>> >>
>> >> The disk I/O limits feature introduces QEMU block layer I/O limits together with command-line and QMP interfaces for configuring them. This allows I/O limits to be imposed across all underlying storage types through a single interface.
>> >>
>> >> 2.) How disk I/O limits will be implemented
>> >> The QEMU block layer will introduce a per-drive disk I/O request queue for those disks whose "disk I/O limits" feature is enabled. It can control disk I/O limits individually for each disk when multiple disks are attached to a VM, enabling use cases like unlimited local disk access combined with rate-limited shared storage access.
>> >> In a multiple-I/O-thread scenario, when an application in a VM issues a block I/O request, the request will be intercepted by the QEMU block layer, which will calculate the drive's runtime I/O rate and determine whether it has gone beyond its limits. If so, the request will be enqueued on the introduced queue; otherwise it will be serviced immediately.
>> >>
>> >> 3.) How the users enable and play with it
>> >> The QEMU -drive option will be extended so that disk I/O limits can be specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx] or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When such an argument is specified, the "disk I/O limits" feature is enabled for that drive.
>> >> The feature will also provide users with the ability to change per-drive disk I/O limits at runtime using QMP commands.
>> >
>> >I'm wondering if you've considered adding a 'burst' parameter -
>> >something which will not limit (or limit less) the io ops or the
>> >throughput for the first 'x' ms in a given time window.
>> Currently no. Could you let us know in what scenario it would make sense?
>
>My assumption is that most guests are not doing constant disk I/O
>access. Instead, the operations are usually short and happen on a small
>scale (a relatively small amount of bytes accessed).
>
>For example: multiple-table DB lookups, serving a website, file servers.
>
>Basically, if I need to do a DB lookup which needs 50MB of data from a
>disk which is limited to 10MB/s, I'd rather let it burst for 1 second
>and complete the lookup faster instead of having it read data for 5
>seconds.
>
>If the guest now starts running multiple lookups one after the other,
>that's when I would like to limit.

Hi, Sasha,

If the iops or bps parameters are not specified for -drive, the disk's I/O rate will not be limited at all. Of course, the QMP commands will be extended to support changing or disabling disk I/O limits at runtime, so if you'd like a disk's I/O rate not to be limited, you can use them to disable this feature. I'm not sure whether this answers your question.

Regards,

Zhiyong Wu

>> Regards,
>>
>> Zhiyong Wu
>
>--
>
>Sasha.
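For what it's worth, the per-drive rate check described in section 2 could be sketched roughly as below. This is only a simplified, hypothetical illustration of a windowed iops/bps check, not the actual QEMU implementation; the names (ThrottleState, throttle_allow, SLICE_NS) are made up for the example.

```c
/* Hypothetical sketch of the per-drive I/O limit check: a request is
 * serviced immediately while the drive stays under its configured
 * iops/bps budget for the current one-second accounting slice;
 * otherwise it would be placed on the per-drive request queue and
 * dispatched later. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t bps_limit;      /* bytes per second, 0 = unlimited */
    uint64_t iops_limit;     /* requests per second, 0 = unlimited */
    uint64_t slice_start_ns; /* start of the current accounting slice */
    uint64_t bytes_done;     /* bytes serviced in this slice */
    uint64_t ios_done;       /* requests serviced in this slice */
} ThrottleState;

#define SLICE_NS 1000000000ULL /* 1 second accounting window */

/* Returns true if the request may be serviced now; false means it
 * would be queued on the per-drive request queue. */
bool throttle_allow(ThrottleState *ts, uint64_t now_ns, uint64_t bytes)
{
    if (now_ns - ts->slice_start_ns >= SLICE_NS) {
        /* New slice: reset the counters. */
        ts->slice_start_ns = now_ns;
        ts->bytes_done = 0;
        ts->ios_done = 0;
    }
    if (ts->bps_limit && ts->bytes_done + bytes > ts->bps_limit) {
        return false;
    }
    if (ts->iops_limit && ts->ios_done + 1 > ts->iops_limit) {
        return false;
    }
    ts->bytes_done += bytes;
    ts->ios_done += 1;
    return true;
}
```

A 'burst' parameter as Sasha suggests could then be a per-slice allowance on top of bps_limit/iops_limit that is only granted when the previous slices were idle.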