From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 2 Jun 2011 14:29:29 +0800
From: Zhi Yong Wu
To: Sasha Levin
Cc: kwolf@redhat.com, aliguori@us.ibm.com, herbert@gondor.apana.org.au,
	kvm@vger.kernel.org, guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org,
	wuzhy@cn.ibm.com, luowenj@cn.ibm.com, zhanx@cn.ibm.com, zhaoyang@cn.ibm.com,
	llim@redhat.com, raharper@us.ibm.com, vgoyal@redhat.com, stefanha@linux.vnet.ibm.com
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Message-ID: <20110602062928.GL18832@f12.cn.ibm.com>
References: <20110530050923.GF18832@f12.cn.ibm.com> <1306995426.2785.6.camel@lappy>
In-Reply-To: <1306995426.2785.6.camel@lappy>

On Thu, Jun 02, 2011 at 09:17:06AM +0300, Sasha Levin wrote:
>Hi,
>
>On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
>> Hello, all,
>>
>> I have prepared to work on a feature called "Disk I/O limits" for the qemu-kvm project.
>> This feature will enable the user to cap the amount of disk I/O performed by a VM. It is
>> important when storage resources are shared among multiple VMs. As you know, if some VMs do
>> excessive disk I/O, they will hurt the performance of the other VMs.
>>
>> More detail is available here:
>> http://wiki.qemu.org/Features/DiskIOLimits
>>
>> 1.) Why we need per-drive disk I/O limits
>> On Linux, the cgroup blkio-controller already supports I/O throttling on block devices. More
>> importantly, though, there is no single mechanism for disk I/O throttling across all underlying
>> storage types (image file, LVM, NFS, Ceph), and for some types there is no way to throttle at all.
>>
>> The disk I/O limits feature introduces QEMU block layer I/O limits together with command-line
>> and QMP interfaces for configuring them. This allows I/O limits to be imposed across all
>> underlying storage types using a single interface.
>>
>> 2.) How disk I/O limits will be implemented
>> The QEMU block layer will introduce a per-drive disk I/O request queue for those disks whose
>> "disk I/O limits" feature is enabled. It can control disk I/O limits individually for each disk
>> when multiple disks are attached to a VM, and it enables use cases such as unlimited local disk
>> access combined with rate-limited shared storage access.
>> In a multiple-I/O-thread scenario, when an application in a VM issues a block I/O request, the
>> request will be intercepted by the QEMU block layer, which will calculate the disk's runtime I/O
>> rate and determine whether it has gone beyond its limits. If so, the request will be enqueued on
>> the introduced queue; otherwise it will be serviced immediately.
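
To make the queuing decision described in 2.) concrete, here is a minimal C sketch of what such a
per-drive check could look like. It is only an illustration of the idea, not the actual patch:
every name in it (ThrottledDrive, io_limits_exceeded, submit_request, enqueue, dispatch) and the
100 ms accounting slice are assumptions made for the example; the real QEMU block-layer types and
hooks will differ.

/*
 * Illustrative sketch only -- not the actual patch.  All names and the
 * slice length are hypothetical; QEMU's real block layer will differ.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct IORequest IORequest;     /* an intercepted guest I/O request */

typedef struct ThrottledDrive {
    uint64_t iops_limit;                /* requests per second, 0 = no limit */
    uint64_t bps_limit;                 /* bytes per second, 0 = no limit */
    uint64_t ios_in_slice;              /* requests serviced in the current slice */
    uint64_t bytes_in_slice;            /* bytes serviced in the current slice */
    int64_t  slice_start_ns;            /* start time of the current accounting slice */
    IORequest *queue_head, *queue_tail; /* per-drive queue of deferred requests */
} ThrottledDrive;

#define SLICE_NS 100000000LL            /* 100 ms accounting slice (arbitrary choice) */

/* Provided elsewhere in this imaginary block layer: */
void enqueue(ThrottledDrive *d, IORequest *req);   /* defer the request */
void dispatch(ThrottledDrive *d, IORequest *req);  /* hand it to the backend */

/* Would servicing this request now push the drive past its limits? */
static bool io_limits_exceeded(ThrottledDrive *d, uint64_t bytes, int64_t now_ns)
{
    int64_t elapsed_ns = now_ns - d->slice_start_ns;

    if (elapsed_ns >= SLICE_NS) {
        /* Start a fresh accounting slice. */
        d->slice_start_ns = now_ns;
        d->ios_in_slice = 0;
        d->bytes_in_slice = 0;
        elapsed_ns = 0;
    }
    if (elapsed_ns <= 0) {
        return false;                   /* nothing measured yet in this slice */
    }

    /* Runtime rates over the current slice, scaled to per-second values. */
    double iops = (double)(d->ios_in_slice + 1) * 1e9 / elapsed_ns;
    double bps  = (double)(d->bytes_in_slice + bytes) * 1e9 / elapsed_ns;

    return (d->iops_limit && iops > (double)d->iops_limit) ||
           (d->bps_limit  && bps  > (double)d->bps_limit);
}

/* Hook called by the block layer for every guest request on a limited drive. */
void submit_request(ThrottledDrive *d, IORequest *req, uint64_t bytes,
                    int64_t now_ns)
{
    if (io_limits_exceeded(d, bytes, now_ns)) {
        enqueue(d, req);                /* over the limit: queue it for later */
        return;
    }
    d->ios_in_slice++;
    d->bytes_in_slice += bytes;
    dispatch(d, req);                   /* within the limit: service it now */
}

A real implementation would also need a timer or completion callback that drains the per-drive
queue once the measured rate falls back under the limits; that part is left out of the sketch.
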
>> 3.) How the users enable and play with it
>> The QEMU -drive option will be extended so that disk I/O limits can be specified on the command
>> line, such as -drive [iops=xxx,][throughput=xxx] or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx],
>> etc. When such an argument is specified, the "disk I/O limits" feature is enabled for that drive.
>> The feature will also provide users with the ability to change per-drive disk I/O limits at
>> runtime using QMP commands.
>
>I'm wondering if you've considered adding a 'burst' parameter -
>something which will not limit (or limit less) the io ops or the
>throughput for the first 'x' ms in a given time window.

Currently no. Could you let us know in what scenario it would make sense?

Regards,

Zhiyong Wu

>
>> Regards,
>>
>> Zhiyong Wu
>>
>
>--
>
>Sasha.
>