From: Sasha Levin
To: Zhi Yong Wu
Cc: kwolf@redhat.com, aliguori@us.ibm.com, herbert@gondor.apana.org.au,
 kvm@vger.kernel.org, guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org,
 wuzhy@cn.ibm.com, luowenj@cn.ibm.com, zhanx@cn.ibm.com,
 zhaoyang@cn.ibm.com, llim@redhat.com, raharper@us.ibm.com,
 vgoyal@redhat.com, stefanha@linux.vnet.ibm.com
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Date: Thu, 02 Jun 2011 09:17:06 +0300
Message-ID: <1306995426.2785.6.camel@lappy>
In-Reply-To: <20110530050923.GF18832@f12.cn.ibm.com>

Hi,

On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
> Hello, all,
>
> I am preparing to work on a feature called "Disk I/O limits" for the
> qemu-kvm project. This feature will enable the user to cap the amount
> of disk I/O performed by a VM. That matters when storage resources are
> shared among multiple VMs: as you know, if some VMs do excessive disk
> I/O, they hurt the performance of the other VMs.
>
> More detail is available here:
> http://wiki.qemu.org/Features/DiskIOLimits
>
> 1.) Why we need per-drive disk I/O limits
> On Linux, the cgroup blkio-controller already supports I/O throttling
> on block devices. However, there is no single mechanism for disk I/O
> throttling across all underlying storage types (image file, LVM, NFS,
> Ceph), and for some types there is no way to throttle at all.
>
> The disk I/O limits feature introduces QEMU block layer I/O limits,
> together with command-line and QMP interfaces for configuring them.
> This allows I/O limits to be imposed across all underlying storage
> types through a single interface.
>
> 2.) How disk I/O limits will be implemented
> The QEMU block layer will introduce a per-drive disk I/O request queue
> for those drives that have the "disk I/O limits" feature enabled. This
> controls limits individually for each disk when multiple disks are
> attached to a VM, enabling use cases like unlimited local disk access
> combined with rate-limited shared storage access.
> In a multiple-I/O-thread scenario, when an application in a VM issues
> a block I/O request, the request is intercepted by the QEMU block
> layer, which calculates the drive's current I/O rate and determines
> whether it has gone beyond its limits. If so, the request is placed on
> the queue introduced above; otherwise it is serviced immediately.
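
Just to check that I understand the flow in 2.): below is a rough
sketch of the per-request decision. All names in it are invented for
illustration (this is not actual QEMU code), and it assumes slice-based
accounting, i.e. a timer resets the counters and drains the queue at
the start of each new time slice.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t iops_limit;      /* max ops per slice, 0 means no limit */
    uint64_t bps_limit;       /* max bytes per slice, 0 means no limit */
    uint64_t ios_in_slice;    /* ops already serviced this slice */
    uint64_t bytes_in_slice;  /* bytes already serviced this slice */
} DriveLimits;

/* Would servicing one more request of 'bytes' bytes exceed a limit? */
static bool over_limit(const DriveLimits *dl, uint64_t bytes)
{
    if (dl->iops_limit && dl->ios_in_slice + 1 > dl->iops_limit) {
        return true;
    }
    if (dl->bps_limit && dl->bytes_in_slice + bytes > dl->bps_limit) {
        return true;
    }
    return false;
}

int main(void)
{
    DriveLimits dl = { .iops_limit = 2 };   /* 2 ops per slice */

    for (int i = 0; i < 4; i++) {           /* four 4 KB requests */
        if (over_limit(&dl, 4096)) {
            printf("request %d: enqueue until next slice\n", i);
        } else {
            dl.ios_in_slice++;
            dl.bytes_in_slice += 4096;
            printf("request %d: service now\n", i);
        }
    }
    return 0;
}

If that is roughly the plan, the interesting part will be draining the
queue fairly once the next slice starts.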
> 3.) How the users enable and play with it
> The QEMU -drive option will be extended so that disk I/O limits can be
> specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx]
> or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When one of
> these arguments is given, the "disk I/O limits" feature is enabled for
> that drive.
> The feature will also provide users with the ability to change
> per-drive disk I/O limits at runtime using QMP commands.

I'm wondering if you've considered adding a 'burst' parameter -
something which will not limit (or limit less) the I/O ops or the
throughput for the first 'x' ms in a given time window.

> Regards,
>
> Zhiyong Wu

-- 
Sasha.
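
P.S. One common way to get that burst behaviour is a token bucket;
below is a rough sketch (again, all names invented, not actual QEMU
code). The bucket refills at the configured rate and holds at most
'burst' tokens, so a drive that has been idle can issue a short spike
unthrottled, while sustained load converges to the configured rate.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    double rate;      /* tokens (ops) added per second */
    double burst;     /* bucket capacity: largest unthrottled spike */
    double tokens;    /* currently available; start at 'burst' */
    int64_t last_ns;  /* timestamp of the previous refill */
} TokenBucket;

/* Refill by elapsed time, then try to take one token. Returns true if
 * the request may be serviced now; otherwise the caller enqueues it. */
static bool token_bucket_take(TokenBucket *tb, int64_t now_ns)
{
    tb->tokens += tb->rate * (double)(now_ns - tb->last_ns) / 1e9;
    if (tb->tokens > tb->burst) {
        tb->tokens = tb->burst;   /* cap how large a spike can get */
    }
    tb->last_ns = now_ns;

    if (tb->tokens >= 1.0) {
        tb->tokens -= 1.0;
        return true;
    }
    return false;
}

A plain rate limit then falls out as the special case burst == 1, so
the same code path could serve both.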