From: Vivek Goyal
Subject: Re: [Qemu-devel][RFC]QEMU disk I/O limits
Date: Tue, 31 May 2011 09:45:37 -0400
Message-ID: <20110531134537.GE16382@redhat.com>
In-Reply-To: <20110530050923.GF18832@f12.cn.ibm.com>
To: Zhi Yong Wu
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, kwolf@redhat.com,
    guijianfeng@cn.fujitsu.com, herbert@gondor.hengli.com.au,
    stefanha@linux.vnet.ibm.com, aliguori@us.ibm.com, raharper@us.ibm.com,
    luowenj@cn.ibm.com, wuzhy@cn.ibm.com, zhanx@cn.ibm.com,
    zhaoyang@cn.ibm.com, llim@redhat.com

On Mon, May 30, 2011 at 01:09:23PM +0800, Zhi Yong Wu wrote:
> Hello, all,
>
> I have prepared to work on a feature called "Disk I/O limits" for the
> qemu-kvm project. This feature will enable the user to cap the amount of
> disk I/O performed by a VM. It is important when storage resources are
> shared among multiple VMs: as you know, if some VMs are doing excessive
> disk I/O, they will hurt the performance of the other VMs.

Hi Zhiyong,

Why not use the kernel blkio controller for this? Why reinvent the wheel
and implement the feature again in qemu?

Thanks
Vivek

> More detail is available here:
> http://wiki.qemu.org/Features/DiskIOLimits
>
> 1.) Why we need per-drive disk I/O limits
>
> On Linux, the cgroup blkio-controller already supports I/O throttling on
> block devices. However, there is no single mechanism for disk I/O
> throttling across all underlying storage types (image file, LVM, NFS,
> Ceph), and for some types there is no way to throttle at all.
>
> The disk I/O limits feature introduces QEMU block layer I/O limits
> together with command-line and QMP interfaces for configuring them.
> This allows I/O limits to be imposed across all underlying storage
> types through a single interface.
>
> 2.) How disk I/O limits will be implemented
>
> The QEMU block layer will introduce a per-drive disk I/O request queue
> for those disks whose "disk I/O limits" feature is enabled. It can
> control disk I/O limits individually for each disk when multiple disks
> are attached to a VM, enabling use cases like unlimited local disk
> access combined with shared storage access under limits.
>
> In the multiple I/O threads scenario, when an application in a VM
> issues a block I/O request, the request is intercepted by the QEMU
> block layer, which calculates the drive's runtime I/O rate and
> determines whether it has gone beyond its limits. If so, the I/O
> request is enqueued in the per-drive queue; otherwise it is serviced
> immediately.
>
> 3.) How the users enable and play with it
>
> The QEMU -drive option will be extended so that disk I/O limits can be
> specified on the command line, such as
> -drive [iops=xxx,][throughput=xxx] or
> -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx] etc. When such an
> argument is specified, the "disk I/O limits" feature is enabled for
> that drive.
>
> The feature will also provide users with the ability to change
> per-drive disk I/O limits at runtime using QMP commands.
>
> Regards,
>
> Zhiyong Wu