From: Vivek Goyal
Date: Tue, 31 May 2011 09:45:37 -0400
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
To: Zhi Yong Wu
Cc: kwolf@redhat.com, aliguori@us.ibm.com, stefanha@linux.vnet.ibm.com,
    kvm@vger.kernel.org, guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org,
    wuzhy@cn.ibm.com, herbert@gondor.hengli.com.au, luowenj@cn.ibm.com,
    zhanx@cn.ibm.com, zhaoyang@cn.ibm.com, llim@redhat.com, raharper@us.ibm.com
Message-ID: <20110531134537.GE16382@redhat.com>
In-Reply-To: <20110530050923.GF18832@f12.cn.ibm.com>

On Mon, May 30, 2011 at 01:09:23PM +0800, Zhi Yong Wu wrote:
> Hello, all,
>
> I am preparing to work on a feature called "Disk I/O limits" for the
> qemu-kvm project. This feature will enable the user to cap the amount
> of disk I/O performed by a VM. It is important when storage resources
> are shared among multiple VMs: as you know, if some of the VMs do
> excessive disk I/O, they will hurt the performance of the other VMs.
>

Hi Zhiyong,

Why not use the kernel blkio controller for this? Why reinvent the
wheel and implement the feature again in qemu?

Thanks
Vivek

> More detail is available here:
> http://wiki.qemu.org/Features/DiskIOLimits
>
> 1.) Why we need per-drive disk I/O limits
> On Linux, the cgroup blkio controller supports I/O throttling on block
> devices. However, there is no single mechanism for disk I/O throttling
> across all underlying storage types (image file, LVM, NFS, Ceph), and
> for some types there is no way to throttle at all.
>
> The disk I/O limits feature introduces QEMU block-layer I/O limits,
> together with command-line and QMP interfaces for configuring them.
> This allows I/O limits to be imposed across all underlying storage
> types through a single interface.
>
> 2.) How disk I/O limits will be implemented
> The QEMU block layer will introduce a per-drive disk I/O request queue
> for each disk whose "disk I/O limits" feature is enabled. Limits can
> then be controlled individually for each disk when multiple disks are
> attached to a VM, enabling use cases such as unlimited local disk
> access combined with rate-limited shared storage access.
> When an application in a VM issues a block I/O request, even with
> multiple I/O threads, the request is intercepted by the QEMU block
> layer, which computes the disk's runtime I/O rate and decides whether
> the request would exceed its limits. If so, the request is enqueued on
> the queue introduced above; otherwise it is serviced immediately (see
> the sketch at the end of this mail).
>
> 3.) How the users enable and play with it
> The QEMU -drive option will be extended so that disk I/O limits can be
> specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx]
> or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When such an
> argument is specified, the "disk I/O limits" feature is enabled for
> that drive. The feature will also give users the ability to change
> per-drive disk I/O limits at runtime using QMP commands (see the
> example at the end of this mail).
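>
> For comparison, the blkio controller mentioned in (1) is configured
> per block device through cgroup files, along these lines (the cgroup
> mount point, group name and device numbers are illustrative only):
>
>     echo "253:0 10485760" > /cgroup/blkio/vm1/blkio.throttle.write_bps_device
>
> This caps writes to device 253:0 at 10 MB/s for tasks in the vm1
> group, but it only applies where the guest's storage is backed by a
> block device.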
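>
> To make the queueing decision in (2) concrete, a rough sketch of the
> per-request check follows. It is illustrative only: the struct, the
> function names and the slice-based accounting are made up for this
> mail and do not correspond to actual QEMU code.
>
> #include <stdbool.h>
> #include <stdint.h>
> #include <stdio.h>
>
> typedef struct BlockIOLimit {
>     uint64_t iops_limit;      /* max requests per slice; 0 = unthrottled */
>     uint64_t bps_limit;       /* max bytes per slice; 0 = unthrottled */
>     uint64_t ios_in_slice;    /* requests already counted this slice */
>     uint64_t bytes_in_slice;  /* bytes already counted this slice */
> } BlockIOLimit;
>
> /* Return true if a request of 'bytes' must be queued until the
>  * current accounting slice expires, false if it can be serviced now. */
> static bool io_limits_exceeded(BlockIOLimit *l, uint64_t bytes)
> {
>     if (l->iops_limit && l->ios_in_slice + 1 > l->iops_limit) {
>         return true;                  /* over the IOPS budget */
>     }
>     if (l->bps_limit && l->bytes_in_slice + bytes > l->bps_limit) {
>         return true;                  /* over the throughput budget */
>     }
>     l->ios_in_slice += 1;             /* charge the request ... */
>     l->bytes_in_slice += bytes;       /* ... against this slice */
>     return false;
> }
>
> int main(void)
> {
>     BlockIOLimit lim = { .iops_limit = 2, .bps_limit = 8192 };
>     for (int i = 0; i < 4; i++) {
>         printf("request %d: %s\n", i,
>                io_limits_exceeded(&lim, 4096) ? "queued" : "serviced");
>     }
>     return 0;
> }
>
> A real implementation would also reset the counters when the slice
> timer expires and dispatch any queued requests at that point.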
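>
> As an illustration of (3), a drive capped at 200 IOPS and 10 MB/s
> might be started like this (option names follow the proposal above;
> the values are illustrative):
>
>     qemu -drive file=guest.img,if=virtio,iops=200,throughput=10485760
>
> The QMP counterpart for changing these limits at runtime still needs
> to be defined.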
>
> Regards,
>
> Zhiyong Wu