From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] QEMU disk I/O limits
Date: Tue, 31 May 2011 18:30:09 -0500
Message-ID: <4DE57A01.1070205@codemonkey.ws>
References: <20110530050923.GF18832@f12.cn.ibm.com> <20110531134537.GE16382@redhat.com> <4DE4F230.2040203@us.ibm.com> <20110531140402.GF16382@redhat.com> <4DE4FA5B.1090804@codemonkey.ws> <20110531175955.GI16382@redhat.com> <4DE535F3.6040400@codemonkey.ws> <20110531192434.GK16382@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kwolf@redhat.com, stefanha@linux.vnet.ibm.com, Mike Snitzer, guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org, wuzhy@cn.ibm.com, herbert@gondor.hengli.com.au, Joe Thornber, Zhi Yong Wu, luowenj@cn.ibm.com, kvm@vger.kernel.org, zhanx@cn.ibm.com, zhaoyang@cn.ibm.com, llim@redhat.com, Ryan A Harper
To: Vivek Goyal
In-Reply-To: <20110531192434.GK16382@redhat.com>
Sender: kvm-owner@vger.kernel.org

On 05/31/2011 02:24 PM, Vivek Goyal wrote:
> On Tue, May 31, 2011 at 01:39:47PM -0500, Anthony Liguori wrote:
>> On 05/31/2011 12:59 PM, Vivek Goyal wrote:
>
> Ok, so we seem to be talking of two requirements.
>
> - A consistent experience for the guest
> - Isolation between VMs
>
> If this qcow2 mapping/metadata overhead is not significant, then we
> don't have to worry about the IOPS perceived by the guest. It will be
> more or less the same. If it is significant, then we provide a more
> consistent experience to the guest, but we weaken the isolation between
> guests and might overload the backend storage, and in turn might not get
> the expected IOPS for the guest anyway.
That's quite a bit of hand waving, considering your following argument is 
that you can't be precise enough at the QEMU level.

> So I think these two things are not independent.
>
> I agree, though, that the advantage of qemu is that everything is a file
> and handling all the complex configurations becomes very easy.
>
> Having said that, to provide a consistent experience to the guest, you
> also need to know where IO from the guest is going and whether the
> underlying storage system can support that kind of IO or not.
>
> IO limits are of not much use if they are put in place in isolation,
> without knowing where IO is going and how many VMs are doing IO to it.
> Otherwise there are no guarantees/estimates on minimum bandwidth for
> guests, hence there is no consistent experience.

Consistent and maximum are two very different things. QEMU can, very 
effectively, enforce a maximum I/O rate. This can then be used to provide 
mostly consistent performance across different generations of hardware, to 
implement service levels in a tiered offering, etc. The level of 
consistency will then depend on whether you overcommit your hardware and 
on how you have it configured.

Consistency is very hard because, at the end of the day, you still have 
shared resources. Even with blkio, I presume one guest can still impact 
another guest by forcing the disk to do excessive seeking or something of 
that nature.

So absolute consistency can't be the requirement for the use case. The use 
cases we are interested in really are more about providing caps than 
anything else.

Regards,

Anthony Liguori

>
> Thanks
> Vivek
>
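[Editor's note: enforcing a maximum I/O rate in userspace, as the reply describes, is commonly done with a token bucket. The sketch below is purely illustrative (the `TokenBucket` class is hypothetical, not QEMU code): requests are admitted while tokens remain, and tokens accrue at the configured sustained rate up to a burst limit.]

```python
import time


class TokenBucket:
    """Illustrative token-bucket cap: at most `burst` queued ops,
    refilled at `rate` ops per second. Not QEMU's implementation."""

    def __init__(self, rate, burst, now=time.monotonic):
        self.rate = float(rate)    # sustained ops per second
        self.burst = float(burst)  # maximum tokens that can accumulate
        self.tokens = float(burst)
        self.now = now             # injectable clock, for testing
        self.last = now()

    def try_consume(self, n=1):
        """Admit a request of cost `n` if enough tokens have accrued;
        otherwise return False so the caller can queue or delay it."""
        t = self.now()
        # Accrue tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A device model would call `try_consume()` before submitting each guest request and park the request on a queue (with a timer) when it returns False; this caps the maximum rate without attempting to guarantee a minimum, which matches the distinction drawn above.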