From: Sasha Levin <levinsasha928@gmail.com>
To: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Cc: kwolf@redhat.com, aliguori@us.ibm.com,
	herbert@gondor.apana.org.au, kvm@vger.kernel.org,
	guijianfeng@cn.fujitsu.com, qemu-devel@nongnu.org,
	wuzhy@cn.ibm.com, luowenj@cn.ibm.com, zhanx@cn.ibm.com,
	zhaoyang@cn.ibm.com, llim@redhat.com, raharper@us.ibm.com,
	vgoyal@redhat.com, stefanha@linux.vnet.ibm.com
Subject: Re: [Qemu-devel] [RFC]QEMU disk I/O limits
Date: Thu, 02 Jun 2011 10:15:02 +0300	[thread overview]
Message-ID: <1306998902.2785.20.camel@lappy> (raw)
In-Reply-To: <20110602062928.GL18832@f12.cn.ibm.com>

On Thu, 2011-06-02 at 14:29 +0800, Zhi Yong Wu wrote:
> On Thu, Jun 02, 2011 at 09:17:06AM +0300, Sasha Levin wrote:
> >
> >Hi,
> >
> >On Mon, 2011-05-30 at 13:09 +0800, Zhi Yong Wu wrote:
> >> Hello, all,
> >> 
> >>     I am preparing to work on a feature called "Disk I/O limits" for the qemu-kvm project.
> >>     This feature will enable the user to cap the amount of disk I/O performed by a VM. This matters when storage resources are shared among multiple VMs: if some VMs do excessive disk I/O, they hurt the performance of the other VMs.
> >> 
> >>     More detail is available here:
> >>     http://wiki.qemu.org/Features/DiskIOLimits
> >> 
> >>     1.) Why we need per-drive disk I/O limits
> >>     On Linux, the cgroup blkio-controller already supports I/O throttling on block devices. However, there is no single mechanism for disk I/O throttling across all underlying storage types (image file, LVM, NFS, Ceph), and for some types there is no way to throttle at all.
> >> 
> >>     The disk I/O limits feature introduces QEMU block-layer I/O limits together with command-line and QMP interfaces for configuring them. This allows I/O limits to be imposed across all underlying storage types through a single interface.
> >> 
> >>     2.) How disk I/O limits will be implemented
> >>     The QEMU block layer will introduce a per-drive disk I/O request queue for those disks that have the "disk I/O limits" feature enabled. Limits can be controlled individually for each disk attached to a VM, enabling use cases such as unlimited local disk access combined with rate-limited shared storage access.
> >>     In a multiple-I/O-thread scenario, when an application in a VM issues a block I/O request, the request is intercepted by the QEMU block layer, which calculates the drive's current I/O rate and determines whether the request would exceed its limits. If so, the request is placed on the per-drive queue; otherwise it is serviced immediately.
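
A minimal sketch of what that per-drive check might look like; all
names, the one-second accounting window, and the structure are
illustrative assumptions, not the actual QEMU block-layer code:

    /* Illustrative sketch only: hypothetical names, not the real
     * QEMU block-layer API.  Counters are kept per one-second slice. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000LL

    typedef struct BlockIOLimit {
        uint64_t iops_limit;      /* requests per second, 0 = unlimited */
        uint64_t bps_limit;       /* bytes per second, 0 = unlimited */
        uint64_t ios_in_slice;    /* requests counted in the current slice */
        uint64_t bytes_in_slice;  /* bytes counted in the current slice */
        int64_t  slice_start_ns;  /* start time of the current slice */
    } BlockIOLimit;

    /* Decide whether a request of 'bytes' bytes may run now.  Returns
     * true to service it immediately, false to put it on the per-drive
     * queue until the next slice. */
    static bool io_limits_allow(BlockIOLimit *l, uint64_t bytes,
                                int64_t now_ns)
    {
        if (now_ns - l->slice_start_ns >= NSEC_PER_SEC) {
            /* New accounting slice: reset the counters. */
            l->slice_start_ns = now_ns;
            l->ios_in_slice   = 0;
            l->bytes_in_slice = 0;
        }
        if (l->iops_limit && l->ios_in_slice + 1 > l->iops_limit) {
            return false;   /* would exceed the IOPS limit: enqueue */
        }
        if (l->bps_limit && l->bytes_in_slice + bytes > l->bps_limit) {
            return false;   /* would exceed the throughput limit: enqueue */
        }
        l->ios_in_slice   += 1;
        l->bytes_in_slice += bytes;
        return true;        /* within limits: service immediately */
    }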
> >> 
> >>     3.) How the users enable and play with it
> >>     The QEMU -drive option will be extended so that disk I/O limits can be specified on the command line, e.g. -drive [iops=xxx,][throughput=xxx] or -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]. When such an argument is specified, the "disk I/O limits" feature is enabled for that drive.
> >>     The feature will also provide users with the ability to change per-drive disk I/O limits at runtime using QMP commands.
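
For illustration, such a drive could then be started with a command
line like the one below; the option names follow the proposal above,
while the rest of the syntax and the bytes-per-second unit assumed for
"throughput" are guesses on my side:

    qemu -drive file=guest.img,if=virtio,iops_rd=100,iops_wr=100,throughput=10485760

and the same values could later be adjusted at runtime through the
proposed QMP commands.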
> >
> >I'm wondering if you've considered adding a 'burst' parameter -
> >something which would not limit (or would limit less) the I/O ops or
> >the throughput for the first 'x' ms in a given time window.
> Currently no. Could you let us know in what scenario it would make sense?

My assumption is that most guests are not doing constant disk I/O
access. Instead, the operations are usually short and happen on a small
scale (a relatively small number of bytes accessed).

For example: Multiple table DB lookup, serving a website, file servers.

Basically, if I need to do a DB lookup which needs 50MB of data from a
disk which is limited to 10MB/s, I'd rather let it burst for 1 second
and complete the lookup faster instead of having it read data for 5
seconds.

If the guest now starts running multiple lookups one after the other,
that's when I would like to start limiting.
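
One way to express such a burst allowance would be a token bucket
layered on top of the sustained limit. The sketch below is purely
hypothetical and not part of the proposal; rate_bps is the sustained
byte rate and burst_bytes is how much may pass at once after an idle
period:

    /* Hypothetical token-bucket sketch for a 'burst' allowance. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct ThroughputBucket {
        double  rate_bps;     /* sustained limit, bytes per second */
        double  burst_bytes;  /* bytes allowed to pass after idle time */
        double  tokens;       /* bytes currently available */
        int64_t last_ns;      /* time of the last refill */
    } ThroughputBucket;

    /* Refill tokens at the sustained rate (capped at the burst size),
     * then check whether a request of 'bytes' bytes may pass now. */
    static bool bucket_allow(ThroughputBucket *b, uint64_t bytes,
                             int64_t now_ns)
    {
        double elapsed_s = (now_ns - b->last_ns) / 1e9;
        b->last_ns = now_ns;

        b->tokens += elapsed_s * b->rate_bps;
        if (b->tokens > b->burst_bytes) {
            b->tokens = b->burst_bytes;  /* idle time buys at most one burst */
        }
        if (b->tokens < (double)bytes) {
            return false;                /* over the limit: queue the request */
        }
        b->tokens -= (double)bytes;
        return true;                     /* within the burst or rate: service now */
    }

With rate_bps at 10 MB/s and burst_bytes at 50 MB, an otherwise idle
guest could issue the whole 50 MB lookup above at once, while a guest
running lookups back to back would quickly drop to the sustained
10 MB/s.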

> Regards,
> 
> Zhiyong Wu
> >
> >> Regards,
> >> 
> >> Zhiyong Wu
> >> 
> >
> >-- 
> >
> >Sasha.
> >

-- 

Sasha.
