From: Zhi Yong Wu
To: Stefan Hajnoczi
Cc: kwolf@redhat.com, stefanha@linux.vnet.ibm.com, kvm@vger.kernel.org, mtosatti@redhat.com, Ram Pai, qemu-devel@nongnu.org, ryanh@us.ibm.com, luowenj@cn.ibm.com, Zhi Yong Wu
Date: Fri, 12 Aug 2011 14:00:04 +0800
Subject: Re: [Qemu-devel] [PATCH v5 3/4] block: add block timer and block throttling algorithm

On Fri, Aug 12, 2011 at 1:47 PM, Stefan Hajnoczi wrote:
> On Fri, Aug 12, 2011 at 6:35 AM, Zhi Yong Wu wrote:
>> On Tue, Aug 9, 2011 at 4:57 PM, Ram Pai wrote:
>>> On Tue, Aug 09, 2011 at 12:17:51PM +0800, Zhi Yong Wu wrote:
>>>> Note:
>>>>     1.) When the bps/iops limits are set to a small value such as 511 bytes/s, the VM will hang. We are considering how to handle this scenario.
>>>>     2.) When a "dd" command is issued in the guest with a large block size such as "bs=1024K", the resulting speed is slightly higher than the limits.
>>>>
>>>> For these problems, if you have any thoughts, please let us know. :)
>>>>
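To put note 1.) in perspective, here is a rough, illustrative calculation of how long individual requests take at a limit of 511 bytes/s. This is not code from the patch; the request sizes are arbitrary examples:

#include <stdio.h>

/* Back-of-the-envelope arithmetic only: at 511 bytes/s even a single
 * 512-byte sector exceeds a one-second budget, and a larger request
 * needs minutes, so the guest perceives its disk as hung. */
int main(void)
{
    const double bps_limit = 511.0;   /* configured limit, bytes/s */

    printf("512-byte sector: %.2f s\n", 512.0 / bps_limit);           /* ~1.0 s */
    printf("64 KB request:   %.1f s\n", (64.0 * 1024.0) / bps_limit); /* ~128 s */
    return 0;
}

At these time scales the request queue drains so slowly that the guest appears to hang, which matches the behaviour described in the note.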
>>>> Signed-off-by: Zhi Yong Wu
>>>> ---
>>>>  block.c     |  347 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
>>>>  block.h     |    6 +-
>>>>  block_int.h |   30 +++++
>>>>  3 files changed, 372 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/block.c b/block.c
>>>> index 24a25d5..8fd6643 100644
>>>> --- a/block.c
>>>> +++ b/block.c
>>>> @@ -29,6 +29,9 @@
>>>>  #include "module.h"
>>>>  #include "qemu-objects.h"
>>>>
>>>> +#include "qemu-timer.h"
>>>> +#include "block/blk-queue.h"
>>>> +
>>>>  #ifdef CONFIG_BSD
>>>>  #include
>>>>  #include
>>>> @@ -58,6 +61,13 @@ static int bdrv_read_em(BlockDriverState *bs, int64_t sector_num,
>>>>  static int bdrv_write_em(BlockDriverState *bs, int64_t sector_num,
>>>>                           const uint8_t *buf, int nb_sectors);
>>>>
>>>> +static bool bdrv_exceed_bps_limits(BlockDriverState *bs, int nb_sectors,
>>>> +        bool is_write, double elapsed_time, uint64_t *wait);
>>>> +static bool bdrv_exceed_iops_limits(BlockDriverState *bs, bool is_write,
>>>> +        double elapsed_time, uint64_t *wait);
>>>> +static bool bdrv_exceed_io_limits(BlockDriverState *bs, int nb_sectors,
>>>> +        bool is_write, uint64_t *wait);
>>>> +
>>>>  static QTAILQ_HEAD(, BlockDriverState) bdrv_states =
>>>>      QTAILQ_HEAD_INITIALIZER(bdrv_states);
>>>>
>>>> @@ -90,6 +100,68 @@ int is_windows_drive(const char *filename)
>>>>  }
>>>>  #endif
>>>>
>>>> +/* throttling disk I/O limits */
>>>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>>>> +{
>>>> +    bs->io_limits_enabled = false;
>>>> +    bs->req_from_queue    = false;
>>>> +
>>>> +    if (bs->block_queue) {
>>>> +        qemu_block_queue_flush(bs->block_queue);
>>>> +        qemu_del_block_queue(bs->block_queue);
>>>> +    }
>>>> +
>>>> +    if (bs->block_timer) {
>>>> +        qemu_del_timer(bs->block_timer);
>>>> +        qemu_free_timer(bs->block_timer);
>>>> +    }
>>>> +
>>>> +    bs->slice_start[0]   = 0;
>>>> +    bs->slice_start[1]   = 0;
>>>> +
>>>> +    bs->slice_end[0]     = 0;
>>>> +    bs->slice_end[1]     = 0;
>>>> +}
>>>> +
>>>> +static void bdrv_block_timer(void *opaque)
>>>> +{
>>>> +    BlockDriverState *bs = opaque;
>>>> +    BlockQueue *queue = bs->block_queue;
>>>> +
>>>> +    qemu_block_queue_flush(queue);
>>>> +}
>>>> +
>>>> +void bdrv_io_limits_enable(BlockDriverState *bs)
>>>> +{
>>>> +    bs->req_from_queue = false;
>>>> +
>>>> +    bs->block_queue    = qemu_new_block_queue();
>>>> +    bs->block_timer    = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
>>>> +
>>>> +    bs->slice_start[BLOCK_IO_LIMIT_READ]  = qemu_get_clock_ns(vm_clock);
>>>> +    bs->slice_start[BLOCK_IO_LIMIT_WRITE] = qemu_get_clock_ns(vm_clock);
>>>
>>> A minor comment: better to keep the slice_start of both the READ and WRITE
>>> sides the same.
>>>
>>>     bs->slice_start[BLOCK_IO_LIMIT_WRITE] = bs->slice_start[BLOCK_IO_LIMIT_READ];
>>>
>>> This saves a call to qemu_get_clock_ns().
>>>
>>>> +
>>>> +    bs->slice_end[BLOCK_IO_LIMIT_READ]    =
>>>> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>>>
>>>     bs->slice_end[BLOCK_IO_LIMIT_READ] = bs->slice_start[BLOCK_IO_LIMIT_READ] +
>>>                                          BLOCK_IO_SLICE_TIME;
>>>
>>> This saves one more call to qemu_get_clock_ns().
>>>
>>>> +    bs->slice_end[BLOCK_IO_LIMIT_WRITE]   =
>>>> +                      qemu_get_clock_ns(vm_clock) + BLOCK_IO_SLICE_TIME;
>>>
>>>     bs->slice_end[BLOCK_IO_LIMIT_WRITE] = bs->slice_start[BLOCK_IO_LIMIT_WRITE] +
>>>                                           BLOCK_IO_SLICE_TIME;
>>>
>>> Yet another call saved.
>>>
>>>> +}
>>>> +
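Taking the three suggestions above together, bdrv_io_limits_enable() could read the clock once and derive every slice boundary from that single value. A sketch of what that might look like, using the names from the quoted patch (the local variable "now" is only introduced here for illustration):

void bdrv_io_limits_enable(BlockDriverState *bs)
{
    int64_t now;

    bs->req_from_queue = false;

    bs->block_queue    = qemu_new_block_queue();
    bs->block_timer    = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);

    /* One clock read; both slices start together and end together. */
    now = qemu_get_clock_ns(vm_clock);
    bs->slice_start[BLOCK_IO_LIMIT_READ]  = now;
    bs->slice_start[BLOCK_IO_LIMIT_WRITE] = now;
    bs->slice_end[BLOCK_IO_LIMIT_READ]    = now + BLOCK_IO_SLICE_TIME;
    bs->slice_end[BLOCK_IO_LIMIT_WRITE]   = now + BLOCK_IO_SLICE_TIME;
}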
>>>> +bool bdrv_io_limits_enabled(BlockDriverState *bs)
>>>> +{
>>>> +    BlockIOLimit *io_limits = &bs->io_limits;
>>>> +    if ((io_limits->bps[BLOCK_IO_LIMIT_READ] == 0)
>>>> +         && (io_limits->bps[BLOCK_IO_LIMIT_WRITE] == 0)
>>>> +         && (io_limits->bps[BLOCK_IO_LIMIT_TOTAL] == 0)
>>>> +         && (io_limits->iops[BLOCK_IO_LIMIT_READ] == 0)
>>>> +         && (io_limits->iops[BLOCK_IO_LIMIT_WRITE] == 0)
>>>> +         && (io_limits->iops[BLOCK_IO_LIMIT_TOTAL] == 0)) {
>>>> +        return false;
>>>> +    }
>>>> +
>>>> +    return true;
>>>> +}
>>>
>>> This can be optimized to:
>>>
>>>     return (io_limits->bps[BLOCK_IO_LIMIT_READ]
>>>             || io_limits->bps[BLOCK_IO_LIMIT_WRITE]
>>>             || io_limits->bps[BLOCK_IO_LIMIT_TOTAL]
>>>             || io_limits->iops[BLOCK_IO_LIMIT_READ]
>>>             || io_limits->iops[BLOCK_IO_LIMIT_WRITE]
>>>             || io_limits->iops[BLOCK_IO_LIMIT_TOTAL]);
>>
>> I want to apply this, but it violates the QEMU coding style.
>
> Perhaps checkpatch.pl complains because of the (...) around the return
> value. Try removing them.

After I removed the parentheses, it works now. Thanks.

> Stefan

-- 
Regards,
Zhi Yong Wu
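For reference, the version of bdrv_io_limits_enabled() that the thread converges on, i.e. Ram's shortcut with the outer parentheses dropped so that checkpatch.pl no longer complains, would look roughly like this; it is a sketch using the field names from the quoted patch, not necessarily the final committed code:

bool bdrv_io_limits_enabled(BlockDriverState *bs)
{
    BlockIOLimit *io_limits = &bs->io_limits;

    /* Throttling is enabled as soon as any bps or iops limit is non-zero.
     * No parentheses around the return expression, per checkpatch.pl. */
    return io_limits->bps[BLOCK_IO_LIMIT_READ]
           || io_limits->bps[BLOCK_IO_LIMIT_WRITE]
           || io_limits->bps[BLOCK_IO_LIMIT_TOTAL]
           || io_limits->iops[BLOCK_IO_LIMIT_READ]
           || io_limits->iops[BLOCK_IO_LIMIT_WRITE]
           || io_limits->iops[BLOCK_IO_LIMIT_TOTAL];
}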