From: Ryan Harper <ryanh@us.ibm.com>
Subject: Re: [PATCH v3 0/2] The intro for QEMU disk I/O limits
Date: Sun, 31 Jul 2011 14:09:03 -0500
Message-ID: <20110731190903.GG1024@us.ibm.com>
References: <1311850166-9404-1-git-send-email-wuzhy@linux.vnet.ibm.com>
In-Reply-To: <1311850166-9404-1-git-send-email-wuzhy@linux.vnet.ibm.com>
To: Zhi Yong Wu
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, stefanha@linux.vnet.ibm.com,
    mtosatti@redhat.com, aliguori@us.ibm.com, ryanh@us.ibm.com,
    zwu.kernel@gmail.com, kwolf@redhat.com, luowenj@cn.ibm.com

* Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> [2011-07-28 05:53]:
> The main goal of the patch is to effectively cap the disk I/O speed or
> counts of a single VM. It is only a draft, so it unavoidably has some
> drawbacks; if you catch them, please let me know.
>
> The patch mainly introduces one block I/O throttling algorithm, one
> timer, and one block queue for each I/O-limits-enabled drive.
> When a block request comes in, the throttling algorithm checks whether
> its I/O rate or count exceeds the limits; if so, the request is
> enqueued to the block queue. The timer periodically handles the I/O
> requests in that queue.
>
> The available features are as follows:
> (1) global bps limit
>     -drive bps=xxx            in bytes/s
> (2) read-only bps limit
>     -drive bps_rd=xxx         in bytes/s
> (3) write-only bps limit
>     -drive bps_wr=xxx         in bytes/s
> (4) global iops limit
>     -drive iops=xxx           in ios/s
> (5) read-only iops limit
>     -drive iops_rd=xxx        in ios/s
> (6) write-only iops limit
>     -drive iops_wr=xxx        in ios/s
> (7) combinations of the limits above
>     -drive bps=xxx,iops=xxx
>
> Known limitations:
> (1) #1 cannot coexist with #2 or #3
> (2) #4 cannot coexist with #5 or #6
> (3) When a bps/iops limit is set to a small value such as 511 bytes/s,
>     the VM will hang up. We are considering how to handle this
>     scenario.

I don't yet have detailed info, but we've got a memory leak in the code.
After running the VM with a 1 MB/s read and write limit for 8 hours or so:

  -drive bps_rd=$((1*1024*1024)),bps_wr=$((1*1024*1024))

I've got my system swapping with 43G resident in memory:

 9913 root      20   0 87.3g  43g  548 D  9.6 34.5  44:00.87 qemu-system-x86

It would be worth looking through the code, and maybe doing a valgrind
run, to catch the leak.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com