Date: Sun, 31 Jul 2011 14:09:03 -0500
From: Ryan Harper <ryanh@us.ibm.com>
To: Zhi Yong Wu
Cc: kwolf@redhat.com, aliguori@us.ibm.com, stefanha@linux.vnet.ibm.com, kvm@vger.kernel.org, mtosatti@redhat.com, qemu-devel@nongnu.org, zwu.kernel@gmail.com, ryanh@us.ibm.com, luowenj@cn.ibm.com
Subject: Re: [Qemu-devel] [PATCH v3 0/2] The intro for QEMU disk I/O limits
Message-ID: <20110731190903.GG1024@us.ibm.com>
In-Reply-To: <1311850166-9404-1-git-send-email-wuzhy@linux.vnet.ibm.com>

* Zhi Yong Wu [2011-07-28 05:53]:
> The main goal of this patch series is to effectively cap the disk I/O
> speed or request count of a single VM. It is only a draft, so it
> unavoidably has some drawbacks; if you catch any, please let me know.
>
> The patches mainly introduce a block I/O throttling algorithm, a timer,
> and a block queue for each drive that has I/O limits enabled.
>
> When a block request comes in, the throttling algorithm checks whether
> its I/O rate or request count exceeds the limits; if so, the request is
> enqueued on the block queue. The timer periodically services the
> requests in that queue.
>
> The available settings are:
> (1) global bps limit
>     -drive bps=xxx        in bytes/s
> (2) read bps limit only
>     -drive bps_rd=xxx     in bytes/s
> (3) write bps limit only
>     -drive bps_wr=xxx     in bytes/s
> (4) global iops limit
>     -drive iops=xxx       in ios/s
> (5) read iops limit only
>     -drive iops_rd=xxx    in ios/s
> (6) write iops limit only
>     -drive iops_wr=xxx    in ios/s
> (7) combinations of the above
>     -drive bps=xxx,iops=xxx
>
> Known limitations:
> (1) #1 cannot coexist with #2 or #3
> (2) #4 cannot coexist with #5 or #6
> (3) When a bps/iops limit is set to a very small value such as
>     511 bytes/s, the VM will hang. We are considering how to handle
>     this scenario.

I don't yet have detailed info, but we've got a memory leak in the code.
After running the VM for 8 hours or so with a 1MB read and write limit:

    -drive bps_rd=$((1*1024*1024)),bps_wr=$((1*1024*1024))

my system is swapping, with 43G resident in memory (from top):

    9913 root  20  0 87.3g  43g  548 D  9.6 34.5  44:00.87 qemu-system-x86

It would be worth looking through the code, and maybe doing a valgrind
run, to catch the leak.
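To make the queue-plus-timer scheme described in the cover letter
concrete, here is a minimal self-contained sketch of that control flow.
This is not the patch code; ThrottleState, handle_request, timer_cb and
every other name below are made up for illustration:

    /* throttle_sketch.c -- illustrative sketch only, not the patch code.
     * Requests that would exceed the per-slice budget are queued, and a
     * periodic timer drains the queue once budget is available again. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Request {
        uint64_t bytes;
        struct Request *next;
    } Request;

    typedef struct {
        uint64_t bps, iops;            /* limits; 0 means unlimited     */
        uint64_t bytes_used, ios_used; /* accounting for current slice  */
        Request *head, *tail;          /* queue of deferred requests    */
    } ThrottleState;

    /* Would this request push us over either limit in this slice? */
    static bool would_exceed(ThrottleState *t, uint64_t bytes)
    {
        return (t->bps  && t->bytes_used + bytes > t->bps) ||
               (t->iops && t->ios_used + 1      > t->iops);
    }

    static void submit(ThrottleState *t, Request *r)
    {
        t->bytes_used += r->bytes;
        t->ios_used++;
        printf("submit %llu bytes\n", (unsigned long long)r->bytes);
    }

    /* Entry point for each incoming block request. */
    static void handle_request(ThrottleState *t, Request *r)
    {
        if (would_exceed(t, r->bytes)) {     /* over budget: defer */
            r->next = NULL;
            if (t->tail) t->tail->next = r; else t->head = r;
            t->tail = r;
        } else {
            submit(t, r);
        }
    }

    /* Timer callback, fired once per accounting slice: reset the
     * budget and drain whatever now fits, preserving request order. */
    static void timer_cb(ThrottleState *t)
    {
        t->bytes_used = t->ios_used = 0;
        while (t->head && !would_exceed(t, t->head->bytes)) {
            Request *r = t->head;
            t->head = r->next;
            if (!t->head) t->tail = NULL;
            submit(t, r);
        }
    }

    int main(void)
    {
        ThrottleState t = { .bps = 1024 * 1024 };   /* 1MB/s limit */
        Request reqs[3] = { { .bytes = 600000 }, { .bytes = 600000 },
                            { .bytes = 600000 } };
        handle_request(&t, &reqs[0]);   /* fits in this slice      */
        handle_request(&t, &reqs[1]);   /* would exceed -> queued  */
        handle_request(&t, &reqs[2]);   /* queued behind the other */
        timer_cb(&t);                   /* next slice drains one   */
        timer_cb(&t);                   /* and then the last       */
        return 0;
    }

As for chasing the leak, a memcheck run along these lines should help
narrow it down (file=disk.img and the trailing dots are placeholders
for the real command line):

    valgrind --leak-check=full --log-file=qemu-valgrind.log \
        qemu-system-x86_64 -drive file=disk.img,bps_rd=$((1*1024*1024)),bps_wr=$((1*1024*1024)) ...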
--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ryanh@us.ibm.com