Message-ID: <4F4214EB.1040002@redhat.com>
Date: Mon, 20 Feb 2012 10:39:55 +0100
From: Kevin Wolf
References: <1329713430-9209-1-git-send-email-zwu.kernel@gmail.com>
 <4F4211AF.9000009@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 1/2] block: add the support for draining
 the throttled request queue
To: Zhi Yong Wu
Cc: chris@arachsys.com, Zhi Yong Wu, qemu-devel@nongnu.org,
 stefanha@linux.vnet.ibm.com

On 20.02.2012 10:29, Zhi Yong Wu wrote:
> On Mon, Feb 20, 2012 at 5:26 PM, Kevin Wolf wrote:
>> On 20.02.2012 05:50, zwu.kernel@gmail.com wrote:
>>> From: Zhi Yong Wu
>>>
>>> If a guest has multiple disks with I/O throttling enabled separately,
>>> some requests may still be sitting in the throttled queues when
>>> draining is performed; we need to restart those requests first.
>>>
>>> Moreover, when only one disk needs to be drained (e.g. for hot
>>> unplug), requests still queued in another disk's throttled queue
>>> should not be affected.
>>>
>>> Signed-off-by: Zhi Yong Wu
>>> ---
>>>  block.c     |   29 ++++++++++++++++++++++-------
>>>  block_int.h |    1 +
>>>  2 files changed, 23 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/block.c b/block.c
>>> index ae297bb..f78df78 100644
>>> --- a/block.c
>>> +++ b/block.c
>>> @@ -853,25 +853,40 @@ void bdrv_close_all(void)
>>>      }
>>>  }
>>>
>>> -/*
>>> - * Wait for pending requests to complete across all BlockDriverStates
>>> - *
>>> - * This function does not flush data to disk, use bdrv_flush_all() for that
>>> - * after calling this function.
>>> - */
>>> -void bdrv_drain_all(void)
>>> +void bdrv_drain_request(BlockDriverState *throttled_bs)
>>>  {
>>>      BlockDriverState *bs;
>>>
>>> +    QTAILQ_FOREACH(bs, &bdrv_states, list) {
>>> +        if (throttled_bs && throttled_bs != bs) {
>>> +            continue;
>>> +        }
>>> +        qemu_co_queue_restart_all(&bs->throttled_reqs);
>>> +    }
>>> +
>>>      qemu_aio_flush();
>>
>> Why doesn't qemu_aio_flush() invoke whatever is needed? I think this is
>> the real bug that should be fixed.
> Do you mean, why doesn't qemu_aio_flush() invoke those lines of code
> above it? As I said in the comments above, when one VM has multiple
> disks with I/O throttling enabled and only one disk needs to be
> flushed, only that disk's throttled requests need to be drained; the
> other disk's throttled requests do not.

So this is an optimisation rather than a fix? Would the right
optimisation be a qemu_aio_flush_disk() that flushes the requests for a
single disk?

Kevin
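
P.S.: Just to illustrate what I have in mind, a rough, untested sketch
for block.c. qemu_aio_flush_disk() is only a placeholder name, and since
we don't yet have a way to wait for the in-flight requests of a single
BlockDriverState, it still falls back to the global flush, so only the
throttled-queue part is actually per-disk:

void qemu_aio_flush_disk(BlockDriverState *bs)
{
    /* Let this disk's throttled requests re-enter the AIO layer first */
    qemu_co_queue_restart_all(&bs->throttled_reqs);

    /* Wait for completion. Ideally this would only wait for requests
     * belonging to bs; lacking that, the global flush is still correct,
     * just without the per-disk optimisation. */
    qemu_aio_flush();
}

bdrv_drain_all() could then simply loop over bdrv_states and call this
for every disk.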