From mboxrd@z Thu Jan 1 00:00:00 1970
References: <1450290827-30508-5-git-send-email-pbonzini@redhat.com>
 <20151217005946.GA20007@ad.usersys.redhat.com>
From: Paolo Bonzini
Message-ID: <567275D4.7030605@redhat.com>
Date: Thu, 17 Dec 2015 09:44:04 +0100
MIME-Version: 1.0
In-Reply-To: <20151217005946.GA20007@ad.usersys.redhat.com>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH] block: fix bdrv_ioctl called from coroutine
To: Fam Zheng
Cc: qemu-devel@nongnu.org

On 17/12/2015 01:59, Fam Zheng wrote:
> On Wed, 12/16 19:33, Paolo Bonzini wrote:
>> When called from a coroutine, bdrv_ioctl must be asynchronous just like
>> e.g. bdrv_flush. The code was incorrectly making it synchronous; fix
>> it.
>>
>> Signed-off-by: Paolo Bonzini
>> ---
>> Fam, any reason why you did it this way? I don't see
>> any coroutine caller, but it doesn't make much sense. :)
>
> That is a surprising question! From a coroutine, it is bdrv_flush ->
> bdrv_flush_co_entry -> bdrv_co_flush, which I think is always synchronous,
> especially noticing the code around calling bs->bdrv_aio_flush:
>
>     acb = bs->drv->bdrv_aio_flush(bs, bdrv_co_io_em_complete, &co);
>     if (acb == NULL) {
>         ret = -EIO;
>     } else {
>         qemu_coroutine_yield();
>         ret = co.ret;
>     }
>
> Am I missing something?

In the coroutine case, the yield is hidden in the drivers, and it may or
may not happen. For example, qcow2_co_flush_to_os starts with
qemu_co_mutex_lock(&s->lock), which can yield.

Paolo

> Fam
>
>>
>>  block/io.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/io.c b/block/io.c
>> index e00fb5d..841f5b5 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -2614,10 +2614,11 @@ int bdrv_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
>>          bdrv_co_ioctl_entry(&data);
>>      } else {
>>          Coroutine *co = qemu_coroutine_create(bdrv_co_ioctl_entry);
>> +
>>          qemu_coroutine_enter(co, &data);
>> -    }
>> -    while (data.ret == -EINPROGRESS) {
>> -        aio_poll(bdrv_get_aio_context(bs), true);
>> +        while (data.ret == -EINPROGRESS) {
>> +            aio_poll(bdrv_get_aio_context(bs), true);
>> +        }
>>      }
>>      return data.ret;
>>  }
>> --
>> 2.5.0
>>
>
>
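The shape the patch restores is the usual QEMU dual-context wrapper, the same one bdrv_flush uses: inside a coroutine the entry function is called directly and any waiting happens through yields hidden in the drivers, while outside a coroutine a fresh coroutine is entered and the caller polls the AioContext until the result is filled in. Below is a minimal sketch of that shape, not the actual bdrv_ioctl code; the bdrv_foo/bdrv_co_do_foo names and FooData struct are hypothetical, and it assumes the same internal block-layer declarations that block/io.c already has in scope.

    /* Hypothetical driver-level implementation, assumed to be defined
     * elsewhere; it may yield internally (e.g. on a CoMutex such as
     * qcow2's s->lock) or complete without yielding at all. */
    int coroutine_fn bdrv_co_do_foo(BlockDriverState *bs);

    typedef struct FooData {
        BlockDriverState *bs;
        int ret;
    } FooData;

    static void coroutine_fn bdrv_co_foo_entry(void *opaque)
    {
        FooData *data = opaque;

        data->ret = bdrv_co_do_foo(data->bs);
    }

    int bdrv_foo(BlockDriverState *bs)
    {
        FooData data = {
            .bs  = bs,
            .ret = -EINPROGRESS,
        };

        if (qemu_in_coroutine()) {
            /* Already in coroutine context: call the entry function
             * directly.  Any yield suspends the calling coroutine, so by
             * the time it returns data.ret is no longer -EINPROGRESS and,
             * as in the patch above, no aio_poll() loop belongs here. */
            bdrv_co_foo_entry(&data);
        } else {
            Coroutine *co = qemu_coroutine_create(bdrv_co_foo_entry);

            qemu_coroutine_enter(co, &data);
            /* Outside coroutine context the caller must drive the event
             * loop until the coroutine has produced a result. */
            while (data.ret == -EINPROGRESS) {
                aio_poll(bdrv_get_aio_context(bs), true);
            }
        }
        return data.ret;
    }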