From: Wenchao Xia
Date: Thu, 25 Jul 2013 11:47:53 +0800
Message-ID: <51F09FE9.9050008@linux.vnet.ibm.com>
In-Reply-To: <1374182502-10292-1-git-send-email-charlie@ctshepherd.com>
Subject: Re: [Qemu-devel] RFC [PATCH] Make bdrv_flush synchronous only and update callers
To: Charlie Shepherd
Cc: kwolf@redhat.com, pbonzini@redhat.com, gabriel@kerneis.info, qemu-devel@nongnu.org, stefanha@gmail.com

I am glad to have a truly synchronous bdrv_flush(). The code looks fine.

Reviewed-by: Wenchao Xia

> This patch makes bdrv_flush a synchronous function and updates any callers
> from a coroutine context to use bdrv_co_flush instead.
>
> The motivation for this patch comes from the GSoC Continuation-Passing C
> (CPC) project. When coroutines were introduced, synchronous functions in the
> block layer were converted to use asynchronous methods by dynamically
> detecting whether they were running in a coroutine context (by calling
> qemu_in_coroutine()) and yielding if so. If not, they would spawn a new
> coroutine and poll until the asynchronous counterpart finished.
>
> However, this approach does not work with CPC, because the CPC translator
> converts all functions annotated coroutine_fn to a different
> (continuation-based) calling convention. This means that coroutine_fn
> annotated functions cannot be called from a non-coroutine context.
>
> This patch is a Request For Comments on the approach of splitting these
> "dynamic" functions into separate synchronous and asynchronous versions.
> This is easy for bdrv_flush, as it already has an asynchronous counterpart,
> bdrv_co_flush. The only caller of bdrv_flush from a coroutine context is
> mirror_drain in block/mirror.c; it should be annotated coroutine_fn since it
> calls qemu_coroutine_yield().
>
> If this approach meets with approval, I will develop a patchset splitting
> the other "dynamic" functions in the block layer.
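For readers following the discussion: if I read the hunk below correctly, the
"dynamic" pattern being split looks roughly like this today (a sketch only,
reconstructed from the removed lines; RwCo, NOT_DONE and bdrv_flush_co_entry
are the existing helpers in block.c, and the exact initializer fields may
differ slightly):

int bdrv_flush(BlockDriverState *bs)
{
    Coroutine *co;
    RwCo rwco = {
        .bs = bs,
        .ret = NOT_DONE,
    };

    if (qemu_in_coroutine()) {
        /* Fast path: already in coroutine context, so run the entry
         * function directly; it yields back to the caller as needed. */
        bdrv_flush_co_entry(&rwco);
    } else {
        /* Spawn a coroutine and poll the event loop until it finishes. */
        co = qemu_coroutine_create(bdrv_flush_co_entry);
        qemu_coroutine_enter(co, &rwco);
        while (rwco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
    }

    return rwco.ret;
}

After the patch only the else branch remains, so bdrv_flush() is always
synchronous; coroutine_fn callers such as mirror_run call bdrv_co_flush()
directly, and the runtime context check disappears.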
> This will allow all coroutine functions to have a coroutine_fn annotation
> that can be statically checked (CPC can be used to verify the annotations).
>
> I have audited the other callers of bdrv_flush; they are listed below:
>
> block.c: bdrv_reopen_prepare, bdrv_close, bdrv_commit, bdrv_pwrite_sync
> block/qcow2-cache.c: qcow2_cache_entry_flush, qcow2_cache_flush
> block/qcow2-refcount.c: qcow2_update_snapshot_refcount
> block/qcow2-snapshot.c: qcow2_write_snapshots
> block/qcow2.c: qcow2_mark_dirty, qcow2_mark_clean
> block/qed-check.c: qed_check_mark_clean
> block/qed.c: bdrv_qed_open, bdrv_qed_close
> blockdev.c: external_snapshot_prepare, do_drive_del
> cpus.c: do_vm_stop
> hw/block/nvme.c: nvme_clear_ctrl
> qemu-io-cmds.c: flush_f
> savevm.c: bdrv_fclose
>
> ---
>  block.c        | 13 ++++---------
>  block/mirror.c |  4 ++--
>  2 files changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/block.c b/block.c
> index 6c493ad..00d71df 100644
> --- a/block.c
> +++ b/block.c
> @@ -4110,15 +4110,10 @@ int bdrv_flush(BlockDriverState *bs)
>          .ret = NOT_DONE,
>      };
>
> -    if (qemu_in_coroutine()) {
> -        /* Fast-path if already in coroutine context */
> -        bdrv_flush_co_entry(&rwco);
> -    } else {
> -        co = qemu_coroutine_create(bdrv_flush_co_entry);
> -        qemu_coroutine_enter(co, &rwco);
> -        while (rwco.ret == NOT_DONE) {
> -            qemu_aio_wait();
> -        }
> +    co = qemu_coroutine_create(bdrv_flush_co_entry);
> +    qemu_coroutine_enter(co, &rwco);
> +    while (rwco.ret == NOT_DONE) {
> +        qemu_aio_wait();
>      }
>
>      return rwco.ret;
> diff --git a/block/mirror.c b/block/mirror.c
> index bed4a7e..3d5da7e 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -282,7 +282,7 @@ static void mirror_free_init(MirrorBlockJob *s)
>      }
>  }
>
> -static void mirror_drain(MirrorBlockJob *s)
> +static void coroutine_fn mirror_drain(MirrorBlockJob *s)
>  {
>      while (s->in_flight > 0) {
>          qemu_coroutine_yield();
> @@ -390,7 +390,7 @@ static void coroutine_fn mirror_run(void *opaque)
>          should_complete = false;
>          if (s->in_flight == 0 && cnt == 0) {
>              trace_mirror_before_flush(s);
> -            ret = bdrv_flush(s->target);
> +            ret = bdrv_co_flush(s->target);
>              if (ret < 0) {
>                  if (mirror_error_action(s, false, -ret) == BDRV_ACTION_REPORT) {
>                      goto immediate_exit;
> --

Best Regards

Wenchao Xia