From: Paolo Bonzini
To: "Michael S. Tsirkin"
Cc: famz@redhat.com, borntraeger@de.ibm.com, qemu-devel@nongnu.org, tubo@linux.vnet.ibm.com, stefanha@redhat.com, cornelia.huck@de.ibm.com
Subject: Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
Date: Sun, 3 Apr 2016 19:13:10 +0200
Message-ID: <57014F26.6060607@redhat.com>
In-Reply-To: <20160403120117-mutt-send-email-mst@redhat.com>
References: <1459516794-23629-1-git-send-email-pbonzini@redhat.com> <1459516794-23629-9-git-send-email-pbonzini@redhat.com> <20160403120117-mutt-send-email-mst@redhat.com>

On 03/04/2016 11:06, Michael S. Tsirkin wrote:
> On Fri, Apr 01, 2016 at 03:19:53PM +0200, Paolo Bonzini wrote:
>> Eliminating the reentrancy is actually a nice thing that we can do
>> with the API that Michael proposed, so let's make it first class.
>> This also hides the complex assign/set_handler conventions from
>> callers of virtio_queue_aio_set_host_notifier_handler, which in
>> fact was always called with assign=true.
>>
>> Reviewed-by: Cornelia Huck
>> Signed-off-by: Paolo Bonzini
>> ---
>>  hw/block/dataplane/virtio-blk.c |  7 +++----
>>  hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
>>  hw/virtio/virtio.c              | 19 ++++---------------
>>  include/hw/virtio/virtio.h      |  6 ++----
>>  4 files changed, 13 insertions(+), 31 deletions(-)
>>
>> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
>> index fd06726..74c6d37 100644
>> --- a/hw/block/dataplane/virtio-blk.c
>> +++ b/hw/block/dataplane/virtio-blk.c
>> @@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>>
>>      /* Get this show started by hooking up our callbacks */
>>      aio_context_acquire(s->ctx);
>> -    virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
>> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
>> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
>> +                                               virtio_blk_data_plane_handle_output);
>>      aio_context_release(s->ctx);
>>      return;
>>
>> @@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>>      aio_context_acquire(s->ctx);
>>
>>      /* Stop notifications for new requests from guest */
>> -    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(s->vq, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
>>
>>      /* Drain and switch bs back to the QEMU main loop */
>>      blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
>> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
>> index a497a2c..5494dcc 100644
>> --- a/hw/scsi/virtio-scsi-dataplane.c
>> +++ b/hw/scsi/virtio-scsi-dataplane.c
>> @@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
>>          return rc;
>>      }
>>
>> -    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
>> -    virtio_set_queue_aio(vq, fn);
>> +    virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
>>      return 0;
>>  }
>>
>> @@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>>      VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>>      int i;
>>
>> -    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(vs->ctrl_vq, NULL);
>> -    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
>> -    virtio_set_queue_aio(vs->event_vq, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
>> +    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
>>      for (i = 0; i < vs->conf.num_queues; i++) {
>> -        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
>> -        virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
>> +        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
>>      }
>>  }
>>
>> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>> index eb04ac0..7fcfc24f 100644
>> --- a/hw/virtio/virtio.c
>> +++ b/hw/virtio/virtio.c
>> @@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>>      return &vdev->vq[i];
>>  }
>>
>> -void virtio_set_queue_aio(VirtQueue *vq,
>> -                          void (*handle_output)(VirtIODevice *, VirtQueue *))
>> -{
>> -    assert(vq->handle_output);
>> -
>> -    vq->handle_aio_output = handle_output;
>> -}
>> -
>>  void virtio_del_queue(VirtIODevice *vdev, int n)
>>  {
>>      if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
>> @@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
>>  }
>>
>>  void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
>> -                                                bool assign, bool set_handler)
>> +                                                void (*handle_output)(VirtIODevice *,
>> +                                                                      VirtQueue *))
>>  {
>> -    if (assign && set_handler) {
>> +    vq->handle_aio_output = handle_output;
>> +    if (handle_output) {
>>          aio_set_event_notifier(ctx, &vq->host_notifier, true,
>>                                 virtio_queue_host_notifier_aio_read);
>>      } else {
>>          aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>>      }
>> -    if (!assign) {
>> -        /* Test and clear notifier before after disabling event,
>> -         * in case poll callback didn't have time to run. */
>> -        virtio_queue_host_notifier_aio_read(&vq->host_notifier);
>> -    }
>>  }
>>
>>  static void virtio_queue_host_notifier_read(EventNotifier *n)
>
> This means that the caller is now responsible for invoking the
> handler after it sets handle_output = NULL.
> I think it's cleaner to invoke it internally.

No, the caller is not responsible for that.  Ultimately it is the
virtio core that will be responsible for setting up the handler again
in the main I/O thread, and that is how the handler will be invoked,
if it matters.  This will happen when the API is cleaned up further
by Cornelia.

Note that this patch doesn't change the semantics; it's patch 1 that
changes assign=false to assign=true in the call to
virtio_queue_aio_set_host_notifier_handler.  Without that change,
virtio_queue_host_notifier_aio_read can run the virtqueue handler in
the dataplane thread concurrently with the same handler in the main
I/O thread.  I consider that a fix for a latent/unknown bug, since we
are having so many headaches with reentrancy.

Of course I'm okay with dropping 9/9 (that's why I put it last), and
it probably was not 2.6 material even if it worked.

Paolo
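
For readers following the API change, here is a minimal, self-contained
sketch of the convention the patch converges on.  This is not QEMU code:
the stub types below stand in for QEMU's real VirtQueue and AioContext,
and set_host_notifier_handler() models the merged function.  The point
it illustrates is that a single call now both installs the handler and
hooks the host notifier, and passing NULL detaches both, so dataplane
start and stop become symmetric single calls.

    /* Sketch only; stand-in types, not QEMU's. */
    #include <stddef.h>
    #include <stdio.h>

    typedef struct VirtIODevice VirtIODevice;
    typedef struct VirtQueue VirtQueue;
    typedef void VirtQueueHandler(VirtIODevice *vdev, VirtQueue *vq);

    struct VirtQueue {
        VirtQueueHandler *handle_aio_output; /* what the merged call sets */
        int notifier_hooked;                 /* models aio_set_event_notifier() */
    };

    typedef struct { const char *name; } AioContext;

    /* Models the merged API: a NULL handler means "detach from ctx". */
    static void set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                          VirtQueueHandler *handle_output)
    {
        vq->handle_aio_output = handle_output;
        vq->notifier_hooked = (handle_output != NULL);
        printf("%s: notifier %s\n", ctx->name,
               handle_output ? "hooked" : "unhooked");
    }

    static void my_handler(VirtIODevice *vdev, VirtQueue *vq)
    {
        (void)vdev; (void)vq;   /* would process the virtqueue */
    }

    int main(void)
    {
        AioContext iothread = { "iothread" };
        VirtQueue vq = { 0 };

        /* start: one call replaces the old set_queue_aio + assign pair */
        set_host_notifier_handler(&vq, &iothread, my_handler);
        /* stop: NULL both clears the handler and unhooks the notifier */
        set_host_notifier_handler(&vq, &iothread, NULL);
        return 0;
    }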
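The reentrancy Paolo describes can also be reproduced outside QEMU with
a toy program (again an illustration, not QEMU code): two threads stand
in for the main loop and the dataplane AioContext, and both invoke the
same handler, racing on its unprotected state.  Build with -pthread; on
most runs it prints the "reentered" message at least once.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int in_handler;   /* unprotected state, like vq internals */

    static void handle_output(void)
    {
        if (atomic_fetch_add(&in_handler, 1) != 0) {
            puts("reentered: handler ran on two threads at once");
        }
        /* ... process the virtqueue ... */
        atomic_fetch_sub(&in_handler, 1);
    }

    static void *event_loop(void *arg)
    {
        (void)arg;
        /* both "event loops" see the notifier and call the handler */
        for (int i = 0; i < 1000000; i++) {
            handle_output();
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t main_loop, dataplane;
        pthread_create(&main_loop, NULL, event_loop, NULL);
        pthread_create(&dataplane, NULL, event_loop, NULL);
        pthread_join(main_loop, NULL);
        pthread_join(dataplane, NULL);
        return 0;
    }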