From: Paolo Bonzini <pbonzini@redhat.com>
Date: Fri, 1 Apr 2016 15:19:46 +0200
Message-Id: <1459516794-23629-2-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1459516794-23629-1-git-send-email-pbonzini@redhat.com>
References: <1459516794-23629-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
To: qemu-devel@nongnu.org
Cc: famz@redhat.com, tubo@linux.vnet.ibm.com, mst@redhat.com, borntraeger@de.ibm.com, stefanha@redhat.com, cornelia.huck@de.ibm.com

There is no need to run the handler one last time: the device is being
reset, and it is okay to drop the requests that are still pending in the
virtqueue. Even in the case of migration, those requests would be
processed when ioeventfd is re-enabled on the destination side:
virtio_queue_set_host_notifier_fd_handler will call
virtio_queue_host_notifier_read, which will start dataplane; the host
notifier is then connected to the I/O thread and event_notifier_set is
called to start processing it.

By skipping this final run of the handler, we remove a possible source of
races between the dataplane thread on one side and the main/vCPU threads
on the other.

virtio_queue_aio_set_host_notifier_handler is now only ever called with
assign=true, but this is left as-is for now because the function's
parameters will change soon anyway.
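For reference, here is a rough sketch of the flag semantics the stop paths
rely on; this is an illustrative approximation, not the verbatim body of
the helper in hw/virtio/virtio.c:

    void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq,
                                                    AioContext *ctx,
                                                    bool assign,
                                                    bool set_handler)
    {
        if (assign && set_handler) {
            /* attach the read handler to the notifier in the AioContext */
            aio_set_event_notifier(ctx, &vq->host_notifier, true,
                                   virtio_queue_host_notifier_read);
        } else {
            /* register/keep the notifier, but with no handler attached */
            aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
        }
        if (!assign) {
            /* the "one last time" run of the handler that this patch
             * avoids by passing assign=true from the stop paths */
            virtio_queue_host_notifier_read(&vq->host_notifier);
        }
    }

With assign=true and set_handler=false the notifier is simply left without
a handler and nothing else happens, which is what the dataplane stop code
below wants.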
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 +-
 hw/scsi/virtio-scsi-dataplane.c | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index e666dd4..fddd3ab 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     aio_context_acquire(s->ctx);
 
     /* Stop notifications for new requests from guest */
-    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
 
     /* Drain and switch bs back to the QEMU main loop */
     blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index b44ac5d..21d5bfd 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
     VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
     int i;
 
-    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
-    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+    virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
     for (i = 0; i < vs->conf.num_queues; i++) {
-        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+        virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
     }
 }
-- 
1.8.3.1