* [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:23 ` Cornelia Huck
2016-03-30 13:44 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
` (7 subsequent siblings)
8 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
There is no need to run the handler one last time; the device is being
reset and it is okay to drop requests that are pending in the virtqueue.
By omitting this call, we dodge a possible cause of races between the
dataplane thread on one side and the main/vCPU threads on the other.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 2 +-
hw/scsi/virtio-scsi-dataplane.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
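Not part of the patch: for context, the (assign, set_handler) pair selects
one of three behaviors in virtio_queue_aio_set_host_notifier_handler. A
rough sketch of the relevant logic, as it stands before patch 8 reworks it:

    if (assign && set_handler) {
        /* install the handler in the AioContext */
        aio_set_event_notifier(ctx, &vq->host_notifier, true,
                               virtio_queue_host_notifier_read);
    } else {
        /* remove any previously installed handler */
        aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
    }
    if (!assign) {
        /* assign=false also runs the handler one final time; passing
         * assign=true with set_handler=false skips exactly this call */
        virtio_queue_host_notifier_read(&vq->host_notifier);
    }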
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 36f3d2b..0d76110 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -261,7 +261,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 367e476..c57480e 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,10 +71,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
}
}
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-03-30 13:23 ` Cornelia Huck
2016-03-30 13:30 ` Paolo Bonzini
2016-03-30 13:44 ` Cornelia Huck
1 sibling, 1 reply; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:23 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:00 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> There is no need to run the handler one last time; the device is being
> reset and it is okay to drop requests that are pending in the virtqueue.
What about virtio_blk_save()? Could there be any pending requests in
that case?
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 13:23 ` Cornelia Huck
@ 2016-03-30 13:30 ` Paolo Bonzini
2016-03-30 13:43 ` Cornelia Huck
0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 13:30 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 15:23, Cornelia Huck wrote:
> > There is no need to run the handler one last time; the device is being
> > reset and it is okay to drop requests that are pending in the virtqueue.
>
> What about virtio_blk_save()? Could there be any pending requests in
> that case?
Those would be processed when dataplane is restarted on the destination
side, I think. virtio_queue_set_host_notifier_fd_handler calls
virtio_queue_host_notifier_read which connects the host notifier to the
I/O thread and calls event_notifier_set to start processing it.
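Roughly, the kick that makes this work looks like the following (a sketch
from memory, not the exact code):

    /* when dataplane starts on the destination, setting the host
     * notifier lets the newly installed handler pick up requests
     * that were still pending in the vring when the VM was saved */
    event_notifier_set(virtio_queue_get_host_notifier(vq));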
Paolo
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 13:30 ` Paolo Bonzini
@ 2016-03-30 13:43 ` Cornelia Huck
0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:43 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 15:30:04 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 30/03/2016 15:23, Cornelia Huck wrote:
> > > There is no need to run the handler one last time; the device is being
> > > reset and it is okay to drop requests that are pending in the virtqueue.
> >
> > What about virtio_blk_save()? Could there be any pending requests in
> > that case?
>
> Those would be processed when dataplane is restarted on the destination
> side, I think. virtio_queue_set_host_notifier_fd_handler calls
> virtio_queue_host_notifier_read which connects the host notifier to the
> I/O thread and calls event_notifier_set to start processing it.
Makes sense; maybe mention that in the patch description as well?
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-03-30 13:23 ` Cornelia Huck
@ 2016-03-30 13:44 ` Cornelia Huck
1 sibling, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:44 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:00 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> There is no need to run the handler one last time; the device is being
> reset and it is okay to drop requests that are pending in the virtqueue.
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:39 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
` (6 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 2 +-
include/hw/virtio/virtio.h | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 08275a9..07c45b6 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1086,7 +1086,7 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
virtio_queue_update_rings(vdev, n);
}
-void virtio_queue_notify_vq(VirtQueue *vq)
+static void virtio_queue_notify_vq(VirtQueue *vq)
{
if (vq->vring.desc && vq->handle_output) {
VirtIODevice *vdev = vq->vdev;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2b5b248..5afb51c 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -252,7 +252,6 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
bool set_handler);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
bool assign, bool set_handler);
-void virtio_queue_notify_vq(VirtQueue *vq);
void virtio_irq(VirtQueue *vq);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
1.8.3.1
* [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:57 ` Cornelia Huck
2016-03-31 8:32 ` tu bo
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
` (5 subsequent siblings)
8 siblings, 2 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
The missing check on dataplane_disabled caused a segmentation
fault in notify_guest_bh, because s->guest_notifier was NULL.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 7 +++----
hw/block/virtio-blk.c | 2 +-
include/hw/virtio/virtio-blk.h | 1 +
3 files changed, 5 insertions(+), 5 deletions(-)
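Not part of the patch: the crash path looks roughly like this when
dataplane failed to start (a sketch; the real notify_guest_bh has a
little more logic):

    static void notify_guest_bh(void *opaque)
    {
        VirtIOBlockDataPlane *s = opaque;
        /* s->guest_notifier was never set up because dataplane was
         * disabled, so this dereferences a NULL pointer */
        event_notifier_set(s->guest_notifier);
    }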
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 0d76110..378feb3 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -28,7 +28,6 @@
struct VirtIOBlockDataPlane {
bool starting;
bool stopping;
- bool disabled;
VirtIOBlkConf *conf;
@@ -233,7 +232,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
fail_host_notifier:
k->set_guest_notifiers(qbus->parent, 1, false);
fail_guest_notifiers:
- s->disabled = true;
+ vblk->dataplane_disabled = true;
s->starting = false;
vblk->dataplane_started = true;
}
@@ -250,8 +249,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
}
/* Better luck next time. */
- if (s->disabled) {
- s->disabled = false;
+ if (vblk->dataplane_disabled) {
+ vblk->dataplane_disabled = false;
vblk->dataplane_started = false;
return;
}
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index d0b8248..77221c1 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -53,7 +53,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
stb_p(&req->in->status, status);
virtqueue_push(s->vq, &req->elem, req->in_len);
- if (s->dataplane) {
+ if (s->dataplane_started && !s->dataplane_disabled) {
virtio_blk_data_plane_notify(s->dataplane);
} else {
virtio_notify(vdev, s->vq);
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 5cb66cd..073c632 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
unsigned short sector_mask;
bool original_wce;
VMChangeStateEntry *change;
+ bool dataplane_disabled;
bool dataplane_started;
int reentrancy_test;
struct VirtIOBlockDataPlane *dataplane;
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
@ 2016-03-30 13:57 ` Cornelia Huck
2016-03-31 8:32 ` tu bo
1 sibling, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:57 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:02 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> The missing check on dataplane_disabled caused a segmentation
> fault in notify_guest_bh, because s->guest_notifier was NULL.
I think this patch description could be improved :)
"We must not call virtio_blk_data_plane_notify if dataplane is
disabled: we would hit a segmentation fault in notify_guest_bh as
s->guest_notifier has not been setup and is NULL."
?
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/block/virtio-blk.c | 2 +-
> include/hw/virtio/virtio-blk.h | 1 +
> 3 files changed, 5 insertions(+), 5 deletions(-)
Patch looks good:
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
2016-03-30 13:57 ` Cornelia Huck
@ 2016-03-31 8:32 ` tu bo
2016-03-31 8:36 ` Paolo Bonzini
1 sibling, 1 reply; 29+ messages in thread
From: tu bo @ 2016-03-31 8:32 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Hi Paolo:
On 03/30/2016 08:48 PM, Paolo Bonzini wrote:
> The missing check on dataplane_disabled caused a segmentation
> fault in notify_guest_bh, because s->guest_notifier was NULL.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/block/virtio-blk.c | 2 +-
> include/hw/virtio/virtio-blk.h | 1 +
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 0d76110..378feb3 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -28,7 +28,6 @@
> struct VirtIOBlockDataPlane {
> bool starting;
> bool stopping;
> - bool disabled;
>
> VirtIOBlkConf *conf;
>
> @@ -233,7 +232,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
> fail_host_notifier:
> k->set_guest_notifiers(qbus->parent, 1, false);
> fail_guest_notifiers:
> - s->disabled = true;
> + vblk->dataplane_disabled = true;
> s->starting = false;
> vblk->dataplane_started = true;
> }
> @@ -250,8 +249,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
> }
>
> /* Better luck next time. */
> - if (s->disabled) {
> - s->disabled = false;
> + if (vblk->dataplane_disabled) {
> + vblk->dataplane_disabled = false;
> vblk->dataplane_started = false;
> return;
> }
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index d0b8248..77221c1 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -53,7 +53,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
>
> stb_p(&req->in->status, status);
> virtqueue_push(s->vq, &req->elem, req->in_len);
> - if (s->dataplane) {
> + if (s->dataplane_started && !s->dataplane_disabled) {
> virtio_blk_data_plane_notify(s->dataplane);
> } else {
> virtio_notify(vdev, s->vq);
> diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
> index 5cb66cd..073c632 100644
> --- a/include/hw/virtio/virtio-blk.h
> +++ b/include/hw/virtio/virtio-blk.h
> @@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
> unsigned short sector_mask;
> bool original_wce;
> VMChangeStateEntry *change;
> + bool dataplane_disabled;
> bool dataplane_started;
> int reentrancy_test;
There is no "int reentrancy_test;" in the latest qemu master. thx
> struct VirtIOBlockDataPlane *dataplane;
>
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-31 8:32 ` tu bo
@ 2016-03-31 8:36 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-31 8:36 UTC (permalink / raw)
To: tu bo, qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
On 31/03/2016 10:32, tu bo wrote:
>>
>> diff --git a/include/hw/virtio/virtio-blk.h
>> b/include/hw/virtio/virtio-blk.h
>> index 5cb66cd..073c632 100644
>> --- a/include/hw/virtio/virtio-blk.h
>> +++ b/include/hw/virtio/virtio-blk.h
>> @@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
>> unsigned short sector_mask;
>> bool original_wce;
>> VMChangeStateEntry *change;
>> + bool dataplane_disabled;
>> bool dataplane_started;
>> int reentrancy_test;
>
> There is no "int reentrancy_test;" in the latest qemu master. thx
Indeed, I will resend a fixed version soon.
Paolo
* [Qemu-devel] [PATCH 4/9] virtio-scsi: fix disabled mode
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (2 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 14:52 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
` (4 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Add two missing checks for s->dataplane_fenced. In one case, QEMU
would skip injecting an IRQ due to a write to an uninitialized
EventNotifier's file descriptor.
In the second case, the dataplane_disabled field was used by mistake;
in fact after fixing this occurrence it is completely unused.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/scsi/virtio-scsi.c | 4 ++--
include/hw/virtio/virtio-scsi.h | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
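Not part of the patch: for illustration, the completion path now picks
the notification mechanism explicitly:

    if (s->dataplane_started && !s->dataplane_fenced) {
        /* dataplane really owns the queue: use its EventNotifier */
        virtio_scsi_dataplane_notify(vdev, req);
    } else {
        /* fenced or no dataplane: ordinary interrupt injection */
        virtio_notify(vdev, vq);
    }

Without the dataplane_fenced check, the first branch would write to an
EventNotifier whose file descriptor was never initialized, and the guest
would never see the interrupt.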
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 0c30d2e..ac531e2 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -67,7 +67,7 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
qemu_iovec_from_buf(&req->resp_iov, 0, &req->resp, req->resp_size);
virtqueue_push(vq, &req->elem, req->qsgl.size + req->resp_iov.size);
- if (s->dataplane_started) {
+ if (s->dataplane_started && !s->dataplane_fenced) {
virtio_scsi_dataplane_notify(vdev, req);
} else {
virtio_notify(vdev, vq);
@@ -772,7 +772,7 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
SCSIDevice *sd = SCSI_DEVICE(dev);
- if (s->ctx && !s->dataplane_disabled) {
+ if (s->ctx && !s->dataplane_fenced) {
VirtIOSCSIBlkChangeNotifier *insert_notifier, *remove_notifier;
if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 209eaa4..eef4e95 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -91,7 +91,6 @@ typedef struct VirtIOSCSI {
bool dataplane_started;
bool dataplane_starting;
bool dataplane_stopping;
- bool dataplane_disabled;
bool dataplane_fenced;
Error *blocker;
uint32_t host_features;
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 4/9] virtio-scsi: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 14:52 ` Cornelia Huck
0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 14:52 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:03 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Add two missing checks for s->dataplane_fenced. In one case, QEMU
> would skip injecting an IRQ due to a write to an uninitialized
> EventNotifier's file descriptor.
>
> In the second case, the dataplane_disabled field was used by mistake;
> in fact after fixing this occurrence it is completely unused.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/scsi/virtio-scsi.c | 4 ++--
> include/hw/virtio/virtio-scsi.h | 1 -
> 2 files changed, 2 insertions(+), 3 deletions(-)
It's a bit confusing that the disabled mode for virtio-scsi dataplane
uses 'fenced' instead of 'disabled', but 'disabled' was already used...
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (3 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 14:55 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
` (3 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
From: "Michael S. Tsirkin" <mst@redhat.com>
In addition to handling I/O in the vcpu thread and in the I/O thread,
blk dataplane introduces yet another mode: handling it by aio.
This reuses the same handler as the previous modes, which triggers races
because those handlers were not designed to be reentrant.
As a temporary fix, add a separate handler just for aio; this will make
it possible to disable the regular handlers when dataplane is active.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 36 ++++++++++++++++++++++++++++++++----
include/hw/virtio/virtio.h | 3 +++
2 files changed, 35 insertions(+), 4 deletions(-)
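Not part of the patch: a usage sketch of the new API (the device and its
handlers are hypothetical):

    /* the usual registration keeps the vcpu/io-thread handler */
    vq = virtio_add_queue(vdev, 128, my_handle_output);

    /* the new call installs a second handler that is used only when
     * the queue is serviced from an AioContext; NULL removes it */
    virtio_set_queue_aio(vq, my_handle_output_dataplane);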
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 07c45b6..39a9f47 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -94,6 +94,7 @@ struct VirtQueue
uint16_t vector;
void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+ void (*handle_aio_output)(VirtIODevice *vdev, VirtQueue *vq);
VirtIODevice *vdev;
EventNotifier guest_notifier;
EventNotifier host_notifier;
@@ -1086,6 +1087,16 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
virtio_queue_update_rings(vdev, n);
}
+static void virtio_queue_notify_aio_vq(VirtQueue *vq)
+{
+ if (vq->vring.desc && vq->handle_aio_output) {
+ VirtIODevice *vdev = vq->vdev;
+
+ trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
+ vq->handle_aio_output(vdev, vq);
+ }
+}
+
static void virtio_queue_notify_vq(VirtQueue *vq)
{
if (vq->vring.desc && vq->handle_output) {
@@ -1141,10 +1152,19 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
vdev->vq[i].vring.num_default = queue_size;
vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
vdev->vq[i].handle_output = handle_output;
+ vdev->vq[i].handle_aio_output = NULL;
return &vdev->vq[i];
}
+void virtio_set_queue_aio(VirtQueue *vq,
+ void (*handle_output)(VirtIODevice *, VirtQueue *))
+{
+ assert(vq->handle_output);
+
+ vq->handle_aio_output = handle_output;
+}
+
void virtio_del_queue(VirtIODevice *vdev, int n)
{
if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1778,11 +1798,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
return &vq->guest_notifier;
}
-static void virtio_queue_host_notifier_read(EventNotifier *n)
+static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
{
VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
if (event_notifier_test_and_clear(n)) {
- virtio_queue_notify_vq(vq);
+ virtio_queue_notify_aio_vq(vq);
}
}
@@ -1791,14 +1811,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
{
if (assign && set_handler) {
aio_set_event_notifier(ctx, &vq->host_notifier, true,
- virtio_queue_host_notifier_read);
+ virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
}
if (!assign) {
/* Test and clear notifier before after disabling event,
* in case poll callback didn't have time to run. */
- virtio_queue_host_notifier_read(&vq->host_notifier);
+ virtio_queue_host_notifier_aio_read(&vq->host_notifier);
+ }
+}
+
+static void virtio_queue_host_notifier_read(EventNotifier *n)
+{
+ VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
+ if (event_notifier_test_and_clear(n)) {
+ virtio_queue_notify_vq(vq);
}
}
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 5afb51c..fa3f93b 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,6 +142,9 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
void (*handle_output)(VirtIODevice *,
VirtQueue *));
+void virtio_set_queue_aio(VirtQueue *vq,
+ void (*handle_output)(VirtIODevice *, VirtQueue *));
+
void virtio_del_queue(VirtIODevice *vdev, int n);
void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-03-30 14:55 ` Cornelia Huck
2016-03-30 15:09 ` Paolo Bonzini
0 siblings, 1 reply; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 14:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:04 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> In addition to handling I/O in the vcpu thread and in the I/O thread,
> blk dataplane introduces yet another mode: handling it by aio.
>
> This reuses the same handler as the previous modes, which triggers races
> because those handlers were not designed to be reentrant.
>
> As a temporary fix, add a separate handler just for aio; this will make
> it possible to disable the regular handlers when dataplane is active.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/virtio/virtio.c | 36 ++++++++++++++++++++++++++++++++----
> include/hw/virtio/virtio.h | 3 +++
> 2 files changed, 35 insertions(+), 4 deletions(-)
>
> @@ -1791,14 +1811,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> {
> if (assign && set_handler) {
> aio_set_event_notifier(ctx, &vq->host_notifier, true,
> - virtio_queue_host_notifier_read);
> + virtio_queue_host_notifier_aio_read);
> } else {
> aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
> }
> if (!assign) {
> /* Test and clear notifier before after disabling event,
> * in case poll callback didn't have time to run. */
> - virtio_queue_host_notifier_read(&vq->host_notifier);
> + virtio_queue_host_notifier_aio_read(&vq->host_notifier);
Is this function ever called with assign==false anymore after patch 1?
> + }
> +}
* Re: [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 14:55 ` Cornelia Huck
@ 2016-03-30 15:09 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 15:09 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 16:55, Cornelia Huck wrote:
>> > if (assign && set_handler) {
>> > aio_set_event_notifier(ctx, &vq->host_notifier, true,
>> > - virtio_queue_host_notifier_read);
>> > + virtio_queue_host_notifier_aio_read);
>> > } else {
>> > aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>> > }
>> > if (!assign) {
>> > /* Test and clear notifier before after disabling event,
>> > * in case poll callback didn't have time to run. */
>> > - virtio_queue_host_notifier_read(&vq->host_notifier);
>> > + virtio_queue_host_notifier_aio_read(&vq->host_notifier);
> Is this function ever called with assign==false anymore after patch 1?
Patch 8 provides the answer (which is no :)).
Paolo
* [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (4 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:25 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
` (2 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
From: "Michael S. Tsirkin" <mst@redhat.com>
In addition to handling I/O in the vcpu thread and in the I/O thread,
dataplane introduces yet another mode: handling it by aio.
This reuses the same handler as the previous modes, which triggers races
because those handlers were not designed to be reentrant.
Use a separate handler just for aio, and disable the regular handlers
when dataplane is active.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
hw/block/virtio-blk.c | 27 +++++++++++++++++----------
include/hw/virtio/virtio-blk.h | 2 ++
3 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 378feb3..4a137df 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -183,6 +183,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
g_free(s);
}
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+ assert(s->dataplane);
+ assert(s->dataplane_started);
+
+ virtio_blk_handle_vq(s, vq);
+}
+
/* Context: QEMU global mutex held */
void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
{
@@ -225,6 +236,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
+ virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
aio_context_release(s->ctx);
return;
@@ -261,6 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
/* Stop notifications for new requests from guest */
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
+ virtio_set_queue_aio(s->vq, NULL);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 77221c1..47ad9ed 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -577,20 +577,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
}
}
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
{
- VirtIOBlock *s = VIRTIO_BLK(vdev);
VirtIOBlockReq *req;
MultiReqBuffer mrb = {};
- /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
- * dataplane here instead of waiting for .set_status().
- */
- if (s->dataplane && !s->dataplane_started) {
- virtio_blk_data_plane_start(s->dataplane);
- return;
- }
-
assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
blk_io_plug(s->blk);
@@ -606,6 +597,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
atomic_dec(&s->reentrancy_test);
}
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+ if (s->dataplane) {
+ /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+ * dataplane here instead of waiting for .set_status().
+ */
+ virtio_blk_data_plane_start(s->dataplane);
+ if (!s->dataplane_disabled) {
+ return;
+ }
+ }
+ virtio_blk_handle_vq(s, vq);
+}
+
static void virtio_blk_dma_restart_bh(void *opaque)
{
VirtIOBlock *s = opaque;
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 073c632..b5117a8 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -87,4 +87,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
#endif
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-03-30 15:25 ` Cornelia Huck
2016-03-30 15:42 ` Paolo Bonzini
0 siblings, 1 reply; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:25 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:05 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> In addition to handling I/O in the vcpu thread and in the I/O thread,
> dataplane introduces yet another mode: handling it by aio.
>
> This reuses the same handler as the previous modes, which triggers races
> because those handlers were not designed to be reentrant.
>
> Use a separate handler just for aio, and disable the regular handlers
> when dataplane is active.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
> hw/block/virtio-blk.c | 27 +++++++++++++++++----------
> include/hw/virtio/virtio-blk.h | 2 ++
> 3 files changed, 32 insertions(+), 10 deletions(-)
>
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 77221c1..47ad9ed 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -577,20 +577,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
> }
> }
>
> -static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> +void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
> {
> - VirtIOBlock *s = VIRTIO_BLK(vdev);
> VirtIOBlockReq *req;
> MultiReqBuffer mrb = {};
>
> - /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
> - * dataplane here instead of waiting for .set_status().
> - */
> - if (s->dataplane && !s->dataplane_started) {
> - virtio_blk_data_plane_start(s->dataplane);
> - return;
> - }
> -
> assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
Hm, based on wrong branch?
> blk_io_plug(s->blk);
>
* Re: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 15:25 ` Cornelia Huck
@ 2016-03-30 15:42 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 15:42 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 17:25, Cornelia Huck wrote:
>> >
>> > - /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
>> > - * dataplane here instead of waiting for .set_status().
>> > - */
>> > - if (s->dataplane && !s->dataplane_started) {
>> > - virtio_blk_data_plane_start(s->dataplane);
>> > - return;
>> > - }
>> > -
>> > assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
> Hm, based on wrong branch?
>
Indeed I had the testing patch in there for, well, testing. I'll repost
with improved commit messages and the assertion removed.
Paolo
* [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (5 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:31 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
In addition to handling I/O in the vcpu thread and in the I/O thread,
dataplane introduces yet another mode: handling it by aio.
This reuses the same handler as the previous modes, which triggers races
because those handlers were not designed to be reentrant.
Use a separate handler just for aio, and disable the regular handlers
when dataplane is active.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
hw/scsi/virtio-scsi.c | 65 ++++++++++++++++++++++++++++-------------
include/hw/virtio/virtio-scsi.h | 6 ++--
3 files changed, 86 insertions(+), 28 deletions(-)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index c57480e..0fa5bb5 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -39,7 +39,35 @@ void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread)
}
}
-static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
+static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_cmd_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_event_vq(s, vq);
+}
+
+static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
+ void (*fn)(VirtIODevice *vdev, VirtQueue *vq))
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
@@ -55,6 +83,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
}
virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
+ virtio_set_queue_aio(vq, fn);
return 0;
}
@@ -72,9 +101,12 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
int i;
virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+ virtio_set_queue_aio(vs->ctrl_vq, NULL);
virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
+ virtio_set_queue_aio(vs->event_vq, NULL);
for (i = 0; i < vs->conf.num_queues; i++) {
virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
+ virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
}
}
@@ -105,16 +137,19 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
}
aio_context_acquire(s->ctx);
- rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0);
+ rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0,
+ virtio_scsi_data_plane_handle_ctrl);
if (rc) {
goto fail_vrings;
}
- rc = virtio_scsi_vring_init(s, vs->event_vq, 1);
+ rc = virtio_scsi_vring_init(s, vs->event_vq, 1,
+ virtio_scsi_data_plane_handle_event);
if (rc) {
goto fail_vrings;
}
for (i = 0; i < vs->conf.num_queues; i++) {
- rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2);
+ rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2,
+ virtio_scsi_data_plane_handle_cmd);
if (rc) {
goto fail_vrings;
}
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index ac531e2..7342d26 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -373,7 +373,7 @@ fail:
return ret;
}
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
VirtIODevice *vdev = (VirtIODevice *)s;
uint32_t type;
@@ -411,20 +411,28 @@ void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
}
}
-static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
{
- VirtIOSCSI *s = (VirtIOSCSI *)vdev;
VirtIOSCSIReq *req;
- if (s->ctx && !s->dataplane_started) {
- virtio_scsi_dataplane_start(s);
- return;
- }
while ((req = virtio_scsi_pop_req(s, vq))) {
virtio_scsi_handle_ctrl_req(s, req);
}
}
+static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ if (s->ctx) {
+ virtio_scsi_dataplane_start(s);
+ if (!s->dataplane_fenced) {
+ return;
+ }
+ }
+ virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
static void virtio_scsi_complete_cmd_req(VirtIOSCSIReq *req)
{
/* Sense data is not in req->resp and is copied separately
@@ -507,7 +515,7 @@ static void virtio_scsi_fail_cmd_req(VirtIOSCSIReq *req)
virtio_scsi_complete_cmd_req(req);
}
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
VirtIOSCSICommon *vs = &s->parent_obj;
SCSIDevice *d;
@@ -549,7 +557,7 @@ bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
return true;
}
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
SCSIRequest *sreq = req->sreq;
if (scsi_req_enqueue(sreq)) {
@@ -559,17 +567,11 @@ void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
scsi_req_unref(sreq);
}
-static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
{
- /* use non-QOM casts in the data path */
- VirtIOSCSI *s = (VirtIOSCSI *)vdev;
VirtIOSCSIReq *req, *next;
QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
- if (s->ctx && !s->dataplane_started) {
- virtio_scsi_dataplane_start(s);
- return;
- }
while ((req = virtio_scsi_pop_req(s, vq))) {
if (virtio_scsi_handle_cmd_req_prepare(s, req)) {
QTAILQ_INSERT_TAIL(&reqs, req, next);
@@ -581,6 +583,20 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
}
}
+static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+{
+ /* use non-QOM casts in the data path */
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ if (s->ctx) {
+ virtio_scsi_dataplane_start(s);
+ if (!s->dataplane_fenced) {
+ return;
+ }
+ }
+ virtio_scsi_handle_cmd_vq(s, vq);
+}
+
static void virtio_scsi_get_config(VirtIODevice *vdev,
uint8_t *config)
{
@@ -724,17 +740,24 @@ out:
}
}
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
+{
+ if (s->events_dropped) {
+ virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+ }
+}
+
static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
- if (s->ctx && !s->dataplane_started) {
+ if (s->ctx) {
virtio_scsi_dataplane_start(s);
- return;
- }
- if (s->events_dropped) {
- virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+ if (!s->dataplane_fenced) {
+ return;
+ }
}
+ virtio_scsi_handle_event_vq(s, vq);
}
static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index eef4e95..ba2f5ce 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -139,9 +139,9 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
HandleOutput cmd);
void virtio_scsi_common_unrealize(DeviceState *dev, Error **errp);
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req);
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req);
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req);
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
void virtio_scsi_free_req(VirtIOSCSIReq *req);
void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 15:31 ` Cornelia Huck
0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:31 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:06 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> In addition to handling I/O in the vcpu thread and in the I/O thread,
> dataplane introduces yet another mode: handling it by aio.
>
> This reuses the same handler as the previous modes, which triggers races
> because those handlers were not designed to be reentrant.
>
> Use a separate handler just for aio, and disable the regular handlers
> when dataplane is active.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
> hw/scsi/virtio-scsi.c | 65 ++++++++++++++++++++++++++++-------------
> include/hw/virtio/virtio-scsi.h | 6 ++--
> 3 files changed, 86 insertions(+), 28 deletions(-)
>
> +static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
> + void (*fn)(VirtIODevice *vdev, VirtQueue *vq))
This function has been sadly misnamed since dataplane does not set up
its own vrings anymore. Maybe it should be called
virtio_scsi_vq_notifier_init() or so.
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (6 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:33 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Eliminating the reentrancy is actually a nice thing that we can do
with the API that Michael proposed, so let's make it first class.
This also hides the complex assign/set_handler conventions from
callers of virtio_queue_aio_set_host_notifier_handler, and fixes
a possible race that could happen when passing assign=false to
virtio_queue_aio_set_host_notifier_handler.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 7 +++----
hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
hw/virtio/virtio.c | 19 ++++---------------
include/hw/virtio/virtio.h | 6 ++----
4 files changed, 13 insertions(+), 31 deletions(-)
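Not part of the patch: the calling convention before and after this
change, side by side (sketch):

    /* start, before: */
    virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, true);
    virtio_set_queue_aio(vq, fn);
    /* stop, before: */
    virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, false);
    virtio_set_queue_aio(vq, NULL);

    /* start, after: */
    virtio_queue_aio_set_host_notifier_handler(vq, ctx, fn);
    /* stop, after: a NULL handler removes the notifier handler */
    virtio_queue_aio_set_host_notifier_handler(vq, ctx, NULL);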
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 4a137df..7e50d01 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -236,8 +236,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
- virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
+ virtio_blk_data_plane_handle_output);
aio_context_release(s->ctx);
return;
@@ -272,8 +272,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
- virtio_set_queue_aio(s->vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 0fa5bb5..8694577 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -82,8 +82,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
return rc;
}
- virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
- virtio_set_queue_aio(vq, fn);
+ virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
return 0;
}
@@ -100,13 +99,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->ctrl_vq, NULL);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->event_vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
- virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
}
}
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 39a9f47..b16d2d8 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1157,14 +1157,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
return &vdev->vq[i];
}
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *))
-{
- assert(vq->handle_output);
-
- vq->handle_aio_output = handle_output;
-}
-
void virtio_del_queue(VirtIODevice *vdev, int n)
{
if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1807,19 +1799,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
}
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler)
+ void (*handle_output)(VirtIODevice *,
+ VirtQueue *))
{
- if (assign && set_handler) {
+ vq->handle_aio_output = handle_output;
+ if (handle_output) {
aio_set_event_notifier(ctx, &vq->host_notifier, true,
virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
}
- if (!assign) {
- /* Test and clear notifier before after disabling event,
- * in case poll callback didn't have time to run. */
- virtio_queue_host_notifier_aio_read(&vq->host_notifier);
- }
}
static void virtio_queue_host_notifier_read(EventNotifier *n)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index fa3f93b..6a37065 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
void (*handle_output)(VirtIODevice *,
VirtQueue *));
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *));
-
void virtio_del_queue(VirtIODevice *vdev, int n);
void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
@@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
bool set_handler);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler);
+ void (*fn)(VirtIODevice *,
+ VirtQueue *));
void virtio_irq(VirtQueue *vq);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-03-30 15:33 ` Cornelia Huck
0 siblings, 0 replies; 29+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:33 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:07 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Eliminating the reentrancy is actually a nice thing that we can do
> with the API that Michael proposed, so let's make it first class.
> This also hides the complex assign/set_handler conventions from
> callers of virtio_queue_aio_set_host_notifier_handler, and fixes
> a possible race that could happen when passing assign=false to
> virtio_queue_aio_set_host_notifier_handler.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
> hw/virtio/virtio.c | 19 ++++---------------
> include/hw/virtio/virtio.h | 6 ++----
> 4 files changed, 13 insertions(+), 31 deletions(-)
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (7 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:36 ` Cornelia Huck
8 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Reentrancy cannot happen while the BQL is being held: the dataplane
start and stop functions only run with the BQL taken, so they can never
race with themselves and the starting/stopping flags are unnecessary.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 12 ++----------
hw/scsi/virtio-scsi-dataplane.c | 9 +--------
include/hw/virtio/virtio-scsi.h | 2 --
3 files changed, 3 insertions(+), 20 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 7e50d01..0cff427 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -26,9 +26,6 @@
#include "qom/object_interfaces.h"
struct VirtIOBlockDataPlane {
- bool starting;
- bool stopping;
-
VirtIOBlkConf *conf;
VirtIODevice *vdev;
@@ -202,11 +199,10 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
int r;
- if (vblk->dataplane_started || s->starting) {
+ if (vblk->dataplane_started) {
return;
}
- s->starting = true;
s->vq = virtio_get_queue(s->vdev, 0);
/* Set up guest notifier (irq) */
@@ -225,7 +221,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
goto fail_host_notifier;
}
- s->starting = false;
vblk->dataplane_started = true;
trace_virtio_blk_data_plane_start(s);
@@ -245,7 +240,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
k->set_guest_notifiers(qbus->parent, 1, false);
fail_guest_notifiers:
vblk->dataplane_disabled = true;
- s->starting = false;
vblk->dataplane_started = true;
}
@@ -256,7 +250,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
- if (!vblk->dataplane_started || s->stopping) {
+ if (!vblk->dataplane_started) {
return;
}
@@ -266,7 +260,6 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
vblk->dataplane_started = false;
return;
}
- s->stopping = true;
trace_virtio_blk_data_plane_stop(s);
aio_context_acquire(s->ctx);
@@ -285,5 +278,4 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
k->set_guest_notifiers(qbus->parent, 1, false);
vblk->dataplane_started = false;
- s->stopping = false;
}
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 8694577..ac41787 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -116,14 +116,11 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
if (s->dataplane_started ||
- s->dataplane_starting ||
s->dataplane_fenced ||
s->ctx != iothread_get_aio_context(vs->conf.iothread)) {
return;
}
- s->dataplane_starting = true;
-
/* Set up guest notifier (irq) */
rc = k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, true);
if (rc != 0) {
@@ -151,7 +148,6 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
}
}
- s->dataplane_starting = false;
s->dataplane_started = true;
aio_context_release(s->ctx);
return;
@@ -165,7 +161,6 @@ fail_vrings:
k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
fail_guest_notifiers:
s->dataplane_fenced = true;
- s->dataplane_starting = false;
s->dataplane_started = true;
}
@@ -177,7 +172,7 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- if (!s->dataplane_started || s->dataplane_stopping) {
+ if (!s->dataplane_started) {
return;
}
@@ -187,7 +182,6 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
s->dataplane_started = false;
return;
}
- s->dataplane_stopping = true;
assert(s->ctx == iothread_get_aio_context(vs->conf.iothread));
aio_context_acquire(s->ctx);
@@ -204,6 +198,5 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
/* Clean up guest notifier (irq) */
k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
- s->dataplane_stopping = false;
s->dataplane_started = false;
}
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index ba2f5ce..d5352d8 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -89,8 +89,6 @@ typedef struct VirtIOSCSI {
QTAILQ_HEAD(, VirtIOSCSIBlkChangeNotifier) remove_notifiers;
bool dataplane_started;
- bool dataplane_starting;
- bool dataplane_stopping;
bool dataplane_fenced;
Error *blocker;
uint32_t host_features;
--
1.8.3.1
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
2016-04-03 9:06 ` Michael S. Tsirkin
0 siblings, 1 reply; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck
Eliminating the reentrancy is actually a nice thing that we can do
with the API that Michael proposed, so let's make it first class.
This also hides the complex assign/set_handler conventions from
callers of virtio_queue_aio_set_host_notifier_handler, which in
fact was always called with assign=true.
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 7 +++----
hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
hw/virtio/virtio.c | 19 ++++---------------
include/hw/virtio/virtio.h | 6 ++----
4 files changed, 13 insertions(+), 31 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index fd06726..74c6d37 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
- virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
+ virtio_blk_data_plane_handle_output);
aio_context_release(s->ctx);
return;
@@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
- virtio_set_queue_aio(s->vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index a497a2c..5494dcc 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
return rc;
}
- virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
- virtio_set_queue_aio(vq, fn);
+ virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
return 0;
}
@@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->ctrl_vq, NULL);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->event_vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
- virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
}
}
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index eb04ac0..7fcfc24f 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
return &vdev->vq[i];
}
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *))
-{
- assert(vq->handle_output);
-
- vq->handle_aio_output = handle_output;
-}
-
void virtio_del_queue(VirtIODevice *vdev, int n)
{
if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
}
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler)
+ void (*handle_output)(VirtIODevice *,
+ VirtQueue *))
{
- if (assign && set_handler) {
+ vq->handle_aio_output = handle_output;
+ if (handle_output) {
aio_set_event_notifier(ctx, &vq->host_notifier, true,
virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
}
- if (!assign) {
- /* Test and clear notifier before after disabling event,
- * in case poll callback didn't have time to run. */
- virtio_queue_host_notifier_aio_read(&vq->host_notifier);
- }
}
static void virtio_queue_host_notifier_read(EventNotifier *n)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index fa3f93b..6a37065 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
void (*handle_output)(VirtIODevice *,
VirtQueue *));
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *));
-
void virtio_del_queue(VirtIODevice *vdev, int n);
void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
@@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
bool set_handler);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler);
+ void (*fn)(VirtIODevice *,
+ VirtQueue *));
void virtio_irq(VirtQueue *vq);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
1.8.3.1
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-04-01 13:19 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-04-03 9:06 ` Michael S. Tsirkin
2016-04-03 17:13 ` Paolo Bonzini
0 siblings, 1 reply; 29+ messages in thread
From: Michael S. Tsirkin @ 2016-04-03 9:06 UTC (permalink / raw)
To: Paolo Bonzini
Cc: famz, tubo, qemu-devel, borntraeger, stefanha, cornelia.huck
On Fri, Apr 01, 2016 at 03:19:53PM +0200, Paolo Bonzini wrote:
> Eliminating the reentrancy is actually a nice thing that we can do
> with the API that Michael proposed, so let's make it first class.
> This also hides the complex assign/set_handler conventions from
> callers of virtio_queue_aio_set_host_notifier_handler, which in
> fact was always called with assign=true.
>
> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
> hw/virtio/virtio.c | 19 ++++---------------
> include/hw/virtio/virtio.h | 6 ++----
> 4 files changed, 13 insertions(+), 31 deletions(-)
>
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index fd06726..74c6d37 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>
> /* Get this show started by hooking up our callbacks */
> aio_context_acquire(s->ctx);
> - virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
> - virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
> + virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
> + virtio_blk_data_plane_handle_output);
> aio_context_release(s->ctx);
> return;
>
> @@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
> aio_context_acquire(s->ctx);
>
> /* Stop notifications for new requests from guest */
> - virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
> - virtio_set_queue_aio(s->vq, NULL);
> + virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
>
> /* Drain and switch bs back to the QEMU main loop */
> blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index a497a2c..5494dcc 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
> return rc;
> }
>
> - virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
> - virtio_set_queue_aio(vq, fn);
> + virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
> return 0;
> }
>
> @@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
> VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
> int i;
>
> - virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
> - virtio_set_queue_aio(vs->ctrl_vq, NULL);
> - virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
> - virtio_set_queue_aio(vs->event_vq, NULL);
> + virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
> + virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
> for (i = 0; i < vs->conf.num_queues; i++) {
> - virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
> - virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
> + virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
> }
> }
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index eb04ac0..7fcfc24f 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
> return &vdev->vq[i];
> }
>
> -void virtio_set_queue_aio(VirtQueue *vq,
> - void (*handle_output)(VirtIODevice *, VirtQueue *))
> -{
> - assert(vq->handle_output);
> -
> - vq->handle_aio_output = handle_output;
> -}
> -
> void virtio_del_queue(VirtIODevice *vdev, int n)
> {
> if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
> @@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
> }
>
> void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> - bool assign, bool set_handler)
> + void (*handle_output)(VirtIODevice *,
> + VirtQueue *))
> {
> - if (assign && set_handler) {
> + vq->handle_aio_output = handle_output;
> + if (handle_output) {
> aio_set_event_notifier(ctx, &vq->host_notifier, true,
> virtio_queue_host_notifier_aio_read);
> } else {
> aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
> }
> - if (!assign) {
> - /* Test and clear notifier before after disabling event,
> - * in case poll callback didn't have time to run. */
> - virtio_queue_host_notifier_aio_read(&vq->host_notifier);
> - }
> }
>
> static void virtio_queue_host_notifier_read(EventNotifier *n)
This means that the caller is now responsible for invoking the
handler one final time after it sets handle_output = NULL.
I think it's cleaner to invoke it internally.
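Roughly like this, I mean (just a sketch against this patch's
signature, not a tested change): reorder things so the final read
happens while the old handler is still installed.

    void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                    void (*handle_output)(VirtIODevice *,
                                                                          VirtQueue *))
    {
        if (handle_output) {
            vq->handle_aio_output = handle_output;
            aio_set_event_notifier(ctx, &vq->host_notifier, true,
                                   virtio_queue_host_notifier_aio_read);
        } else {
            aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
            /* Test and clear the notifier after disabling the event,
             * in case the poll callback didn't have time to run; the
             * old handler is still in place at this point. */
            virtio_queue_host_notifier_aio_read(&vq->host_notifier);
            vq->handle_aio_output = NULL;
        }
    }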
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index fa3f93b..6a37065 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
> void (*handle_output)(VirtIODevice *,
> VirtQueue *));
>
> -void virtio_set_queue_aio(VirtQueue *vq,
> - void (*handle_output)(VirtIODevice *, VirtQueue *));
> -
> void virtio_del_queue(VirtIODevice *vdev, int n);
>
> void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
> @@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
> void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
> bool set_handler);
> void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> - bool assign, bool set_handler);
> + void (*fn)(VirtIODevice *,
> + VirtQueue *));
> void virtio_irq(VirtQueue *vq);
> VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
> VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
> --
> 1.8.3.1
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-04-03 9:06 ` Michael S. Tsirkin
@ 2016-04-03 17:13 ` Paolo Bonzini
0 siblings, 0 replies; 29+ messages in thread
From: Paolo Bonzini @ 2016-04-03 17:13 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: famz, borntraeger, qemu-devel, tubo, stefanha, cornelia.huck
On 03/04/2016 11:06, Michael S. Tsirkin wrote:
> On Fri, Apr 01, 2016 at 03:19:53PM +0200, Paolo Bonzini wrote:
>> Eliminating the reentrancy is actually a nice thing that we can do
>> with the API that Michael proposed, so let's make it first class.
>> This also hides the complex assign/set_handler conventions from
>> callers of virtio_queue_aio_set_host_notifier_handler, which in
>> fact was always called with assign=true.
>>
>> Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>> hw/block/dataplane/virtio-blk.c | 7 +++----
>> hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
>> hw/virtio/virtio.c | 19 ++++---------------
>> include/hw/virtio/virtio.h | 6 ++----
>> 4 files changed, 13 insertions(+), 31 deletions(-)
>>
>> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
>> index fd06726..74c6d37 100644
>> --- a/hw/block/dataplane/virtio-blk.c
>> +++ b/hw/block/dataplane/virtio-blk.c
>> @@ -237,8 +237,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
>>
>> /* Get this show started by hooking up our callbacks */
>> aio_context_acquire(s->ctx);
>> - virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
>> - virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
>> + virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
>> + virtio_blk_data_plane_handle_output);
>> aio_context_release(s->ctx);
>> return;
>>
>> @@ -273,8 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
>> aio_context_acquire(s->ctx);
>>
>> /* Stop notifications for new requests from guest */
>> - virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>> - virtio_set_queue_aio(s->vq, NULL);
>> + virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
>>
>> /* Drain and switch bs back to the QEMU main loop */
>> blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
>> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
>> index a497a2c..5494dcc 100644
>> --- a/hw/scsi/virtio-scsi-dataplane.c
>> +++ b/hw/scsi/virtio-scsi-dataplane.c
>> @@ -81,8 +81,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
>> return rc;
>> }
>>
>> - virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
>> - virtio_set_queue_aio(vq, fn);
>> + virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
>> return 0;
>> }
>>
>> @@ -99,13 +98,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
>> VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
>> int i;
>>
>> - virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
>> - virtio_set_queue_aio(vs->ctrl_vq, NULL);
>> - virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
>> - virtio_set_queue_aio(vs->event_vq, NULL);
>> + virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
>> + virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
>> for (i = 0; i < vs->conf.num_queues; i++) {
>> - virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
>> - virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
>> + virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
>> }
>> }
>>
>> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>> index eb04ac0..7fcfc24f 100644
>> --- a/hw/virtio/virtio.c
>> +++ b/hw/virtio/virtio.c
>> @@ -1159,14 +1159,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
>> return &vdev->vq[i];
>> }
>>
>> -void virtio_set_queue_aio(VirtQueue *vq,
>> - void (*handle_output)(VirtIODevice *, VirtQueue *))
>> -{
>> - assert(vq->handle_output);
>> -
>> - vq->handle_aio_output = handle_output;
>> -}
>> -
>> void virtio_del_queue(VirtIODevice *vdev, int n)
>> {
>> if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
>> @@ -1809,19 +1801,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
>> }
>>
>> void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
>> - bool assign, bool set_handler)
>> + void (*handle_output)(VirtIODevice *,
>> + VirtQueue *))
>> {
>> - if (assign && set_handler) {
>> + vq->handle_aio_output = handle_output;
>> + if (handle_output) {
>> aio_set_event_notifier(ctx, &vq->host_notifier, true,
>> virtio_queue_host_notifier_aio_read);
>> } else {
>> aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>> }
>> - if (!assign) {
>> - /* Test and clear notifier before after disabling event,
>> - * in case poll callback didn't have time to run. */
>> - virtio_queue_host_notifier_aio_read(&vq->host_notifier);
>> - }
>> }
>>
>> static void virtio_queue_host_notifier_read(EventNotifier *n)
>
> This means that caller is now responsible for invoking the
> handler after it sets handle_output = NULL.
> I think it's cleaner to invoke it internally.
No, the caller is not responsible for that. Ultimately, it's the virtio
core that will be responsible for setting up the handler again in the
main I/O thread, and that's how it will be called if it matters. This
will happen when the API is cleaned up further by Cornelia.
Note that this patch doesn't change the semantics; it's patch 1 that
changes assign=false to assign=true in the call to
virtio_queue_aio_set_host_notifier_handler. Without that change,
virtio_queue_host_notifier_aio_read can run the virtqueue handler in the
dataplane thread concurrently with the same handler in the main I/O
thread. I consider that a fix for a latent/unknown bug, since we are
having so many headaches with reentrancy.
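To see why that matters, here is a toy program (nothing QEMU-specific,
intentionally unsynchronized) in which the same handler is entered from
two threads at once because nothing serializes them:

    /* build: cc -pthread race-sketch.c */
    #include <pthread.h>
    #include <stdio.h>

    static volatile int in_handler;
    static volatile int overlap_seen;

    /* Stand-in for a virtqueue handler with no reentrancy guard. */
    static void handle_output(void)
    {
        if (in_handler) {
            overlap_seen = 1;   /* two threads inside at once */
        }
        in_handler++;
        /* ... process requests ... */
        in_handler--;
    }

    static void *notifier_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            handle_output();
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t dataplane, mainloop;
        pthread_create(&dataplane, NULL, notifier_thread, NULL); /* "dataplane thread" */
        pthread_create(&mainloop, NULL, notifier_thread, NULL);  /* "main I/O thread" */
        pthread_join(dataplane, NULL);
        pthread_join(mainloop, NULL);
        printf("overlap observed: %s\n", overlap_seen ? "yes" : "no");
        return 0;
    }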
Of course I'm okay with dropping 9/9 (that's why I put it last), and
probably it was not 2.6 material even if it worked.
Paolo
^ permalink raw reply [flat|nested] 29+ messages in thread