* [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API
@ 2016-03-30 12:47 Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
` (8 more replies)
0 siblings, 9 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:47 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Since Tu Bo tested Michael's patches successfully, here is my take on
them. The only change to Michael's patches is the handling of a failure
to start dataplane; on top of that I am adding other cleanups (patches
1-2 and 9), two bugfixes (patches 3-4), the conversion of virtio-scsi
(patch 7), and finally a simpler API (patch 8).
Paolo
Michael S. Tsirkin (2):
virtio: add aio handler
virtio-blk: use aio handler for data plane
Paolo Bonzini (7):
virtio-dataplane: pass assign=true to
virtio_queue_aio_set_host_notifier_handler
virtio: make virtio_queue_notify_vq static
virtio-blk: fix disabled mode
virtio-scsi: fix disabled mode
virtio-scsi: use aio handler for data plane
virtio: merge virtio_queue_aio_set_host_notifier_handler with
virtio_queue_set_aio
virtio: remove starting/stopping checks
hw/block/dataplane/virtio-blk.c | 35 +++++++++++----------
hw/block/virtio-blk.c | 29 ++++++++++-------
hw/scsi/virtio-scsi-dataplane.c | 56 +++++++++++++++++++++++----------
hw/scsi/virtio-scsi.c | 69 +++++++++++++++++++++++++++--------------
hw/virtio/virtio.c | 37 ++++++++++++++++------
include/hw/virtio/virtio-blk.h | 3 ++
include/hw/virtio/virtio-scsi.h | 9 ++----
include/hw/virtio/virtio.h | 4 +--
8 files changed, 158 insertions(+), 84 deletions(-)
--
1.8.3.1
* [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:23 ` Cornelia Huck
2016-03-30 13:44 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
` (7 subsequent siblings)
8 siblings, 2 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
There is no need to run the handler one last time; the device is being
reset and it is okay to drop requests that are pending in the virtqueue.
By omitting this call, we dodge a possible cause of races between the
dataplane thread on one side and the main/vCPU threads on the other.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 2 +-
hw/scsi/virtio-scsi-dataplane.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 36f3d2b..0d76110 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -261,7 +261,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 367e476..c57480e 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -71,10 +71,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
}
}
--
1.8.3.1
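For context, the flag pair that this patch flips is interpreted by
virtio_queue_aio_set_host_notifier_handler(), which patch 5 below quotes in
full; a condensed sketch of its pre-series behavior (paraphrased from those
hunks, not the verbatim source) is:
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                bool assign, bool set_handler)
{
    if (assign && set_handler) {
        /* attach: guest kicks for this queue are handled in the AioContext */
        aio_set_event_notifier(ctx, &vq->host_notifier, true,
                               virtio_queue_host_notifier_read);
    } else {
        /* detach the handler */
        aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
    }
    if (!assign) {
        /* assign=false additionally runs the handler one last time to drain
         * a pending kick; passing assign=true skips exactly this step */
        virtio_queue_host_notifier_read(&vq->host_notifier);
    }
}
With assign=true and set_handler=false, the stop path above therefore only
detaches the handler, which is the behavior the commit message argues for.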
* [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:39 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
` (6 subsequent siblings)
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 2 +-
include/hw/virtio/virtio.h | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 08275a9..07c45b6 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1086,7 +1086,7 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
virtio_queue_update_rings(vdev, n);
}
-void virtio_queue_notify_vq(VirtQueue *vq)
+static void virtio_queue_notify_vq(VirtQueue *vq)
{
if (vq->vring.desc && vq->handle_output) {
VirtIODevice *vdev = vq->vdev;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2b5b248..5afb51c 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -252,7 +252,6 @@ void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
bool set_handler);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
bool assign, bool set_handler);
-void virtio_queue_notify_vq(VirtQueue *vq);
void virtio_irq(VirtQueue *vq);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
1.8.3.1
* [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 13:57 ` Cornelia Huck
2016-03-31 8:32 ` tu bo
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
` (5 subsequent siblings)
8 siblings, 2 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
The missing check on dataplane_disabled caused a segmentation
fault in notify_guest_bh, because s->guest_notifier was NULL.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 7 +++----
hw/block/virtio-blk.c | 2 +-
include/hw/virtio/virtio-blk.h | 1 +
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 0d76110..378feb3 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -28,7 +28,6 @@
struct VirtIOBlockDataPlane {
bool starting;
bool stopping;
- bool disabled;
VirtIOBlkConf *conf;
@@ -233,7 +232,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
fail_host_notifier:
k->set_guest_notifiers(qbus->parent, 1, false);
fail_guest_notifiers:
- s->disabled = true;
+ vblk->dataplane_disabled = true;
s->starting = false;
vblk->dataplane_started = true;
}
@@ -250,8 +249,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
}
/* Better luck next time. */
- if (s->disabled) {
- s->disabled = false;
+ if (vblk->dataplane_disabled) {
+ vblk->dataplane_disabled = false;
vblk->dataplane_started = false;
return;
}
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index d0b8248..77221c1 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -53,7 +53,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
stb_p(&req->in->status, status);
virtqueue_push(s->vq, &req->elem, req->in_len);
- if (s->dataplane) {
+ if (s->dataplane_started && !s->dataplane_disabled) {
virtio_blk_data_plane_notify(s->dataplane);
} else {
virtio_notify(vdev, s->vq);
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 5cb66cd..073c632 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
unsigned short sector_mask;
bool original_wce;
VMChangeStateEntry *change;
+ bool dataplane_disabled;
bool dataplane_started;
int reentrancy_test;
struct VirtIOBlockDataPlane *dataplane;
--
1.8.3.1
* [Qemu-devel] [PATCH 4/9] virtio-scsi: fix disabled mode
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (2 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 14:52 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
` (4 subsequent siblings)
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Add two missing checks for s->dataplane_fenced. In one case, QEMU
would skip injecting an IRQ, writing instead to an uninitialized
EventNotifier's file descriptor.
In the second case, the dataplane_disabled field was used by mistake;
in fact after fixing this occurrence it is completely unused.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/scsi/virtio-scsi.c | 4 ++--
include/hw/virtio/virtio-scsi.h | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 0c30d2e..ac531e2 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -67,7 +67,7 @@ static void virtio_scsi_complete_req(VirtIOSCSIReq *req)
qemu_iovec_from_buf(&req->resp_iov, 0, &req->resp, req->resp_size);
virtqueue_push(vq, &req->elem, req->qsgl.size + req->resp_iov.size);
- if (s->dataplane_started) {
+ if (s->dataplane_started && !s->dataplane_fenced) {
virtio_scsi_dataplane_notify(vdev, req);
} else {
virtio_notify(vdev, vq);
@@ -772,7 +772,7 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
SCSIDevice *sd = SCSI_DEVICE(dev);
- if (s->ctx && !s->dataplane_disabled) {
+ if (s->ctx && !s->dataplane_fenced) {
VirtIOSCSIBlkChangeNotifier *insert_notifier, *remove_notifier;
if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index 209eaa4..eef4e95 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -91,7 +91,6 @@ typedef struct VirtIOSCSI {
bool dataplane_started;
bool dataplane_starting;
bool dataplane_stopping;
- bool dataplane_disabled;
bool dataplane_fenced;
Error *blocker;
uint32_t host_features;
--
1.8.3.1
* [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (3 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 14:55 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
` (3 subsequent siblings)
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
From: "Michael S. Tsirkin" <mst@redhat.com>
In addition to handling IO in vcpu thread and
in io thread, blk dataplane introduces yet another mode:
handling it by aio.
This reuses the same handler as previous modes,
which triggers races as these were not designed to be reentrant.
As a temporary fix, add a separate handler just for aio; this will make
it possible to disable regular handlers when dataplane is active.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 36 ++++++++++++++++++++++++++++++++----
include/hw/virtio/virtio.h | 3 +++
2 files changed, 35 insertions(+), 4 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 07c45b6..39a9f47 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -94,6 +94,7 @@ struct VirtQueue
uint16_t vector;
void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
+ void (*handle_aio_output)(VirtIODevice *vdev, VirtQueue *vq);
VirtIODevice *vdev;
EventNotifier guest_notifier;
EventNotifier host_notifier;
@@ -1086,6 +1087,16 @@ void virtio_queue_set_align(VirtIODevice *vdev, int n, int align)
virtio_queue_update_rings(vdev, n);
}
+static void virtio_queue_notify_aio_vq(VirtQueue *vq)
+{
+ if (vq->vring.desc && vq->handle_aio_output) {
+ VirtIODevice *vdev = vq->vdev;
+
+ trace_virtio_queue_notify(vdev, vq - vdev->vq, vq);
+ vq->handle_aio_output(vdev, vq);
+ }
+}
+
static void virtio_queue_notify_vq(VirtQueue *vq)
{
if (vq->vring.desc && vq->handle_output) {
@@ -1141,10 +1152,19 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
vdev->vq[i].vring.num_default = queue_size;
vdev->vq[i].vring.align = VIRTIO_PCI_VRING_ALIGN;
vdev->vq[i].handle_output = handle_output;
+ vdev->vq[i].handle_aio_output = NULL;
return &vdev->vq[i];
}
+void virtio_set_queue_aio(VirtQueue *vq,
+ void (*handle_output)(VirtIODevice *, VirtQueue *))
+{
+ assert(vq->handle_output);
+
+ vq->handle_aio_output = handle_output;
+}
+
void virtio_del_queue(VirtIODevice *vdev, int n)
{
if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1778,11 +1798,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
return &vq->guest_notifier;
}
-static void virtio_queue_host_notifier_read(EventNotifier *n)
+static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
{
VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
if (event_notifier_test_and_clear(n)) {
- virtio_queue_notify_vq(vq);
+ virtio_queue_notify_aio_vq(vq);
}
}
@@ -1791,14 +1811,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
{
if (assign && set_handler) {
aio_set_event_notifier(ctx, &vq->host_notifier, true,
- virtio_queue_host_notifier_read);
+ virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
}
if (!assign) {
/* Test and clear notifier before after disabling event,
* in case poll callback didn't have time to run. */
- virtio_queue_host_notifier_read(&vq->host_notifier);
+ virtio_queue_host_notifier_aio_read(&vq->host_notifier);
+ }
+}
+
+static void virtio_queue_host_notifier_read(EventNotifier *n)
+{
+ VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
+ if (event_notifier_test_and_clear(n)) {
+ virtio_queue_notify_vq(vq);
}
}
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 5afb51c..fa3f93b 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,6 +142,9 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
void (*handle_output)(VirtIODevice *,
VirtQueue *));
+void virtio_set_queue_aio(VirtQueue *vq,
+ void (*handle_output)(VirtIODevice *, VirtQueue *));
+
void virtio_del_queue(VirtIODevice *vdev, int n);
void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
--
1.8.3.1
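The intended usage of the new hook, which patches 6 and 7 below adopt for
virtio-blk and virtio-scsi, looks roughly like this sketch (vq and ctx stand
for the device's virtqueue and AioContext; my_dataplane_handle_output is a
placeholder name, not a real function):
/* starting dataplane: register the aio-only handler, then route the host
 * notifier into the device's AioContext */
virtio_set_queue_aio(vq, my_dataplane_handle_output);
virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, true);
/* stopping dataplane: detach from the AioContext and drop the aio handler,
 * so the regular handler takes over again */
virtio_queue_aio_set_host_notifier_handler(vq, ctx, true, false);
virtio_set_queue_aio(vq, NULL);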
* [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (4 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:25 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
` (2 subsequent siblings)
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
From: "Michael S. Tsirkin" <mst@redhat.com>
In addition to handling IO in vcpu thread and in io thread, dataplane
introduces yet another mode: handling it by aio.
This reuses the same handler as previous modes, which triggers races as
these were not designed to be reentrant.
Use a separate handler just for aio, and disable regular handlers when
dataplane is active.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
hw/block/virtio-blk.c | 27 +++++++++++++++++----------
include/hw/virtio/virtio-blk.h | 2 ++
3 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 378feb3..4a137df 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -183,6 +183,17 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
g_free(s);
}
+static void virtio_blk_data_plane_handle_output(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+ assert(s->dataplane);
+ assert(s->dataplane_started);
+
+ virtio_blk_handle_vq(s, vq);
+}
+
/* Context: QEMU global mutex held */
void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
{
@@ -225,6 +236,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
+ virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
aio_context_release(s->ctx);
return;
@@ -261,6 +273,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
/* Stop notifications for new requests from guest */
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
+ virtio_set_queue_aio(s->vq, NULL);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 77221c1..47ad9ed 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -577,20 +577,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
}
}
-static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
{
- VirtIOBlock *s = VIRTIO_BLK(vdev);
VirtIOBlockReq *req;
MultiReqBuffer mrb = {};
- /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
- * dataplane here instead of waiting for .set_status().
- */
- if (s->dataplane && !s->dataplane_started) {
- virtio_blk_data_plane_start(s->dataplane);
- return;
- }
-
assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
blk_io_plug(s->blk);
@@ -606,6 +597,22 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
atomic_dec(&s->reentrancy_test);
}
+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOBlock *s = (VirtIOBlock *)vdev;
+
+ if (s->dataplane) {
+ /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
+ * dataplane here instead of waiting for .set_status().
+ */
+ virtio_blk_data_plane_start(s->dataplane);
+ if (!s->dataplane_disabled) {
+ return;
+ }
+ }
+ virtio_blk_handle_vq(s, vq);
+}
+
static void virtio_blk_dma_restart_bh(void *opaque)
{
VirtIOBlock *s = opaque;
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 073c632..b5117a8 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -87,4 +87,6 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
void virtio_blk_submit_multireq(BlockBackend *blk, MultiReqBuffer *mrb);
+void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq);
+
#endif
--
1.8.3.1
* [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (5 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:31 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
In addition to handling IO in vcpu thread and in io thread, dataplane
introduces yet another mode: handling it by aio.
This reuses the same handler as previous modes, which triggers races as
these were not designed to be reentrant.
Use a separate handler just for aio, and disable regular handlers when
dataplane is active.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
hw/scsi/virtio-scsi.c | 65 ++++++++++++++++++++++++++++-------------
include/hw/virtio/virtio-scsi.h | 6 ++--
3 files changed, 86 insertions(+), 28 deletions(-)
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index c57480e..0fa5bb5 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -39,7 +39,35 @@ void virtio_scsi_set_iothread(VirtIOSCSI *s, IOThread *iothread)
}
}
-static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
+static void virtio_scsi_data_plane_handle_cmd(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_cmd_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_ctrl(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
+static void virtio_scsi_data_plane_handle_event(VirtIODevice *vdev,
+ VirtQueue *vq)
+{
+ VirtIOSCSI *s = VIRTIO_SCSI(vdev);
+
+ assert(s->ctx && s->dataplane_started);
+ virtio_scsi_handle_event_vq(s, vq);
+}
+
+static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
+ void (*fn)(VirtIODevice *vdev, VirtQueue *vq))
{
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
@@ -55,6 +83,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n)
}
virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
+ virtio_set_queue_aio(vq, fn);
return 0;
}
@@ -72,9 +101,12 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
int i;
virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+ virtio_set_queue_aio(vs->ctrl_vq, NULL);
virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
+ virtio_set_queue_aio(vs->event_vq, NULL);
for (i = 0; i < vs->conf.num_queues; i++) {
virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
+ virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
}
}
@@ -105,16 +137,19 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
}
aio_context_acquire(s->ctx);
- rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0);
+ rc = virtio_scsi_vring_init(s, vs->ctrl_vq, 0,
+ virtio_scsi_data_plane_handle_ctrl);
if (rc) {
goto fail_vrings;
}
- rc = virtio_scsi_vring_init(s, vs->event_vq, 1);
+ rc = virtio_scsi_vring_init(s, vs->event_vq, 1,
+ virtio_scsi_data_plane_handle_event);
if (rc) {
goto fail_vrings;
}
for (i = 0; i < vs->conf.num_queues; i++) {
- rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2);
+ rc = virtio_scsi_vring_init(s, vs->cmd_vqs[i], i + 2,
+ virtio_scsi_data_plane_handle_cmd);
if (rc) {
goto fail_vrings;
}
diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index ac531e2..7342d26 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -373,7 +373,7 @@ fail:
return ret;
}
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
VirtIODevice *vdev = (VirtIODevice *)s;
uint32_t type;
@@ -411,20 +411,28 @@ void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req)
}
}
-static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq)
{
- VirtIOSCSI *s = (VirtIOSCSI *)vdev;
VirtIOSCSIReq *req;
- if (s->ctx && !s->dataplane_started) {
- virtio_scsi_dataplane_start(s);
- return;
- }
while ((req = virtio_scsi_pop_req(s, vq))) {
virtio_scsi_handle_ctrl_req(s, req);
}
}
+static void virtio_scsi_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ if (s->ctx) {
+ virtio_scsi_dataplane_start(s);
+ if (!s->dataplane_fenced) {
+ return;
+ }
+ }
+ virtio_scsi_handle_ctrl_vq(s, vq);
+}
+
static void virtio_scsi_complete_cmd_req(VirtIOSCSIReq *req)
{
/* Sense data is not in req->resp and is copied separately
@@ -507,7 +515,7 @@ static void virtio_scsi_fail_cmd_req(VirtIOSCSIReq *req)
virtio_scsi_complete_cmd_req(req);
}
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
VirtIOSCSICommon *vs = &s->parent_obj;
SCSIDevice *d;
@@ -549,7 +557,7 @@ bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req)
return true;
}
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
+static void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
{
SCSIRequest *sreq = req->sreq;
if (scsi_req_enqueue(sreq)) {
@@ -559,17 +567,11 @@ void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req)
scsi_req_unref(sreq);
}
-static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq)
{
- /* use non-QOM casts in the data path */
- VirtIOSCSI *s = (VirtIOSCSI *)vdev;
VirtIOSCSIReq *req, *next;
QTAILQ_HEAD(, VirtIOSCSIReq) reqs = QTAILQ_HEAD_INITIALIZER(reqs);
- if (s->ctx && !s->dataplane_started) {
- virtio_scsi_dataplane_start(s);
- return;
- }
while ((req = virtio_scsi_pop_req(s, vq))) {
if (virtio_scsi_handle_cmd_req_prepare(s, req)) {
QTAILQ_INSERT_TAIL(&reqs, req, next);
@@ -581,6 +583,20 @@ static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
}
}
+static void virtio_scsi_handle_cmd(VirtIODevice *vdev, VirtQueue *vq)
+{
+ /* use non-QOM casts in the data path */
+ VirtIOSCSI *s = (VirtIOSCSI *)vdev;
+
+ if (s->ctx) {
+ virtio_scsi_dataplane_start(s);
+ if (!s->dataplane_fenced) {
+ return;
+ }
+ }
+ virtio_scsi_handle_cmd_vq(s, vq);
+}
+
static void virtio_scsi_get_config(VirtIODevice *vdev,
uint8_t *config)
{
@@ -724,17 +740,24 @@ out:
}
}
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq)
+{
+ if (s->events_dropped) {
+ virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+ }
+}
+
static void virtio_scsi_handle_event(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
- if (s->ctx && !s->dataplane_started) {
+ if (s->ctx) {
virtio_scsi_dataplane_start(s);
- return;
- }
- if (s->events_dropped) {
- virtio_scsi_push_event(s, NULL, VIRTIO_SCSI_T_NO_EVENT, 0);
+ if (!s->dataplane_fenced) {
+ return;
+ }
}
+ virtio_scsi_handle_event_vq(s, vq);
}
static void virtio_scsi_change(SCSIBus *bus, SCSIDevice *dev, SCSISense sense)
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index eef4e95..ba2f5ce 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -139,9 +139,9 @@ void virtio_scsi_common_realize(DeviceState *dev, Error **errp,
HandleOutput cmd);
void virtio_scsi_common_unrealize(DeviceState *dev, Error **errp);
-void virtio_scsi_handle_ctrl_req(VirtIOSCSI *s, VirtIOSCSIReq *req);
-bool virtio_scsi_handle_cmd_req_prepare(VirtIOSCSI *s, VirtIOSCSIReq *req);
-void virtio_scsi_handle_cmd_req_submit(VirtIOSCSI *s, VirtIOSCSIReq *req);
+void virtio_scsi_handle_event_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_cmd_vq(VirtIOSCSI *s, VirtQueue *vq);
+void virtio_scsi_handle_ctrl_vq(VirtIOSCSI *s, VirtQueue *vq);
void virtio_scsi_init_req(VirtIOSCSI *s, VirtQueue *vq, VirtIOSCSIReq *req);
void virtio_scsi_free_req(VirtIOSCSIReq *req);
void virtio_scsi_push_event(VirtIOSCSI *s, SCSIDevice *dev,
--
1.8.3.1
* [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (6 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:33 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Eliminating the reentrancy is actually a nice thing that we can do
with the API that Michael proposed, so let's make it first class.
This also hides the complex assign/set_handler conventions from
callers of virtio_queue_aio_set_host_notifier_handler, and fixes
a possible race that could happen when passing assign=false to
virtio_queue_aio_set_host_notifier_handler.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 7 +++----
hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
hw/virtio/virtio.c | 19 ++++---------------
include/hw/virtio/virtio.h | 6 ++----
4 files changed, 13 insertions(+), 31 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 4a137df..7e50d01 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -236,8 +236,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
- virtio_set_queue_aio(s->vq, virtio_blk_data_plane_handle_output);
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, true);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
+ virtio_blk_data_plane_handle_output);
aio_context_release(s->ctx);
return;
@@ -272,8 +272,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
- virtio_set_queue_aio(s->vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 0fa5bb5..8694577 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -82,8 +82,7 @@ static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
return rc;
}
- virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, true, true);
- virtio_set_queue_aio(vq, fn);
+ virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, fn);
return 0;
}
@@ -100,13 +99,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->ctrl_vq, NULL);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
- virtio_set_queue_aio(vs->event_vq, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, NULL);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
- virtio_set_queue_aio(vs->cmd_vqs[i], NULL);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, NULL);
}
}
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 39a9f47..b16d2d8 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1157,14 +1157,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
return &vdev->vq[i];
}
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *))
-{
- assert(vq->handle_output);
-
- vq->handle_aio_output = handle_output;
-}
-
void virtio_del_queue(VirtIODevice *vdev, int n)
{
if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
@@ -1807,19 +1799,16 @@ static void virtio_queue_host_notifier_aio_read(EventNotifier *n)
}
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler)
+ void (*handle_output)(VirtIODevice *,
+ VirtQueue *))
{
- if (assign && set_handler) {
+ vq->handle_aio_output = handle_output;
+ if (handle_output) {
aio_set_event_notifier(ctx, &vq->host_notifier, true,
virtio_queue_host_notifier_aio_read);
} else {
aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
}
- if (!assign) {
- /* Test and clear notifier before after disabling event,
- * in case poll callback didn't have time to run. */
- virtio_queue_host_notifier_aio_read(&vq->host_notifier);
- }
}
static void virtio_queue_host_notifier_read(EventNotifier *n)
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index fa3f93b..6a37065 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -142,9 +142,6 @@ VirtQueue *virtio_add_queue(VirtIODevice *vdev, int queue_size,
void (*handle_output)(VirtIODevice *,
VirtQueue *));
-void virtio_set_queue_aio(VirtQueue *vq,
- void (*handle_output)(VirtIODevice *, VirtQueue *));
-
void virtio_del_queue(VirtIODevice *vdev, int n);
void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_num);
@@ -254,7 +251,8 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
bool set_handler);
void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
- bool assign, bool set_handler);
+ void (*fn)(VirtIODevice *,
+ VirtQueue *));
void virtio_irq(VirtQueue *vq);
VirtQueue *virtio_vector_first_queue(VirtIODevice *vdev, uint16_t vector);
VirtQueue *virtio_vector_next_queue(VirtQueue *vq);
--
1.8.3.1
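After this patch the start/stop sequence sketched after patch 5 collapses to
one call per direction; roughly (again with my_dataplane_handle_output as a
placeholder for the device's handler):
/* starting dataplane: a single call attaches the notifier to the AioContext
 * and installs the aio handler */
virtio_queue_aio_set_host_notifier_handler(vq, ctx, my_dataplane_handle_output);
/* stopping dataplane: passing NULL detaches the notifier and clears the
 * aio handler in one step */
virtio_queue_aio_set_host_notifier_handler(vq, ctx, NULL);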
* [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
` (7 preceding siblings ...)
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-03-30 12:48 ` Paolo Bonzini
2016-03-30 15:36 ` Cornelia Huck
8 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 12:48 UTC (permalink / raw)
To: qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Reentrancy cannot happen while the BQL is being held.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 12 ++----------
hw/scsi/virtio-scsi-dataplane.c | 9 +--------
include/hw/virtio/virtio-scsi.h | 2 --
3 files changed, 3 insertions(+), 20 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 7e50d01..0cff427 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -26,9 +26,6 @@
#include "qom/object_interfaces.h"
struct VirtIOBlockDataPlane {
- bool starting;
- bool stopping;
-
VirtIOBlkConf *conf;
VirtIODevice *vdev;
@@ -202,11 +199,10 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
int r;
- if (vblk->dataplane_started || s->starting) {
+ if (vblk->dataplane_started) {
return;
}
- s->starting = true;
s->vq = virtio_get_queue(s->vdev, 0);
/* Set up guest notifier (irq) */
@@ -225,7 +221,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
goto fail_host_notifier;
}
- s->starting = false;
vblk->dataplane_started = true;
trace_virtio_blk_data_plane_start(s);
@@ -245,7 +240,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
k->set_guest_notifiers(qbus->parent, 1, false);
fail_guest_notifiers:
vblk->dataplane_disabled = true;
- s->starting = false;
vblk->dataplane_started = true;
}
@@ -256,7 +250,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
- if (!vblk->dataplane_started || s->stopping) {
+ if (!vblk->dataplane_started) {
return;
}
@@ -266,7 +260,6 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
vblk->dataplane_started = false;
return;
}
- s->stopping = true;
trace_virtio_blk_data_plane_stop(s);
aio_context_acquire(s->ctx);
@@ -285,5 +278,4 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
k->set_guest_notifiers(qbus->parent, 1, false);
vblk->dataplane_started = false;
- s->stopping = false;
}
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index 8694577..ac41787 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -116,14 +116,11 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
if (s->dataplane_started ||
- s->dataplane_starting ||
s->dataplane_fenced ||
s->ctx != iothread_get_aio_context(vs->conf.iothread)) {
return;
}
- s->dataplane_starting = true;
-
/* Set up guest notifier (irq) */
rc = k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, true);
if (rc != 0) {
@@ -151,7 +148,6 @@ void virtio_scsi_dataplane_start(VirtIOSCSI *s)
}
}
- s->dataplane_starting = false;
s->dataplane_started = true;
aio_context_release(s->ctx);
return;
@@ -165,7 +161,6 @@ fail_vrings:
k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
fail_guest_notifiers:
s->dataplane_fenced = true;
- s->dataplane_starting = false;
s->dataplane_started = true;
}
@@ -177,7 +172,7 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- if (!s->dataplane_started || s->dataplane_stopping) {
+ if (!s->dataplane_started) {
return;
}
@@ -187,7 +182,6 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
s->dataplane_started = false;
return;
}
- s->dataplane_stopping = true;
assert(s->ctx == iothread_get_aio_context(vs->conf.iothread));
aio_context_acquire(s->ctx);
@@ -204,6 +198,5 @@ void virtio_scsi_dataplane_stop(VirtIOSCSI *s)
/* Clean up guest notifier (irq) */
k->set_guest_notifiers(qbus->parent, vs->conf.num_queues + 2, false);
- s->dataplane_stopping = false;
s->dataplane_started = false;
}
diff --git a/include/hw/virtio/virtio-scsi.h b/include/hw/virtio/virtio-scsi.h
index ba2f5ce..d5352d8 100644
--- a/include/hw/virtio/virtio-scsi.h
+++ b/include/hw/virtio/virtio-scsi.h
@@ -89,8 +89,6 @@ typedef struct VirtIOSCSI {
QTAILQ_HEAD(, VirtIOSCSIBlkChangeNotifier) remove_notifiers;
bool dataplane_started;
- bool dataplane_starting;
- bool dataplane_stopping;
bool dataplane_fenced;
Error *blocker;
uint32_t host_features;
--
1.8.3.1
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-03-30 13:23 ` Cornelia Huck
2016-03-30 13:30 ` Paolo Bonzini
2016-03-30 13:44 ` Cornelia Huck
1 sibling, 1 reply; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:23 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:00 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> There is no need to run the handler one last time; the device is being
> reset and it is okay to drop requests that are pending in the virtqueue.
What about virtio_blk_save()? Could there be any pending requests in
that case?
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 13:23 ` Cornelia Huck
@ 2016-03-30 13:30 ` Paolo Bonzini
2016-03-30 13:43 ` Cornelia Huck
0 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 13:30 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 15:23, Cornelia Huck wrote:
> > There is no need to run the handler one last time; the device is being
> > reset and it is okay to drop requests that are pending in the virtqueue.
>
> What about virtio_blk_save()? Could there be any pending requests in
> that case?
Those would be processed when dataplane is restarted on the destination
side, I think. virtio_queue_set_host_notifier_fd_handler calls
virtio_queue_host_notifier_read which connects the host notifier to the
I/O thread and calls event_notifier_set to start processing it.
Paolo
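A condensed sketch of the path described above; the handler body is
paraphrased from the hunk quoted in patch 5, while restart_queue_processing()
is a made-up name and the explicit event_notifier_set() kick reflects the
description in this reply rather than a quoted hunk:
static void restart_queue_processing(VirtQueue *vq)
{
    /* kicking the notifier by hand on the destination guarantees that
     * requests still sitting in the virtqueue are processed, even though
     * the guest's original kick was consumed before the save */
    event_notifier_set(&vq->host_notifier);
}
static void virtio_queue_host_notifier_read(EventNotifier *n)
{
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
    if (event_notifier_test_and_clear(n)) {
        virtio_queue_notify_vq(vq);    /* pending requests are handled here */
    }
}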
* Re: [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
@ 2016-03-30 13:39 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:39 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:01 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/virtio/virtio.c | 2 +-
> include/hw/virtio/virtio.h | 1 -
> 2 files changed, 1 insertion(+), 2 deletions(-)
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 13:30 ` Paolo Bonzini
@ 2016-03-30 13:43 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:43 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 15:30:04 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 30/03/2016 15:23, Cornelia Huck wrote:
> > > There is no need to run the handler one last time; the device is being
> > > reset and it is okay to drop requests that are pending in the virtqueue.
> >
> > What about virtio_blk_save()? Could there be any pending requests in
> > that case?
>
> Those would be processed when dataplane is restarted on the destination
> side, I think. virtio_queue_set_host_notifier_fd_handler calls
> virtio_queue_host_notifier_read which connects the host notifier to the
> I/O thread and calls event_notifier_set to start processing it.
Makes sense; maybe mention that in the patch description as well?
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-03-30 13:23 ` Cornelia Huck
@ 2016-03-30 13:44 ` Cornelia Huck
1 sibling, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:44 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:00 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> There is no need to run the handler one last time; the device is being
> reset and it is okay to drop requests that are pending in the virtqueue.
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
@ 2016-03-30 13:57 ` Cornelia Huck
2016-03-31 8:32 ` tu bo
1 sibling, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 13:57 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:02 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> The missing check on dataplane_disabled caused a segmentation
> fault in notify_guest_bh, because s->guest_notifier was NULL.
I think this patch description could be improved :)
"We must not call virtio_blk_data_plane_notify if dataplane is
disabled: we would hit a segmentation fault in notify_guest_bh as
s->guest_notifier has not been setup and is NULL."
?
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/block/virtio-blk.c | 2 +-
> include/hw/virtio/virtio-blk.h | 1 +
> 3 files changed, 5 insertions(+), 5 deletions(-)
Patch looks good:
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 4/9] virtio-scsi: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 14:52 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 14:52 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:03 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Add two missing checks for s->dataplane_fenced. In one case, QEMU
> would skip injecting an IRQ due to a write to an uninitialized
> EventNotifier's file descriptor.
>
> In the second case, the dataplane_disabled field was used by mistake;
> in fact after fixing this occurrence it is completely unused.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/scsi/virtio-scsi.c | 4 ++--
> include/hw/virtio/virtio-scsi.h | 1 -
> 2 files changed, 2 insertions(+), 3 deletions(-)
It's a bit confusing that the disabled mode for virtio-scsi dataplane
uses 'fenced' instead of 'disabled', but 'disabled' was already used...
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
@ 2016-03-30 14:55 ` Cornelia Huck
2016-03-30 15:09 ` Paolo Bonzini
0 siblings, 1 reply; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 14:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:04 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> In addition to handling IO in vcpu thread and
> in io thread, blk dataplane introduces yet another mode:
> handling it by aio.
>
> This reuses the same handler as previous modes,
> which triggers races as these were not designed to be reentrant.
>
> As a temporary fix, add a separate handler just for aio; this will make
> it possible to disable regular handlers when dataplane is active.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/virtio/virtio.c | 36 ++++++++++++++++++++++++++++++++----
> include/hw/virtio/virtio.h | 3 +++
> 2 files changed, 35 insertions(+), 4 deletions(-)
>
> @@ -1791,14 +1811,22 @@ void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
> {
> if (assign && set_handler) {
> aio_set_event_notifier(ctx, &vq->host_notifier, true,
> - virtio_queue_host_notifier_read);
> + virtio_queue_host_notifier_aio_read);
> } else {
> aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
> }
> if (!assign) {
> /* Test and clear notifier before after disabling event,
> * in case poll callback didn't have time to run. */
> - virtio_queue_host_notifier_read(&vq->host_notifier);
> + virtio_queue_host_notifier_aio_read(&vq->host_notifier);
Is this function ever called with assign==false anymore after patch 1?
> + }
> +}
* Re: [Qemu-devel] [PATCH 5/9] virtio: add aio handler
2016-03-30 14:55 ` Cornelia Huck
@ 2016-03-30 15:09 ` Paolo Bonzini
0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 15:09 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 16:55, Cornelia Huck wrote:
>> > if (assign && set_handler) {
>> > aio_set_event_notifier(ctx, &vq->host_notifier, true,
>> > - virtio_queue_host_notifier_read);
>> > + virtio_queue_host_notifier_aio_read);
>> > } else {
>> > aio_set_event_notifier(ctx, &vq->host_notifier, true, NULL);
>> > }
>> > if (!assign) {
>> > /* Test and clear notifier before after disabling event,
>> > * in case poll callback didn't have time to run. */
>> > - virtio_queue_host_notifier_read(&vq->host_notifier);
>> > + virtio_queue_host_notifier_aio_read(&vq->host_notifier);
> Is this function ever called with assign==false anymore after patch 1?
Patch 8 provides the answer (which is no :)).
Paolo
* Re: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
@ 2016-03-30 15:25 ` Cornelia Huck
2016-03-30 15:42 ` Paolo Bonzini
0 siblings, 1 reply; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:25 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:05 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> In addition to handling IO in vcpu thread and in io thread, dataplane
> introduces yet another mode: handling it by aio.
>
> This reuses the same handler as previous modes, which triggers races as
> these were not designed to be reentrant.
>
> Use a separate handler just for aio, and disable regular handlers when
> dataplane is active.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 13 +++++++++++++
> hw/block/virtio-blk.c | 27 +++++++++++++++++----------
> include/hw/virtio/virtio-blk.h | 2 ++
> 3 files changed, 32 insertions(+), 10 deletions(-)
>
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 77221c1..47ad9ed 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -577,20 +577,11 @@ void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
> }
> }
>
> -static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
> +void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
> {
> - VirtIOBlock *s = VIRTIO_BLK(vdev);
> VirtIOBlockReq *req;
> MultiReqBuffer mrb = {};
>
> - /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
> - * dataplane here instead of waiting for .set_status().
> - */
> - if (s->dataplane && !s->dataplane_started) {
> - virtio_blk_data_plane_start(s->dataplane);
> - return;
> - }
> -
> assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
Hm, based on wrong branch?
> blk_io_plug(s->blk);
>
* Re: [Qemu-devel] [PATCH 7/9] virtio-scsi: use aio handler for data plane
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
@ 2016-03-30 15:31 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:31 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:06 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> In addition to handling IO in vcpu thread and in io thread, dataplane
> introduces yet another mode: handling it by aio.
>
> This reuses the same handler as previous modes, which triggers races as
> these were not designed to be reentrant.
>
> Use a separate handler just for aio, and disable regular handlers when
> dataplane is active.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/scsi/virtio-scsi-dataplane.c | 43 ++++++++++++++++++++++++---
> hw/scsi/virtio-scsi.c | 65 ++++++++++++++++++++++++++++-------------
> include/hw/virtio/virtio-scsi.h | 6 ++--
> 3 files changed, 86 insertions(+), 28 deletions(-)
>
> +static int virtio_scsi_vring_init(VirtIOSCSI *s, VirtQueue *vq, int n,
> + void (*fn)(VirtIODevice *vdev, VirtQueue *vq))
This function has been sadly misnamed since dataplane does not set up
its own vrings anymore. Maybe it should be called
virtio_scsi_vq_notifier_init() or so.
> {
> BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s)));
> VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
@ 2016-03-30 15:33 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:33 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:07 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Eliminating the reentrancy is actually a nice thing that we can do
> with the API that Michael proposed, so let's make it first class.
> This also hides the complex assign/set_handler conventions from
> callers of virtio_queue_aio_set_host_notifier_handler, and fixes
> a possible race that could happen when passing assign=false to
> virtio_queue_aio_set_host_notifier_handler.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/scsi/virtio-scsi-dataplane.c | 12 ++++--------
> hw/virtio/virtio.c | 19 ++++---------------
> include/hw/virtio/virtio.h | 6 ++----
> 4 files changed, 13 insertions(+), 31 deletions(-)
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
@ 2016-03-30 15:36 ` Cornelia Huck
0 siblings, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-03-30 15:36 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: borntraeger, famz, qemu-devel, mst
On Wed, 30 Mar 2016 14:48:08 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> Reentrancy cannot happen while the BQL is being held.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 12 ++----------
> hw/scsi/virtio-scsi-dataplane.c | 9 +--------
> include/hw/virtio/virtio-scsi.h | 2 --
> 3 files changed, 3 insertions(+), 20 deletions(-)
>
Less flags, yeah!
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
* Re: [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane
2016-03-30 15:25 ` Cornelia Huck
@ 2016-03-30 15:42 ` Paolo Bonzini
0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-30 15:42 UTC (permalink / raw)
To: Cornelia Huck; +Cc: borntraeger, famz, qemu-devel, mst
On 30/03/2016 17:25, Cornelia Huck wrote:
>> >
>> > - /* Some guests kick before setting VIRTIO_CONFIG_S_DRIVER_OK so start
>> > - * dataplane here instead of waiting for .set_status().
>> > - */
>> > - if (s->dataplane && !s->dataplane_started) {
>> > - virtio_blk_data_plane_start(s->dataplane);
>> > - return;
>> > - }
>> > -
>> > assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
> Hm, based on the wrong branch?
>
Indeed I had the testing patch in there for, well, testing. I'll repost
with improved commit messages and the assertion removed.
Paolo
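For reference, the stray assertion quoted above is a simple reentrancy
detector. A minimal sketch of the pattern, with the decrement side
reconstructed (it is not shown anywhere in this thread):

    static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIOBlock *s = VIRTIO_BLK(vdev);

        /* The counter must be 0 on entry; a concurrent or recursive entry
         * into the handler trips the assertion. */
        assert(atomic_fetch_inc(&s->reentrancy_test) == 0);
        /* ... process requests ... */
        atomic_fetch_dec(&s->reentrancy_test);
    }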
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
2016-03-30 13:57 ` Cornelia Huck
@ 2016-03-31 8:32 ` tu bo
2016-03-31 8:36 ` Paolo Bonzini
1 sibling, 1 reply; 31+ messages in thread
From: tu bo @ 2016-03-31 8:32 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
Hi Paolo:
On 03/30/2016 08:48 PM, Paolo Bonzini wrote:
> The missing check on dataplane_disabled caused a segmentation
> fault in notify_guest_bh, because s->guest_notifier was NULL.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 7 +++----
> hw/block/virtio-blk.c | 2 +-
> include/hw/virtio/virtio-blk.h | 1 +
> 3 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 0d76110..378feb3 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -28,7 +28,6 @@
> struct VirtIOBlockDataPlane {
> bool starting;
> bool stopping;
> - bool disabled;
>
> VirtIOBlkConf *conf;
>
> @@ -233,7 +232,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
> fail_host_notifier:
> k->set_guest_notifiers(qbus->parent, 1, false);
> fail_guest_notifiers:
> - s->disabled = true;
> + vblk->dataplane_disabled = true;
> s->starting = false;
> vblk->dataplane_started = true;
> }
> @@ -250,8 +249,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
> }
>
> /* Better luck next time. */
> - if (s->disabled) {
> - s->disabled = false;
> + if (vblk->dataplane_disabled) {
> + vblk->dataplane_disabled = false;
> vblk->dataplane_started = false;
> return;
> }
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index d0b8248..77221c1 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -53,7 +53,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
>
> stb_p(&req->in->status, status);
> virtqueue_push(s->vq, &req->elem, req->in_len);
> - if (s->dataplane) {
> + if (s->dataplane_started && !s->dataplane_disabled) {
> virtio_blk_data_plane_notify(s->dataplane);
> } else {
> virtio_notify(vdev, s->vq);
> diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
> index 5cb66cd..073c632 100644
> --- a/include/hw/virtio/virtio-blk.h
> +++ b/include/hw/virtio/virtio-blk.h
> @@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
> unsigned short sector_mask;
> bool original_wce;
> VMChangeStateEntry *change;
> + bool dataplane_disabled;
> bool dataplane_started;
> int reentrancy_test;
There is no "int reentrancy_test;" in the latest qemu master. Thanks.
> struct VirtIOBlockDataPlane *dataplane;
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode
2016-03-31 8:32 ` tu bo
@ 2016-03-31 8:36 ` Paolo Bonzini
0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-03-31 8:36 UTC (permalink / raw)
To: tu bo, qemu-devel; +Cc: cornelia.huck, borntraeger, famz, mst
On 31/03/2016 10:32, tu bo wrote:
>>
>> diff --git a/include/hw/virtio/virtio-blk.h
>> b/include/hw/virtio/virtio-blk.h
>> index 5cb66cd..073c632 100644
>> --- a/include/hw/virtio/virtio-blk.h
>> +++ b/include/hw/virtio/virtio-blk.h
>> @@ -53,6 +53,7 @@ typedef struct VirtIOBlock {
>> unsigned short sector_mask;
>> bool original_wce;
>> VMChangeStateEntry *change;
>> + bool dataplane_disabled;
>> bool dataplane_started;
>> int reentrancy_test;
>
> There is no "int reentrancy_test;" in the latest qemu master. thx
Indeed, I will resend a fixed version soon.
Paolo
^ permalink raw reply [flat|nested] 31+ messages in thread
* [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
@ 2016-04-01 13:19 ` Paolo Bonzini
2016-04-01 14:02 ` Cornelia Huck
2016-04-05 10:38 ` Michael S. Tsirkin
0 siblings, 2 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-04-01 13:19 UTC (permalink / raw)
To: qemu-devel; +Cc: famz, tubo, mst, borntraeger, stefanha, cornelia.huck
There is no need to run the handler one last time; the device is
being reset and it is okay to drop requests that are pending in
the virtqueue. Even in the case of migration, the requests would
be processed when ioeventfd is re-enabled on the destination
side: virtio_queue_set_host_notifier_fd_handler will call
virtio_queue_host_notifier_read, which will start dataplane; the host
notifier is then connected to the I/O thread and event_notifier_set is
called to start processing it.
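For reference, the read path being described is roughly the following
(condensed sketch, not the verbatim code in hw/virtio/virtio.c):

    static void virtio_queue_host_notifier_read(EventNotifier *n)
    {
        VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

        if (event_notifier_test_and_clear(n)) {
            /* Runs the queue's handler; for a dataplane device the first
             * notification is what (re)starts dataplane processing. */
            virtio_queue_notify_vq(vq);
        }
    }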
By omitting this call, we dodge a possible cause of races between the
dataplane thread on one side and the main/vCPU threads on the other.
The virtio_queue_aio_set_host_notifier_handler function is now
only ever called with assign=true, but for now this is left as is
because the function parameters will change soon anyway.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 2 +-
hw/scsi/virtio-scsi-dataplane.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index e666dd4..fddd3ab 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
index b44ac5d..21d5bfd 100644
--- a/hw/scsi/virtio-scsi-dataplane.c
+++ b/hw/scsi/virtio-scsi-dataplane.c
@@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
int i;
- virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
- virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
for (i = 0; i < vs->conf.num_queues; i++) {
- virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
+ virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
}
}
--
1.8.3.1
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
@ 2016-04-01 14:02 ` Cornelia Huck
2016-04-05 10:38 ` Michael S. Tsirkin
1 sibling, 0 replies; 31+ messages in thread
From: Cornelia Huck @ 2016-04-01 14:02 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: famz, borntraeger, mst, qemu-devel, tubo, stefanha
On Fri, 1 Apr 2016 15:19:46 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:
> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue. Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane; the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
>
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-04-01 14:02 ` Cornelia Huck
@ 2016-04-05 10:38 ` Michael S. Tsirkin
2016-04-05 10:42 ` Paolo Bonzini
1 sibling, 1 reply; 31+ messages in thread
From: Michael S. Tsirkin @ 2016-04-05 10:38 UTC (permalink / raw)
To: Paolo Bonzini
Cc: famz, tubo, qemu-devel, dgilbert, borntraeger, stefanha,
cornelia.huck
On Fri, Apr 01, 2016 at 03:19:46PM +0200, Paolo Bonzini wrote:
> There is no need to run the handler one last time; the device is
> being reset and it is okay to drop requests that are pending in
> the virtqueue. Even in the case of migration, the requests would
> be processed when ioeventfd is re-enabled on the destination
> side: virtio_queue_set_host_notifier_fd_handler will call
> virtio_queue_host_notifier_read, which will start dataplane;
I didn't get this part: here's virtio_queue_host_notifier_read:
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
    if (event_notifier_test_and_clear(n)) {
        virtio_queue_notify_vq(vq);
    }
event notifier is initially clear on migration, how can we
be sure virtio_queue_notify_vq is invoked?
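To make the concern concrete (an illustration, not part of the original
question): a notifier that nothing has signalled yet makes the
test-and-clear fall through, so only an explicit kick would run the handler.

    EventNotifier *n = virtio_queue_get_host_notifier(vq);

    if (event_notifier_test_and_clear(n)) {   /* false: nothing has set it yet */
        virtio_queue_notify_vq(vq);           /* therefore never reached */
    }

    event_notifier_set(n);                    /* an explicit kick would be needed */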
> the host
> notifier is then connected to the I/O thread and event_notifier_set is
> called to start processing it.
>
> By omitting this call, we dodge a possible cause of races between the
> dataplane thread on one side and the main/vCPU threads on the other.
>
> The virtio_queue_aio_set_host_notifier_handler function is now
> only ever called with assign=true, but for now this is left as is
> because the function parameters will change soon anyway.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> hw/block/dataplane/virtio-blk.c | 2 +-
> hw/scsi/virtio-scsi-dataplane.c | 6 +++---
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index e666dd4..fddd3ab 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -262,7 +262,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
> aio_context_acquire(s->ctx);
>
> /* Stop notifications for new requests from guest */
> - virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);
> + virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, true, false);
>
> /* Drain and switch bs back to the QEMU main loop */
> blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
> diff --git a/hw/scsi/virtio-scsi-dataplane.c b/hw/scsi/virtio-scsi-dataplane.c
> index b44ac5d..21d5bfd 100644
> --- a/hw/scsi/virtio-scsi-dataplane.c
> +++ b/hw/scsi/virtio-scsi-dataplane.c
> @@ -70,10 +70,10 @@ static void virtio_scsi_clear_aio(VirtIOSCSI *s)
> VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(s);
> int i;
>
> - virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, false, false);
> - virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, false, false);
> + virtio_queue_aio_set_host_notifier_handler(vs->ctrl_vq, s->ctx, true, false);
> + virtio_queue_aio_set_host_notifier_handler(vs->event_vq, s->ctx, true, false);
> for (i = 0; i < vs->conf.num_queues; i++) {
> - virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, false, false);
> + virtio_queue_aio_set_host_notifier_handler(vs->cmd_vqs[i], s->ctx, true, false);
> }
> }
>
> --
> 1.8.3.1
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-04-05 10:38 ` Michael S. Tsirkin
@ 2016-04-05 10:42 ` Paolo Bonzini
2016-04-05 10:59 ` Paolo Bonzini
0 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2016-04-05 10:42 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: famz, borntraeger, qemu-devel, dgilbert, tubo, stefanha,
cornelia.huck
On 05/04/2016 12:38, Michael S. Tsirkin wrote:
>> > There is no need to run the handler one last time; the device is
>> > being reset and it is okay to drop requests that are pending in
>> > the virtqueue. Even in the case of migration, the requests would
>> > be processed when ioeventfd is re-enabled on the destination
>> > side: virtio_queue_set_host_notifier_fd_handler will call
>> > virtio_queue_host_notifier_read, which will start dataplane;
> I didn't get this part: here's virtio_queue_host_notifier_read:
>
> VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
> if (event_notifier_test_and_clear(n)) {
> virtio_queue_notify_vq(vq);
> }
>
> event notifier is initially clear on migration, how can we
> be sure virtio_queue_notify_vq is invoked?
I think you're right about this.
Dataplane has an event_notifier_set call; we should move it to
virtio_queue_aio_set_host_notifier_handler and add it to
virtio_queue_set_host_notifier_handler. I'll send v3 today.
Paolo
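A rough sketch of what that move could look like (only the assign branch is
shown; the placement of the kick is the proposal above, not a posted patch):

    void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
                                                    bool assign, bool set_handler)
    {
        if (assign && set_handler) {
            aio_set_event_notifier(ctx, &vq->host_notifier, true,
                                   virtio_queue_host_notifier_read);
            /* Proposed: kick once so requests queued before the handler was
             * attached (e.g. across migration) are not left in the vring. */
            event_notifier_set(&vq->host_notifier);
        } else {
            /* ... existing deassign / keep-ioeventfd handling unchanged ... */
        }
    }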
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler
2016-04-05 10:42 ` Paolo Bonzini
@ 2016-04-05 10:59 ` Paolo Bonzini
0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2016-04-05 10:59 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: famz, borntraeger, qemu-devel, dgilbert, tubo, stefanha,
cornelia.huck
On 05/04/2016 12:42, Paolo Bonzini wrote:
> I think you're right about this.
>
> Dataplane has an event_notifier_set; we should move it to
> virtio_queue_aio_set_host_notifier_handler and add it to
> virtio_queue_set_host_notifier_handler. I'll send v3 today.
As discussed on IRC, even that wouldn't be safe. The good news is that
this patch is not necessary, because
virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false)
is called with the AioContext held, between aio_context_acquire and
aio_context_release.
Paolo
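The stop path in question, condensed from the hunk quoted earlier in this
thread (the matching aio_context_release is implied by the surrounding code
and not part of the quote):

    void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
    {
        /* ... */
        aio_context_acquire(s->ctx);

        /* The dataplane handler runs in the iothread under this same
         * AioContext, so holding it here already serializes the deassign
         * against a handler that might still be running. */
        virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, false, false);

        /* Drain and switch bs back to the QEMU main loop */
        blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
        /* ... */
        aio_context_release(s->ctx);
    }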
^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread, other threads: [~2016-04-05 10:59 UTC | newest]
Thread overview: 31+ messages
2016-03-30 12:47 [Qemu-devel] [PATCH resend 0/9] virtio: aio handler API Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-03-30 13:23 ` Cornelia Huck
2016-03-30 13:30 ` Paolo Bonzini
2016-03-30 13:43 ` Cornelia Huck
2016-03-30 13:44 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 2/9] virtio: make virtio_queue_notify_vq static Paolo Bonzini
2016-03-30 13:39 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 3/9] virtio-blk: fix disabled mode Paolo Bonzini
2016-03-30 13:57 ` Cornelia Huck
2016-03-31 8:32 ` tu bo
2016-03-31 8:36 ` Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 4/9] virtio-scsi: " Paolo Bonzini
2016-03-30 14:52 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 5/9] virtio: add aio handler Paolo Bonzini
2016-03-30 14:55 ` Cornelia Huck
2016-03-30 15:09 ` Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 6/9] virtio-blk: use aio handler for data plane Paolo Bonzini
2016-03-30 15:25 ` Cornelia Huck
2016-03-30 15:42 ` Paolo Bonzini
2016-03-30 12:48 ` [Qemu-devel] [PATCH 7/9] virtio-scsi: " Paolo Bonzini
2016-03-30 15:31 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 8/9] virtio: merge virtio_queue_aio_set_host_notifier_handler with virtio_queue_set_aio Paolo Bonzini
2016-03-30 15:33 ` Cornelia Huck
2016-03-30 12:48 ` [Qemu-devel] [PATCH 9/9] virtio: remove starting/stopping checks Paolo Bonzini
2016-03-30 15:36 ` Cornelia Huck
-- strict thread matches above, loose matches on Subject: below --
2016-04-01 13:19 [Qemu-devel] [PATCH v2 0/9] virtio: aio handler API Paolo Bonzini
2016-04-01 13:19 ` [Qemu-devel] [PATCH 1/9] virtio-dataplane: pass assign=true to virtio_queue_aio_set_host_notifier_handler Paolo Bonzini
2016-04-01 14:02 ` Cornelia Huck
2016-04-05 10:38 ` Michael S. Tsirkin
2016-04-05 10:42 ` Paolo Bonzini
2016-04-05 10:59 ` Paolo Bonzini