* [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support
@ 2016-05-31 1:25 Stefan Hajnoczi
2016-05-31 1:25 ` [Qemu-devel] [PATCH v2 1/8] virtio-blk: use batch notify in non-dataplane case Stefan Hajnoczi
` (8 more replies)
0 siblings, 9 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:25 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
v2:
* Simplify s->rq live migration [Paolo]
* Use more efficient bitmap ops for batch notification [Paolo]
* Fix perf regression due to batch notify BH in wrong AioContext [Christian]
The virtio_blk guest driver has supported multiple virtqueues since Linux 3.17.
This patch series adds multiple virtqueues to QEMU's virtio-blk emulated
device.
Ming Lei sent patches previously but these were not merged. This series
implements virtio-blk multiqueue for QEMU from scratch since the codebase has
changed. Live migration support for s->rq was also missing from the previous
series and has been added.
It's important to note that QEMU's block layer does not support multiqueue yet.
Therefore the virtio-blk device processes all virtqueues in the same AioContext
(IOThread). Further work is necessary to take advantage of multiqueue support
in QEMU's block layer once it becomes available.
I will post performance results once they are ready.
Stefan Hajnoczi (8):
virtio-blk: use batch notify in non-dataplane case
virtio-blk: tell dataplane which vq to notify
virtio-blk: associate request with a virtqueue
virtio-blk: add VirtIOBlockConf->num_queues
virtio-blk: multiqueue batch notify
virtio-blk: live migrate s->rq with multiqueue
virtio-blk: dataplane multiqueue support
virtio-blk: add num-queues device property
hw/block/dataplane/virtio-blk.c | 68 +++++++++++----------
hw/block/dataplane/virtio-blk.h | 2 +-
hw/block/virtio-blk.c | 129 +++++++++++++++++++++++++++++++++++-----
include/hw/virtio/virtio-blk.h | 8 ++-
4 files changed, 159 insertions(+), 48 deletions(-)
--
2.5.5
^ permalink raw reply [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 1/8] virtio-blk: use batch notify in non-dataplane case
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
@ 2016-05-31 1:25 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 2/8] virtio-blk: tell dataplane which vq to notify Stefan Hajnoczi
` (7 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:25 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi, Ming Lei
Commit 5b2ffbe4d99843fd8305c573a100047a8c962327 ("virtio-blk: dataplane:
notify guest as a batch") deferred guest notification to a BH in order to
batch notifications. This optimization is not specific to dataplane, so
move it into the generic virtio-blk code that is shared by both the
dataplane and non-dataplane paths.
Use the AioContext notifier to detect when dataplane moves the
BlockBackend to the IOThread's AioContext. This is necessary so the
notification BH is always created in the current AioContext.
Note that this patch adds a flush function to force pending
notifications. This way we can ensure notifications do not cross device
reset or vmstate saving.
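The batching idea can be shown with a minimal standalone sketch (a model, not QEMU code): request completions only mark a notification as pending, and a deferred callback, run at most once per event loop iteration like a QEMU BH, delivers the single guest notification.

```python
# Minimal model of BH-deferred batch notification: many completions in
# one event loop iteration collapse into a single guest notification.
class BatchNotifier:
    def __init__(self):
        self.pending = False      # models the "BH is scheduled" state
        self.notify_count = 0     # interrupts that reached the "guest"

    def complete_request(self):
        # Called once per completed request; cheap, no interrupt here.
        self.pending = True

    def run_bh(self):
        # The event loop runs this at most once per iteration.
        if self.pending:
            self.pending = False
            self.notify_count += 1

notifier = BatchNotifier()
for _ in range(100):              # 100 completions in one iteration...
    notifier.complete_request()
notifier.run_bh()                 # ...produce exactly one notification
```

The flush function in the patch corresponds to cancelling the scheduled callback and invoking it synchronously, so no notification can remain pending across reset or vmstate save.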
Cc: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 10 -------
hw/block/virtio-blk.c | 60 ++++++++++++++++++++++++++++++++++++-----
include/hw/virtio/virtio-blk.h | 1 +
3 files changed, 55 insertions(+), 16 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 3cb97c9..e0ac4f4 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -35,7 +35,6 @@ struct VirtIOBlockDataPlane {
VirtIODevice *vdev;
VirtQueue *vq; /* virtqueue vring */
EventNotifier *guest_notifier; /* irq */
- QEMUBH *bh; /* bh for guest notification */
Notifier insert_notifier, remove_notifier;
@@ -54,13 +53,6 @@ struct VirtIOBlockDataPlane {
/* Raise an interrupt to signal guest, if necessary */
void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s)
{
- qemu_bh_schedule(s->bh);
-}
-
-static void notify_guest_bh(void *opaque)
-{
- VirtIOBlockDataPlane *s = opaque;
-
if (!virtio_should_notify(s->vdev, s->vq)) {
return;
}
@@ -156,7 +148,6 @@ void virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *conf,
object_ref(OBJECT(s->iothread));
}
s->ctx = iothread_get_aio_context(s->iothread);
- s->bh = aio_bh_new(s->ctx, notify_guest_bh, s);
s->insert_notifier.notify = data_plane_blk_insert_notifier;
s->remove_notifier.notify = data_plane_blk_remove_notifier;
@@ -179,7 +170,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
data_plane_remove_op_blockers(s);
notifier_remove(&s->insert_notifier);
notifier_remove(&s->remove_notifier);
- qemu_bh_delete(s->bh);
object_unref(OBJECT(s->iothread));
g_free(s);
}
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 284e646..c108414 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -45,20 +45,57 @@ void virtio_blk_free_request(VirtIOBlockReq *req)
}
}
+/* Many requests can complete in an event loop iteration, we only notify the
+ * guest once.
+ */
+static void virtio_blk_batch_notify_bh(void *opaque)
+{
+ VirtIOBlock *s = opaque;
+ VirtIODevice *vdev = VIRTIO_DEVICE(s);
+
+ if (s->dataplane_started && !s->dataplane_disabled) {
+ virtio_blk_data_plane_notify(s->dataplane);
+ } else {
+ virtio_notify(vdev, s->vq);
+ }
+}
+
+/* Force batch notifications to run */
+static void virtio_blk_batch_notify_flush(VirtIOBlock *s)
+{
+ qemu_bh_cancel(s->batch_notify_bh);
+ virtio_blk_batch_notify_bh(s);
+}
+
+static void virtio_blk_attached_aio_context(AioContext *new_context,
+ void *opaque)
+{
+ VirtIOBlock *s = opaque;
+
+ assert(!s->batch_notify_bh);
+ s->batch_notify_bh = aio_bh_new(new_context, virtio_blk_batch_notify_bh,
+ s);
+ qemu_bh_schedule(s->batch_notify_bh); /* in case notify was pending */
+}
+
+static void virtio_blk_detach_aio_context(void *opaque)
+{
+ VirtIOBlock *s = opaque;
+
+ assert(s->batch_notify_bh);
+ qemu_bh_delete(s->batch_notify_bh);
+ s->batch_notify_bh = NULL;
+}
+
static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
{
VirtIOBlock *s = req->dev;
- VirtIODevice *vdev = VIRTIO_DEVICE(s);
trace_virtio_blk_req_complete(req, status);
stb_p(&req->in->status, status);
virtqueue_push(s->vq, &req->elem, req->in_len);
- if (s->dataplane_started && !s->dataplane_disabled) {
- virtio_blk_data_plane_notify(s->dataplane);
- } else {
- virtio_notify(vdev, s->vq);
- }
+ qemu_bh_schedule(s->batch_notify_bh);
}
static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error,
@@ -664,6 +701,8 @@ static void virtio_blk_reset(VirtIODevice *vdev)
if (s->dataplane) {
virtio_blk_data_plane_stop(s->dataplane);
}
+
+ virtio_blk_batch_notify_flush(s);
aio_context_release(ctx);
blk_set_enable_write_cache(s->blk, s->original_wce);
@@ -801,6 +840,8 @@ static void virtio_blk_save(QEMUFile *f, void *opaque)
virtio_blk_data_plane_stop(s->dataplane);
}
+ virtio_blk_batch_notify_flush(s);
+
virtio_save(vdev, f);
}
@@ -896,6 +937,10 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
return;
}
+ blk_add_aio_context_notifier(s->blk, virtio_blk_attached_aio_context,
+ virtio_blk_detach_aio_context, s);
+ virtio_blk_attached_aio_context(blk_get_aio_context(s->blk), s);
+
s->change = qemu_add_vm_change_state_handler(virtio_blk_dma_restart_cb, s);
register_savevm(dev, "virtio-blk", virtio_blk_id++, 2,
virtio_blk_save, virtio_blk_load, s);
@@ -910,6 +955,9 @@ static void virtio_blk_device_unrealize(DeviceState *dev, Error **errp)
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIOBlock *s = VIRTIO_BLK(dev);
+ blk_remove_aio_context_notifier(s->blk, virtio_blk_attached_aio_context,
+ virtio_blk_detach_aio_context, s);
+ virtio_blk_detach_aio_context(s);
virtio_blk_data_plane_destroy(s->dataplane);
s->dataplane = NULL;
qemu_del_vm_change_state_handler(s->change);
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 8f2b056..b3834bc 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -49,6 +49,7 @@ typedef struct VirtIOBlock {
VirtQueue *vq;
void *rq;
QEMUBH *bh;
+ QEMUBH *batch_notify_bh;
VirtIOBlkConf conf;
unsigned short sector_mask;
bool original_wce;
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 2/8] virtio-blk: tell dataplane which vq to notify
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
2016-05-31 1:25 ` [Qemu-devel] [PATCH v2 1/8] virtio-blk: use batch notify in non-dataplane case Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 3/8] virtio-blk: associate request with a virtqueue Stefan Hajnoczi
` (6 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
Let the virtio_blk_data_plane_notify() caller decide which virtqueue to
notify. This will allow the function to be used with multiqueue.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 8 +++-----
hw/block/dataplane/virtio-blk.h | 2 +-
hw/block/virtio-blk.c | 2 +-
3 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index e0ac4f4..592aa95 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -34,7 +34,6 @@ struct VirtIOBlockDataPlane {
VirtIODevice *vdev;
VirtQueue *vq; /* virtqueue vring */
- EventNotifier *guest_notifier; /* irq */
Notifier insert_notifier, remove_notifier;
@@ -51,13 +50,13 @@ struct VirtIOBlockDataPlane {
};
/* Raise an interrupt to signal guest, if necessary */
-void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s)
+void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq)
{
- if (!virtio_should_notify(s->vdev, s->vq)) {
+ if (!virtio_should_notify(s->vdev, vq)) {
return;
}
- event_notifier_set(s->guest_notifier);
+ event_notifier_set(virtio_queue_get_guest_notifier(vq));
}
static void data_plane_set_up_op_blockers(VirtIOBlockDataPlane *s)
@@ -207,7 +206,6 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
"ensure -enable-kvm is set\n", r);
goto fail_guest_notifiers;
}
- s->guest_notifier = virtio_queue_get_guest_notifier(s->vq);
/* Set up virtqueue notify */
r = k->set_host_notifier(qbus->parent, 0, true);
diff --git a/hw/block/dataplane/virtio-blk.h b/hw/block/dataplane/virtio-blk.h
index 0714c11..b1f0b95 100644
--- a/hw/block/dataplane/virtio-blk.h
+++ b/hw/block/dataplane/virtio-blk.h
@@ -26,6 +26,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s);
void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s);
void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s);
void virtio_blk_data_plane_drain(VirtIOBlockDataPlane *s);
-void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s);
+void virtio_blk_data_plane_notify(VirtIOBlockDataPlane *s, VirtQueue *vq);
#endif /* HW_DATAPLANE_VIRTIO_BLK_H */
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index c108414..797b568 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -54,7 +54,7 @@ static void virtio_blk_batch_notify_bh(void *opaque)
VirtIODevice *vdev = VIRTIO_DEVICE(s);
if (s->dataplane_started && !s->dataplane_disabled) {
- virtio_blk_data_plane_notify(s->dataplane);
+ virtio_blk_data_plane_notify(s->dataplane, s->vq);
} else {
virtio_notify(vdev, s->vq);
}
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 3/8] virtio-blk: associate request with a virtqueue
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
2016-05-31 1:25 ` [Qemu-devel] [PATCH v2 1/8] virtio-blk: use batch notify in non-dataplane case Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 2/8] virtio-blk: tell dataplane which vq to notify Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 4/8] virtio-blk: add VirtIOBlockConf->num_queues Stefan Hajnoczi
` (5 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
Multiqueue requires that each request knows to which virtqueue it
belongs.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 16 +++++++++-------
include/hw/virtio/virtio-blk.h | 4 +++-
2 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 797b568..4ee4063 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -29,9 +29,11 @@
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/virtio-access.h"
-void virtio_blk_init_request(VirtIOBlock *s, VirtIOBlockReq *req)
+void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
+ VirtIOBlockReq *req)
{
req->dev = s;
+ req->vq = vq;
req->qiov.size = 0;
req->in_len = 0;
req->next = NULL;
@@ -94,7 +96,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
trace_virtio_blk_req_complete(req, status);
stb_p(&req->in->status, status);
- virtqueue_push(s->vq, &req->elem, req->in_len);
+ virtqueue_push(req->vq, &req->elem, req->in_len);
qemu_bh_schedule(s->batch_notify_bh);
}
@@ -224,12 +226,12 @@ out:
#endif
-static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s)
+static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq)
{
- VirtIOBlockReq *req = virtqueue_pop(s->vq, sizeof(VirtIOBlockReq));
+ VirtIOBlockReq *req = virtqueue_pop(vq, sizeof(VirtIOBlockReq));
if (req) {
- virtio_blk_init_request(s, req);
+ virtio_blk_init_request(s, vq, req);
}
return req;
}
@@ -620,7 +622,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
blk_io_plug(s->blk);
- while ((req = virtio_blk_get_request(s))) {
+ while ((req = virtio_blk_get_request(s, vq))) {
virtio_blk_handle_request(req, &mrb);
}
@@ -877,7 +879,7 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
while (qemu_get_sbyte(f)) {
VirtIOBlockReq *req;
req = qemu_get_virtqueue_element(f, sizeof(VirtIOBlockReq));
- virtio_blk_init_request(s, req);
+ virtio_blk_init_request(s, s->vq, req);
req->next = s->rq;
s->rq = req;
}
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index b3834bc..9c6747d 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -63,6 +63,7 @@ typedef struct VirtIOBlockReq {
VirtQueueElement elem;
int64_t sector_num;
VirtIOBlock *dev;
+ VirtQueue *vq;
struct virtio_blk_inhdr *in;
struct virtio_blk_outhdr out;
QEMUIOVector qiov;
@@ -80,7 +81,8 @@ typedef struct MultiReqBuffer {
bool is_write;
} MultiReqBuffer;
-void virtio_blk_init_request(VirtIOBlock *s, VirtIOBlockReq *req);
+void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
+ VirtIOBlockReq *req);
void virtio_blk_free_request(VirtIOBlockReq *req);
void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb);
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 4/8] virtio-blk: add VirtIOBlockConf->num_queues
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (2 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 3/8] virtio-blk: associate request with a virtqueue Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 5/8] virtio-blk: multiqueue batch notify Stefan Hajnoczi
` (4 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
The num_queues field is always 1 for the time being. A later patch will
make it a configurable device property so that multiqueue can be
enabled.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 1 +
include/hw/virtio/virtio-blk.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 4ee4063..c8d66f0 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -931,6 +931,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
s->rq = NULL;
s->sector_mask = (s->conf.conf.logical_block_size / BDRV_SECTOR_SIZE) - 1;
+ conf->num_queues = 1;
s->vq = virtio_add_queue(vdev, 128, virtio_blk_handle_output);
virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
if (err != NULL) {
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 9c6747d..487b28d 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -38,6 +38,7 @@ struct VirtIOBlkConf
uint32_t scsi;
uint32_t config_wce;
uint32_t request_merging;
+ uint16_t num_queues;
};
struct VirtIOBlockDataPlane;
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 5/8] virtio-blk: multiqueue batch notify
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (3 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 4/8] virtio-blk: add VirtIOBlockConf->num_queues Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 6/8] virtio-blk: live migrate s->rq with multiqueue Stefan Hajnoczi
` (3 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
The batch notification BH needs to know which virtqueues to notify when
multiqueue is enabled. Use a bitmap to track the virtqueues with
pending notifications.
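The bitmap drain can be sketched as follows (a Python model of the patch's word-at-a-time ctzl loop, assuming BITS_PER_LONG == 64): each completion sets the bit for its virtqueue, and the BH snapshots the bitmap and notifies each marked virtqueue exactly once, however many requests completed on it.

```python
# Model of the per-virtqueue pending bitmap: iterate set bits word by
# word using count-trailing-zeros, clearing the right-most bit each pass.
BITS_PER_LONG = 64

def drain_pending(bitmap_words, notify):
    for j, bits in enumerate(bitmap_words):
        while bits:
            # (bits & -bits) isolates the lowest set bit; its index is
            # the virtqueue number within this word.
            i = j * BITS_PER_LONG + (bits & -bits).bit_length() - 1
            notify(i)                 # notify virtqueue i once
            bits &= bits - 1          # clear right-most set bit

pending = [0, 0]                      # room for 128 virtqueues
for vq in (0, 3, 3, 70):              # completions on vqs 0, 3 (twice), 70
    pending[vq // BITS_PER_LONG] |= 1 << (vq % BITS_PER_LONG)

notified = []
drain_pending(pending, notified.append)
# vq 3 completed twice but is notified only once
```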
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 29 +++++++++++++++++++++++++----
include/hw/virtio/virtio-blk.h | 1 +
2 files changed, 26 insertions(+), 4 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index c8d66f0..9de749b 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -54,11 +54,28 @@ static void virtio_blk_batch_notify_bh(void *opaque)
{
VirtIOBlock *s = opaque;
VirtIODevice *vdev = VIRTIO_DEVICE(s);
+ unsigned nvqs = s->conf.num_queues;
+ unsigned long bitmap[BITS_TO_LONGS(nvqs)];
+ unsigned j;
- if (s->dataplane_started && !s->dataplane_disabled) {
- virtio_blk_data_plane_notify(s->dataplane, s->vq);
- } else {
- virtio_notify(vdev, s->vq);
+ memcpy(bitmap, s->batch_notify_vqs, sizeof(bitmap));
+ memset(s->batch_notify_vqs, 0, sizeof(bitmap));
+
+ for (j = 0; j < nvqs; j += BITS_PER_LONG) {
+ unsigned long bits = bitmap[j];
+
+ while (bits != 0) {
+ unsigned i = j + ctzl(bits);
+ VirtQueue *vq = virtio_get_queue(vdev, i);
+
+ if (s->dataplane_started && !s->dataplane_disabled) {
+ virtio_blk_data_plane_notify(s->dataplane, vq);
+ } else {
+ virtio_notify(vdev, vq);
+ }
+
+ bits &= bits - 1; /* clear right-most bit */
+ }
}
}
@@ -97,6 +114,8 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
stb_p(&req->in->status, status);
virtqueue_push(req->vq, &req->elem, req->in_len);
+
+ set_bit(virtio_queue_get_id(req->vq), s->batch_notify_vqs);
qemu_bh_schedule(s->batch_notify_bh);
}
@@ -940,6 +959,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
return;
}
+ s->batch_notify_vqs = bitmap_new(conf->num_queues);
blk_add_aio_context_notifier(s->blk, virtio_blk_attached_aio_context,
virtio_blk_detach_aio_context, s);
virtio_blk_attached_aio_context(blk_get_aio_context(s->blk), s);
@@ -961,6 +981,7 @@ static void virtio_blk_device_unrealize(DeviceState *dev, Error **errp)
blk_remove_aio_context_notifier(s->blk, virtio_blk_attached_aio_context,
virtio_blk_detach_aio_context, s);
virtio_blk_detach_aio_context(s);
+ g_free(s->batch_notify_vqs);
virtio_blk_data_plane_destroy(s->dataplane);
s->dataplane = NULL;
qemu_del_vm_change_state_handler(s->change);
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index 487b28d..b6e7860 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -51,6 +51,7 @@ typedef struct VirtIOBlock {
void *rq;
QEMUBH *bh;
QEMUBH *batch_notify_bh;
+ unsigned long *batch_notify_vqs;
VirtIOBlkConf conf;
unsigned short sector_mask;
bool original_wce;
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 6/8] virtio-blk: live migrate s->rq with multiqueue
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (4 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 5/8] virtio-blk: multiqueue batch notify Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 7/8] virtio-blk: dataplane multiqueue support Stefan Hajnoczi
` (2 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
Add a field for the virtqueue index when migrating the s->rq request
list. The new field is only needed when num_queues > 1. Existing QEMUs
are unaffected by this change and therefore virtio-blk migration stays
compatible.
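The framing described above can be modelled in a few lines (a hedged sketch of the qemu_put_sbyte/qemu_put_be32 calls in the patch, not the real QEMU migration stream): each in-flight request is preceded by a 1-byte marker, and only when num_queues > 1 by a 32-bit big-endian virtqueue index, so single-queue streams keep the pre-existing layout.

```python
# Model of the s->rq save/load framing: marker byte, optional be32 vq
# index (multiqueue only), then the serialized element (opaque here).
import struct

def save_rq(requests, num_queues):
    out = b""
    for vq_idx, elem in requests:
        out += b"\x01"                        # "another request follows"
        if num_queues > 1:
            out += struct.pack(">I", vq_idx)  # new field, MQ streams only
        out += elem                           # stands in for the element
    return out + b"\x00"                      # end of request list

def load_rq(data, num_queues, elem_size):
    reqs, off = [], 0
    while data[off] == 1:
        off += 1
        vq_idx = 0
        if num_queues > 1:
            vq_idx = struct.unpack_from(">I", data, off)[0]
            assert vq_idx < num_queues, "invalid virtqueue index"
            off += 4
        reqs.append((vq_idx, data[off:off + elem_size]))
        off += elem_size
    return reqs
```

With num_queues == 1 the index is neither written nor expected, which is why existing QEMUs can still exchange the stream.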
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 9de749b..f36b690 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -873,6 +873,11 @@ static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f)
while (req) {
qemu_put_sbyte(f, 1);
+
+ if (s->conf.num_queues > 1) {
+ qemu_put_be32(f, virtio_queue_get_id(req->vq));
+ }
+
qemu_put_virtqueue_element(f, &req->elem);
req = req->next;
}
@@ -896,9 +901,22 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
VirtIOBlock *s = VIRTIO_BLK(vdev);
while (qemu_get_sbyte(f)) {
+ unsigned nvqs = s->conf.num_queues;
+ unsigned vq_idx = 0;
VirtIOBlockReq *req;
+
+ if (nvqs > 1) {
+ vq_idx = qemu_get_be32(f);
+
+ if (vq_idx >= nvqs) {
+ error_report("Invalid virtqueue index in request list: %#x",
+ vq_idx);
+ return -EINVAL;
+ }
+ }
+
req = qemu_get_virtqueue_element(f, sizeof(VirtIOBlockReq));
- virtio_blk_init_request(s, s->vq, req);
+ virtio_blk_init_request(s, virtio_get_queue(vdev, vq_idx), req);
req->next = s->rq;
s->rq = req;
}
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 7/8] virtio-blk: dataplane multiqueue support
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (5 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 6/8] virtio-blk: live migrate s->rq with multiqueue Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 8/8] virtio-blk: add num-queues device property Stefan Hajnoczi
2016-06-03 0:19 ` [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
Monitor ioeventfds for all virtqueues in the device's AioContext. This
is not true multiqueue because requests from all virtqueues are
processed in a single IOThread. In the future it will be possible to
use multiple IOThreads when the QEMU block layer supports multiqueue.
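The "one IOThread, many virtqueue notifiers" arrangement can be modelled with a single poll loop (an assumption-laden sketch: socket pairs stand in for ioeventfds, and the selector stands in for the AioContext):

```python
# One event loop services kick notifiers for all virtqueues; each ready
# notifier is drained and dispatched to the handler for that queue index.
import selectors
import socket

NUM_QUEUES = 4
sel = selectors.DefaultSelector()
kick_fds, handled = [], []

for i in range(NUM_QUEUES):
    guest_side, host_side = socket.socketpair()  # stands in for ioeventfd
    kick_fds.append(guest_side)
    sel.register(host_side, selectors.EVENT_READ, data=i)

for i in (2, 0):                   # guest kicks virtqueues 2 and 0
    kick_fds[i].send(b"\x01")

while len(handled) < 2:            # single thread services every queue
    for key, _ in sel.select():
        key.fileobj.recv(1)        # drain the notifier
        handled.append(key.data)   # "process requests from vq key.data"
```

Both kicks are handled by the same loop, which mirrors why this is batching into one AioContext rather than true multiqueue parallelism.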
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/dataplane/virtio-blk.c | 50 ++++++++++++++++++++++++++++-------------
1 file changed, 34 insertions(+), 16 deletions(-)
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 592aa95..48c0bb7 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -31,9 +31,7 @@ struct VirtIOBlockDataPlane {
bool stopping;
VirtIOBlkConf *conf;
-
VirtIODevice *vdev;
- VirtQueue *vq; /* virtqueue vring */
Notifier insert_notifier, remove_notifier;
@@ -190,6 +188,8 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s->vdev)));
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
+ unsigned i;
+ unsigned nvqs = s->conf->num_queues;
int r;
if (vblk->dataplane_started || s->starting) {
@@ -197,10 +197,9 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
}
s->starting = true;
- s->vq = virtio_get_queue(s->vdev, 0);
/* Set up guest notifier (irq) */
- r = k->set_guest_notifiers(qbus->parent, 1, true);
+ r = k->set_guest_notifiers(qbus->parent, nvqs, true);
if (r != 0) {
fprintf(stderr, "virtio-blk failed to set guest notifier (%d), "
"ensure -enable-kvm is set\n", r);
@@ -208,10 +207,15 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
}
/* Set up virtqueue notify */
- r = k->set_host_notifier(qbus->parent, 0, true);
- if (r != 0) {
- fprintf(stderr, "virtio-blk failed to set host notifier (%d)\n", r);
- goto fail_host_notifier;
+ for (i = 0; i < nvqs; i++) {
+ r = k->set_host_notifier(qbus->parent, i, true);
+ if (r != 0) {
+ fprintf(stderr, "virtio-blk failed to set host notifier (%d)\n", r);
+ while (i--) {
+ k->set_host_notifier(qbus->parent, i, false);
+ }
+ goto fail_guest_notifiers;
+ }
}
s->starting = false;
@@ -221,17 +225,23 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
blk_set_aio_context(s->conf->conf.blk, s->ctx);
/* Kick right away to begin processing requests already in vring */
- event_notifier_set(virtio_queue_get_host_notifier(s->vq));
+ for (i = 0; i < nvqs; i++) {
+ VirtQueue *vq = virtio_get_queue(s->vdev, i);
+
+ event_notifier_set(virtio_queue_get_host_notifier(vq));
+ }
/* Get this show started by hooking up our callbacks */
aio_context_acquire(s->ctx);
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx,
- virtio_blk_data_plane_handle_output);
+ for (i = 0; i < nvqs; i++) {
+ VirtQueue *vq = virtio_get_queue(s->vdev, i);
+
+ virtio_queue_aio_set_host_notifier_handler(vq, s->ctx,
+ virtio_blk_data_plane_handle_output);
+ }
aio_context_release(s->ctx);
return;
- fail_host_notifier:
- k->set_guest_notifiers(qbus->parent, 1, false);
fail_guest_notifiers:
vblk->dataplane_disabled = true;
s->starting = false;
@@ -244,6 +254,8 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
BusState *qbus = BUS(qdev_get_parent_bus(DEVICE(s->vdev)));
VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
VirtIOBlock *vblk = VIRTIO_BLK(s->vdev);
+ unsigned i;
+ unsigned nvqs = s->conf->num_queues;
if (!vblk->dataplane_started || s->stopping) {
return;
@@ -261,17 +273,23 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
aio_context_acquire(s->ctx);
/* Stop notifications for new requests from guest */
- virtio_queue_aio_set_host_notifier_handler(s->vq, s->ctx, NULL);
+ for (i = 0; i < nvqs; i++) {
+ VirtQueue *vq = virtio_get_queue(s->vdev, i);
+
+ virtio_queue_aio_set_host_notifier_handler(vq, s->ctx, NULL);
+ }
/* Drain and switch bs back to the QEMU main loop */
blk_set_aio_context(s->conf->conf.blk, qemu_get_aio_context());
aio_context_release(s->ctx);
- k->set_host_notifier(qbus->parent, 0, false);
+ for (i = 0; i < nvqs; i++) {
+ k->set_host_notifier(qbus->parent, i, false);
+ }
/* Clean up guest notifier (irq) */
- k->set_guest_notifiers(qbus->parent, 1, false);
+ k->set_guest_notifiers(qbus->parent, nvqs, false);
vblk->dataplane_started = false;
s->stopping = false;
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [Qemu-devel] [PATCH v2 8/8] virtio-blk: add num-queues device property
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (6 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 7/8] virtio-blk: dataplane multiqueue support Stefan Hajnoczi
@ 2016-05-31 1:26 ` Stefan Hajnoczi
2016-06-03 0:19 ` [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
8 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-05-31 1:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Roman Pen, Fam Zheng, Christian Borntraeger,
Paolo Bonzini, Ming Lei, Stefan Hajnoczi
Multiqueue virtio-blk can be enabled as follows:
qemu -device virtio-blk-pci,num-queues=8
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/block/virtio-blk.c | 15 +++++++++++++--
include/hw/virtio/virtio-blk.h | 1 -
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f36b690..c79a9d5 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -768,6 +768,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
blkcfg.physical_block_exp = get_physical_block_exp(conf);
blkcfg.alignment_offset = 0;
blkcfg.wce = blk_enable_write_cache(s->blk);
+ virtio_stw_p(vdev, &blkcfg.num_queues, s->conf.num_queues);
memcpy(config, &blkcfg, sizeof(struct virtio_blk_config));
}
@@ -811,6 +812,9 @@ static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t features,
if (blk_is_read_only(s->blk)) {
virtio_add_feature(&features, VIRTIO_BLK_F_RO);
}
+ if (s->conf.num_queues > 1) {
+ virtio_add_feature(&features, VIRTIO_BLK_F_MQ);
+ }
return features;
}
@@ -942,6 +946,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
VirtIOBlkConf *conf = &s->conf;
Error *err = NULL;
static int virtio_blk_id;
+ unsigned i;
if (!conf->conf.blk) {
error_setg(errp, "drive property not set");
@@ -951,6 +956,10 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
error_setg(errp, "Device needs media, but drive is empty");
return;
}
+ if (!conf->num_queues) {
+ error_setg(errp, "num-queues property must be larger than 0");
+ return;
+ }
blkconf_serial(&conf->conf, &conf->serial);
s->original_wce = blk_enable_write_cache(conf->conf.blk);
@@ -968,8 +977,9 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
s->rq = NULL;
s->sector_mask = (s->conf.conf.logical_block_size / BDRV_SECTOR_SIZE) - 1;
- conf->num_queues = 1;
- s->vq = virtio_add_queue(vdev, 128, virtio_blk_handle_output);
+ for (i = 0; i < conf->num_queues; i++) {
+ virtio_add_queue(vdev, 128, virtio_blk_handle_output);
+ }
virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
if (err != NULL) {
error_propagate(errp, err);
@@ -1031,6 +1041,7 @@ static Property virtio_blk_properties[] = {
#endif
DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
true),
+ DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index b6e7860..70bd3b7 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -47,7 +47,6 @@ struct VirtIOBlockReq;
typedef struct VirtIOBlock {
VirtIODevice parent_obj;
BlockBackend *blk;
- VirtQueue *vq;
void *rq;
QEMUBH *bh;
QEMUBH *batch_notify_bh;
--
2.5.5
^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support
2016-05-31 1:25 [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
` (7 preceding siblings ...)
2016-05-31 1:26 ` [Qemu-devel] [PATCH v2 8/8] virtio-blk: add num-queues device property Stefan Hajnoczi
@ 2016-06-03 0:19 ` Stefan Hajnoczi
2016-06-03 22:26 ` Stefan Hajnoczi
8 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-06-03 0:19 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: qemu-devel, Kevin Wolf, Fam Zheng, Ming Lei,
Christian Borntraeger, Roman Pen, Paolo Bonzini
On Mon, May 30, 2016 at 06:25:58PM -0700, Stefan Hajnoczi wrote:
> v2:
> * Simplify s->rq live migration [Paolo]
> * Use more efficient bitmap ops for batch notification [Paolo]
> * Fix perf regression due to batch notify BH in wrong AioContext [Christian]
>
> The virtio_blk guest driver has supported multiple virtqueues since Linux 3.17.
> This patch series adds multiple virtqueues to QEMU's virtio-blk emulated
> device.
>
> Ming Lei sent patches previously but these were not merged. This series
> implements virtio-blk multiqueue for QEMU from scratch since the codebase has
> changed. Live migration support for s->rq was also missing from the previous
> series and has been added.
>
> It's important to note that QEMU's block layer does not support multiqueue yet.
> Therefore virtio-blk device processes all virtqueues in the same AioContext
> (IOThread). Further work is necessary to take advantage of multiqueue support
> in QEMU's block layer once it becomes available.
>
> I will post performance results once they are ready.
>
> Stefan Hajnoczi (8):
> virtio-blk: use batch notify in non-dataplane case
> virtio-blk: tell dataplane which vq to notify
> virtio-blk: associate request with a virtqueue
> virtio-blk: add VirtIOBlockConf->num_queues
> virtio-blk: multiqueue batch notify
> virtio-blk: live migrate s->rq with multiqueue
> virtio-blk: dataplane multiqueue support
> virtio-blk: add num-queues device property
>
> hw/block/dataplane/virtio-blk.c | 68 +++++++++++----------
> hw/block/dataplane/virtio-blk.h | 2 +-
> hw/block/virtio-blk.c | 129 +++++++++++++++++++++++++++++++++++-----
> include/hw/virtio/virtio-blk.h | 8 ++-
> 4 files changed, 159 insertions(+), 48 deletions(-)
There is a significant performance regression due to batch notify:
$ ./analyze.py runs/
Name IOPS Error
unpatched-d6550e9ed2 19269820.2 ± 1.36%
unpatched-d6550e9ed2-2 19567358.4 ± 2.42%
v2-batch-only-f27ed9a4d9 16252227.2 ± 6.09%
v2-no-dataplane 14560225.4 ± 5.16%
v2-no-dataplane-2 14622535.6 ± 10.08%
v2-no-dataplane-3 13960670.8 ± 7.11%
unpatched-d6550e9ed2 is without this patch series.
v2-batch-only-f27ed9a4d9 is with Patch 1 only. v2-no-dataplane is with
the patch series (dataplane is not enabled in any of these tests).
Next I will compare unpatched dataplane against patched dataplane. I
want to make sure Patch 1 faithfully moved batch notify from dataplane
code to generic virtio-blk code without affecting performance.
If there is no difference then it means batch notify decreases
performance for some workloads (obviously not the same workload that
Ming Lei was running).
Stefan
* Re: [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support
2016-06-03 0:19 ` [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support Stefan Hajnoczi
@ 2016-06-03 22:26 ` Stefan Hajnoczi
2016-06-04 15:49 ` Roman Penyaev
2016-06-06 15:16 ` Paolo Bonzini
0 siblings, 2 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2016-06-03 22:26 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, Fam Zheng, Ming Lei, Christian Borntraeger, Roman Pen,
Paolo Bonzini
On Thu, Jun 02, 2016 at 05:19:41PM -0700, Stefan Hajnoczi wrote:
> On Mon, May 30, 2016 at 06:25:58PM -0700, Stefan Hajnoczi wrote:
> > [...]
>
> There is a significant performance regression due to batch notify:
>
> $ ./analyze.py runs/
> Name IOPS Error
> unpatched-d6550e9ed2 19269820.2 ± 1.36%
> unpatched-d6550e9ed2-2 19567358.4 ± 2.42%
> v2-batch-only-f27ed9a4d9 16252227.2 ± 6.09%
> v2-no-dataplane 14560225.4 ± 5.16%
> v2-no-dataplane-2 14622535.6 ± 10.08%
> v2-no-dataplane-3 13960670.8 ± 7.11%
>
> unpatched-d6550e9ed2 is without this patch series.
> v2-batch-only-f27ed9a4d9 is with Patch 1 only. v2-no-dataplane is with
> the patch series (dataplane is not enabled in any of these tests).
>
> Next I will compare unpatched dataplane against patched dataplane. I
> want to make sure Patch 1 faithfully moved batch notify from dataplane
> code to generic virtio-blk code without affecting performance.
>
> If there is no difference then it means batch notify decreases
> performance for some workloads (obviously not the same workload that
> Ming Lei was running).
It turns out that Patch 1 slows down dataplane even though the code
looks equivalent. After a lot of poking it turned out to be a subtle
issue:
The order of BHs in the AioContext->first_bh list affects performance.
Linux AIO (block/linux-aio.c) invokes completion callbacks from a BH.
Performance is much better if virtio-blk.c's batch BH is after the
completion BH.
The "fast" ordering notifies the guest in ~300 nanoseconds after the
last request completion.
The "slow" ordering sometimes takes 100 microseconds after the last
request completion before the guest is notified. It probably depends on
whether the event loop is kicked by another source.
I'm thinking of scrapping the batch BH and instead using a notify
plug/unplug callback to suppress notification until the last request has
been processed.
I also checked that batch notification does indeed improve performance
compared to no batching. It offers a nice boost so we do want to port
the feature from dataplane to non-dataplane.
For the time being: consider this patch series broken due to the
performance regression.
Stefan
* Re: [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support
2016-06-03 22:26 ` Stefan Hajnoczi
@ 2016-06-04 15:49 ` Roman Penyaev
2016-06-06 15:16 ` Paolo Bonzini
1 sibling, 0 replies; 13+ messages in thread
From: Roman Penyaev @ 2016-06-04 15:49 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: qemu-devel, Kevin Wolf, Fam Zheng, Ming Lei,
Christian Borntraeger, Paolo Bonzini
Hi,
On Sat, Jun 4, 2016 at 12:26 AM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> On Thu, Jun 02, 2016 at 05:19:41PM -0700, Stefan Hajnoczi wrote:
>> On Mon, May 30, 2016 at 06:25:58PM -0700, Stefan Hajnoczi wrote:
>> > [...]
>>
>> There is a significant performance regression due to batch notify:
>>
>> $ ./analyze.py runs/
>> Name IOPS Error
>> unpatched-d6550e9ed2 19269820.2 ± 1.36%
>> unpatched-d6550e9ed2-2 19567358.4 ± 2.42%
>> v2-batch-only-f27ed9a4d9 16252227.2 ± 6.09%
>> v2-no-dataplane 14560225.4 ± 5.16%
>> v2-no-dataplane-2 14622535.6 ± 10.08%
>> v2-no-dataplane-3 13960670.8 ± 7.11%
>>
>> unpatched-d6550e9ed2 is without this patch series.
>> v2-batch-only-f27ed9a4d9 is with Patch 1 only. v2-no-dataplane is with
>> the patch series (dataplane is not enabled in any of these tests).
>>
>> Next I will compare unpatched dataplane against patched dataplane. I
>> want to make sure Patch 1 faithfully moved batch notify from dataplane
>> code to generic virtio-blk code without affecting performance.
>>
>> If there is no difference then it means batch notify decreases
>> performance for some workloads (obviously not the same workload that
>> Ming Lei was running).
>
> It turns out that Patch 1 slows down dataplane even though the code
> looks equivalent. After a lot of poking it turned out to be a subtle
> issue:
>
> The order of BHs in the AioContext->first_bh list affects performance.
> Linux AIO (block/linux-aio.c) invokes completion callbacks from a BH.
> Performance is much better if virtio-blk.c's batch BH is after the
> completion BH.
>
> The "fast" ordering notifies the guest in ~300 nanoseconds after the
> last request completion.
>
> The "slow" ordering sometimes takes 100 microseconds after the last
> request completion before the guest is notified. It probably depends on
> whether the event loop is kicked by another source.
>
> I'm thinking of scrapping the batch BH and instead using a notify
> plug/unplug callback to suppress notification until the last request has
> been processed.
>
> I also checked that batch notification does indeed improve performance
> compared to no batching. It offers a nice boost so we do want to port
> the feature from dataplane to non-dataplane.
>
> For the time being: consider this patch series broken due to the
> performance regression.
>
> Stefan
Stefan, could you please share your workloads? I tried with my fio scripts and did
not notice any significant difference. It would be interesting to understand
the root cause.
--
Roman
* Re: [Qemu-devel] [PATCH v2 0/8] virtio-blk: multiqueue support
2016-06-03 22:26 ` Stefan Hajnoczi
2016-06-04 15:49 ` Roman Penyaev
@ 2016-06-06 15:16 ` Paolo Bonzini
1 sibling, 0 replies; 13+ messages in thread
From: Paolo Bonzini @ 2016-06-06 15:16 UTC (permalink / raw)
To: Stefan Hajnoczi, qemu-devel
Cc: Kevin Wolf, Fam Zheng, Ming Lei, Christian Borntraeger, Roman Pen
On 04/06/2016 00:26, Stefan Hajnoczi wrote:
> It turns out that Patch 1 slows down dataplane even though the code
> looks equivalent. After a lot of poking it turned out to be a subtle
> issue:
>
> The order of BHs in the AioContext->first_bh list affects performance.
> Linux AIO (block/linux-aio.c) invokes completion callbacks from a BH.
> Performance is much better if virtio-blk.c's batch BH is after the
> completion BH.
>
> The "fast" ordering notifies the guest in ~300 nanoseconds after the
> last request completion.
>
> The "slow" ordering sometimes takes 100 microseconds after the last
> request completion before the guest is notified. It probably depends on
> whether the event loop is kicked by another source.
>
> I'm thinking of scrapping the batch BH and instead using a notify
> plug/unplug callback to suppress notification until the last request has
> been processed.
>
> I also checked that batch notification does indeed improve performance
> compared to no batching. It offers a nice boost so we do want to port
> the feature from dataplane to non-dataplane.
>
> For the time being: consider this patch series broken due to the
> performance regression.
Thanks for the nice analysis!
Paolo