* [PATCH 0/2] block/export: add vhost-user-blk multi-queue support
From: Stefan Hajnoczi @ 2020-10-01 14:46 UTC
To: qemu-devel
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block,
Markus Armbruster, Coiby Xu, Max Reitz, Stefan Hajnoczi,
Paolo Bonzini
The vhost-user-blk server currently supports only 1 virtqueue. Add a
'num-queues' option for multi-queue. Both --device
vhost-user-blk-pci,num-queues= and --export vhost-user-blk,num-queues= need to
be set for multi-queue to work (otherwise the device falls back to 1
virtqueue).
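As a rough sketch of the intended usage (the socket path, image path, memory
size, and queue count are illustrative, and most guest options are omitted;
vhost-user also needs the guest RAM to come from a shared memory backend):

  # Export a disk image with 4 request virtqueues
  qemu-storage-daemon \
      --blockdev driver=file,node-name=disk0,filename=disk.img \
      --export type=vhost-user-blk,id=export0,node-name=disk0,writable=on,addr.type=unix,addr.path=/tmp/vhost-user-blk.sock,num-queues=4

  # Start the guest with a matching num-queues= value
  qemu-system-x86_64 -smp 4 -m 4G \
      -object memory-backend-memfd,id=mem,size=4G,share=on \
      -numa node,memdev=mem \
      -chardev socket,id=char0,path=/tmp/vhost-user-blk.sock \
      -device vhost-user-blk-pci,chardev=char0,num-queues=4 \
      ...

If either side is left at its default of 1, the guest ends up with a single
virtqueue.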
Based-on: 20200924151549.913737-1-stefanha@redhat.com ("[PATCH v2 00/13] block/export: convert vhost-user-blk-server to block exports API")
Stefan Hajnoczi (2):
block/export: add vhost-user-blk multi-queue support
tests/qtest: add multi-queue test case to vhost-user-blk-test
qapi/block-export.json | 6 ++-
block/export/vhost-user-blk-server.c | 24 ++++++---
tests/qtest/vhost-user-blk-test.c | 81 ++++++++++++++++++++++++++--
3 files changed, 99 insertions(+), 12 deletions(-)
--
2.26.2
* [PATCH 1/2] block/export: add vhost-user-blk multi-queue support
From: Stefan Hajnoczi @ 2020-10-01 14:46 UTC
To: qemu-devel
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block,
Markus Armbruster, Coiby Xu, Max Reitz, Stefan Hajnoczi,
Paolo Bonzini
Allow the number of queues to be configured using --export
vhost-user-blk,num-queues=N. This setting should match the QEMU --device
vhost-user-blk-pci,num-queues=N setting but QEMU vhost-user-blk.c lowers
its own value if the vhost-user-blk backend offers fewer queues than
QEMU.
The vhost-user-blk-server.c code is already capable of multi-queue. All
virtqueue processing runs in the same AioContext. No new locking is
needed.
Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
Note that the feature bit only announces the presence of the num_queues
configuration space field. It does not promise that there is more than 1
virtqueue, so we can set it unconditionally.
I tested multi-queue by running a random read fio test with numjobs=4 on
an -smp 4 guest. After the benchmark finished, the guest /proc/interrupts
file showed activity on all 4 virtio-blk MSI-X interrupts. The /sys/block/vda/mq/
directory showed that Linux blk-mq had 4 queues configured.
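For reference, a rough sketch of that manual check (the fio parameters, device
name, and interrupt names are illustrative and depend on the guest setup):

  # In the guest: 4 parallel random-read jobs against the vhost-user-blk disk
  fio --name=randread --filename=/dev/vda --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=30 --time_based

  # Each request virtqueue should show MSI-X interrupt activity
  grep virtio /proc/interrupts

  # blk-mq should report one hardware queue per virtqueue
  ls /sys/block/vda/mq/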
An automated test is included in the next commit.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
qapi/block-export.json | 6 +++++-
block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
2 files changed, 23 insertions(+), 7 deletions(-)
diff --git a/qapi/block-export.json b/qapi/block-export.json
index a793e34af9..17020de257 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -93,11 +93,15 @@
# SocketAddress types are supported. Passed fds must be UNIX domain
# sockets.
# @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
+# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
+# to 1.
#
# Since: 5.2
##
{ 'struct': 'BlockExportOptionsVhostUserBlk',
- 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
+ 'data': { 'addr': 'SocketAddress',
+ '*logical-block-size': 'size',
+ '*num-queues': 'uint16'} }
##
# @NbdServerAddOptions:
diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 81072a5a46..bf84b45ecd 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -21,7 +21,7 @@
#include "util/block-helpers.h"
enum {
- VHOST_USER_BLK_MAX_QUEUES = 1,
+ VHOST_USER_BLK_NUM_QUEUES_DEFAULT = 1,
};
struct virtio_blk_inhdr {
unsigned char status;
@@ -242,6 +242,7 @@ static uint64_t vu_blk_get_features(VuDev *dev)
1ull << VIRTIO_BLK_F_DISCARD |
1ull << VIRTIO_BLK_F_WRITE_ZEROES |
1ull << VIRTIO_BLK_F_CONFIG_WCE |
+ 1ull << VIRTIO_BLK_F_MQ |
1ull << VIRTIO_F_VERSION_1 |
1ull << VIRTIO_RING_F_INDIRECT_DESC |
1ull << VIRTIO_RING_F_EVENT_IDX |
@@ -334,7 +335,9 @@ static void blk_aio_detach(void *opaque)
static void
vu_blk_initialize_config(BlockDriverState *bs,
- struct virtio_blk_config *config, uint32_t blk_size)
+ struct virtio_blk_config *config,
+ uint32_t blk_size,
+ uint16_t num_queues)
{
config->capacity = bdrv_getlength(bs) >> BDRV_SECTOR_BITS;
config->blk_size = blk_size;
@@ -342,7 +345,7 @@ vu_blk_initialize_config(BlockDriverState *bs,
config->seg_max = 128 - 2;
config->min_io_size = 1;
config->opt_io_size = 1;
- config->num_queues = VHOST_USER_BLK_MAX_QUEUES;
+ config->num_queues = num_queues;
config->max_discard_sectors = 32768;
config->max_discard_seg = 1;
config->discard_sector_alignment = config->blk_size >> 9;
@@ -364,6 +367,7 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
BlockExportOptionsVhostUserBlk *vu_opts = &opts->u.vhost_user_blk;
Error *local_err = NULL;
uint64_t logical_block_size;
+ uint16_t num_queues = VHOST_USER_BLK_NUM_QUEUES_DEFAULT;
vexp->writable = opts->writable;
vexp->blkcfg.wce = 0;
@@ -381,16 +385,24 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
}
vexp->blk_size = logical_block_size;
blk_set_guest_block_size(exp->blk, logical_block_size);
+
+ if (vu_opts->has_num_queues) {
+ num_queues = vu_opts->num_queues;
+ }
+ if (num_queues == 0) {
+ error_setg(errp, "num-queues must be greater than 0");
+ return -EINVAL;
+ }
+
vu_blk_initialize_config(blk_bs(exp->blk), &vexp->blkcfg,
- logical_block_size);
+ logical_block_size, num_queues);
blk_set_allow_aio_context_change(exp->blk, true);
blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
vexp);
if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
- VHOST_USER_BLK_MAX_QUEUES, &vu_blk_iface,
- errp)) {
+ num_queues, &vu_blk_iface, errp)) {
blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
blk_aio_detach, vexp);
return -EADDRNOTAVAIL;
--
2.26.2
* [PATCH 2/2] tests/qtest: add multi-queue test case to vhost-user-blk-test
From: Stefan Hajnoczi @ 2020-10-01 14:46 UTC
To: qemu-devel
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block,
Markus Armbruster, Coiby Xu, Max Reitz, Stefan Hajnoczi,
Paolo Bonzini
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
tests/qtest/vhost-user-blk-test.c | 81 +++++++++++++++++++++++++++++--
1 file changed, 76 insertions(+), 5 deletions(-)
diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 42e4cfde82..b9f35191df 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -559,6 +559,67 @@ static void pci_hotplug(void *obj, void *data, QGuestAllocator *t_alloc)
qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
}
+static void multiqueue(void *obj, void *data, QGuestAllocator *t_alloc)
+{
+ QVirtioPCIDevice *pdev1 = obj;
+ QVirtioDevice *dev1 = &pdev1->vdev;
+ QVirtioPCIDevice *pdev8;
+ QVirtioDevice *dev8;
+ QTestState *qts = pdev1->pdev->bus->qts;
+ uint64_t features;
+ uint16_t num_queues;
+
+ /*
+ * The primary device has 1 queue and VIRTIO_BLK_F_MQ is not enabled. The
+ * VIRTIO specification allows VIRTIO_BLK_F_MQ to be enabled when there is
+ * only 1 virtqueue, but --device vhost-user-blk-pci doesn't do this (which
+ * is also spec-compliant).
+ */
+ features = qvirtio_get_features(dev1);
+ g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ), ==, 0);
+ features = features & ~(QVIRTIO_F_BAD_FEATURE |
+ (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+ (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+ (1u << VIRTIO_BLK_F_SCSI));
+ qvirtio_set_features(dev1, features);
+
+ /* Hotplug a secondary device with 8 queues */
+ qtest_qmp_device_add(qts, "vhost-user-blk-pci", "drv1",
+ "{'addr': %s, 'chardev': 'char2', 'num-queues': 8}",
+ stringify(PCI_SLOT_HP) ".0");
+
+ pdev8 = virtio_pci_new(pdev1->pdev->bus,
+ &(QPCIAddress) {
+ .devfn = QPCI_DEVFN(PCI_SLOT_HP, 0)
+ });
+ g_assert_nonnull(pdev8);
+ g_assert_cmpint(pdev8->vdev.device_type, ==, VIRTIO_ID_BLOCK);
+
+ qos_object_start_hw(&pdev8->obj);
+
+ dev8 = &pdev8->vdev;
+ features = qvirtio_get_features(dev8);
+ g_assert_cmpint(features & (1u << VIRTIO_BLK_F_MQ),
+ ==,
+ (1u << VIRTIO_BLK_F_MQ));
+ features = features & ~(QVIRTIO_F_BAD_FEATURE |
+ (1u << VIRTIO_RING_F_INDIRECT_DESC) |
+ (1u << VIRTIO_F_NOTIFY_ON_EMPTY) |
+ (1u << VIRTIO_BLK_F_SCSI) |
+ (1u << VIRTIO_BLK_F_MQ));
+ qvirtio_set_features(dev8, features);
+
+ num_queues = qvirtio_config_readw(dev8,
+ offsetof(struct virtio_blk_config, num_queues));
+ g_assert_cmpint(num_queues, ==, 8);
+
+ qvirtio_pci_device_disable(pdev8);
+ qos_object_destroy(&pdev8->obj);
+
+ /* unplug secondary disk */
+ qpci_unplug_acpi_device_test(qts, "drv1", PCI_SLOT_HP);
+}
+
/*
* Check that setting the vring addr on a non-existent virtqueue does
* not crash.
@@ -643,7 +704,8 @@ static void quit_storage_daemon(void *qmp_test_state)
g_free(qmp_test_state);
}
-static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
+static char *start_vhost_user_blk(GString *cmd_line, int vus_instances,
+ int num_queues)
{
const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
int fd, qmp_fd, i;
@@ -675,8 +737,8 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
g_string_append_printf(storage_daemon_command,
"--blockdev driver=file,node-name=disk%d,filename=%s "
"--export type=vhost-user-blk,id=disk%d,addr.type=unix,addr.path=%s,"
- "node-name=disk%i,writable=on ",
- i, img_path, i, sock_path, i);
+ "node-name=disk%i,writable=on,num-queues=%d ",
+ i, img_path, i, sock_path, i, num_queues);
g_string_append_printf(cmd_line, "-chardev socket,id=char%d,path=%s ",
i + 1, sock_path);
@@ -705,7 +767,7 @@ static char *start_vhost_user_blk(GString *cmd_line, int vus_instances)
static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
{
- start_vhost_user_blk(cmd_line, 1);
+ start_vhost_user_blk(cmd_line, 1, 1);
return arg;
}
@@ -719,7 +781,13 @@ static void *vhost_user_blk_test_setup(GString *cmd_line, void *arg)
static void *vhost_user_blk_hotplug_test_setup(GString *cmd_line, void *arg)
{
/* "-chardev socket,id=char2" is used for pci_hotplug*/
- start_vhost_user_blk(cmd_line, 2);
+ start_vhost_user_blk(cmd_line, 2, 1);
+ return arg;
+}
+
+static void *vhost_user_blk_multiqueue_test_setup(GString *cmd_line, void *arg)
+{
+ start_vhost_user_blk(cmd_line, 2, 8);
return arg;
}
@@ -746,6 +814,9 @@ static void register_vhost_user_blk_test(void)
opts.before = vhost_user_blk_hotplug_test_setup;
qos_add_test("hotplug", "vhost-user-blk-pci", pci_hotplug, &opts);
+
+ opts.before = vhost_user_blk_multiqueue_test_setup;
+ qos_add_test("multiqueue", "vhost-user-blk-pci", multiqueue, &opts);
}
libqos_init(register_vhost_user_blk_test);
--
2.26.2
* Re: [PATCH 1/2] block/export: add vhost-user-blk multi-queue support
From: Markus Armbruster @ 2020-10-02 5:32 UTC
To: Stefan Hajnoczi
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block, qemu-devel,
Coiby Xu, Max Reitz, Paolo Bonzini
Stefan Hajnoczi <stefanha@redhat.com> writes:
> Allow the number of queues to be configured using --export
> vhost-user-blk,num-queues=N. This setting should match the QEMU --device
> vhost-user-blk-pci,num-queues=N setting but QEMU vhost-user-blk.c lowers
> its own value if the vhost-user-blk backend offers fewer queues than
> QEMU.
>
> The vhost-user-blk-server.c code is already capable of multi-queue. All
> virtqueue processing runs in the same AioContext. No new locking is
> needed.
>
> Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
> Note that the feature bit only announces the presence of the num_queues
> configuration space field. It does not promise that there is more than 1
> virtqueue, so we can set it unconditionally.
>
> I tested multi-queue by running a random read fio test with numjobs=4 on
> an -smp 4 guest. After the benchmark finished the guest /proc/interrupts
> file showed activity on all 4 virtio-blk MSI-X. The /sys/block/vda/mq/
> directory shows that Linux blk-mq has 4 queues configured.
>
> An automated test is included in the next commit.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> qapi/block-export.json | 6 +++++-
> block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
> 2 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/qapi/block-export.json b/qapi/block-export.json
> index a793e34af9..17020de257 100644
> --- a/qapi/block-export.json
> +++ b/qapi/block-export.json
> @@ -93,11 +93,15 @@
> # SocketAddress types are supported. Passed fds must be UNIX domain
> # sockets.
> # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
> +# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
> +# to 1.
> #
> # Since: 5.2
> ##
> { 'struct': 'BlockExportOptionsVhostUserBlk',
> - 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
> + 'data': { 'addr': 'SocketAddress',
> + '*logical-block-size': 'size',
Tab damage.
> + '*num-queues': 'uint16'} }
Out of curiosity: what made you pick 16 bit signed? net.json uses both
32 and 64 bit signed. Odd...
>
> ##
> # @NbdServerAddOptions:
Acked-by: Markus Armbruster <armbru@redhat.com>
* Re: [PATCH 1/2] block/export: add vhost-user-blk multi-queue support
From: Stefan Hajnoczi @ 2020-10-02 13:47 UTC
To: Markus Armbruster
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block, qemu-devel,
Coiby Xu, Max Reitz, Paolo Bonzini
On Fri, Oct 02, 2020 at 07:32:39AM +0200, Markus Armbruster wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
>
> > Allow the number of queues to be configured using --export
> > vhost-user-blk,num-queues=N. This setting should match the QEMU --device
> > vhost-user-blk-pci,num-queues=N setting but QEMU vhost-user-blk.c lowers
> > its own value if the vhost-user-blk backend offers fewer queues than
> > QEMU.
> >
> > The vhost-user-blk-server.c code is already capable of multi-queue. All
> > virtqueue processing runs in the same AioContext. No new locking is
> > needed.
> >
> > Add the num-queues=N option and set the VIRTIO_BLK_F_MQ feature bit.
> > Note that the feature bit only announces the presence of the num_queues
> > configuration space field. It does not promise that there is more than 1
> > virtqueue, so we can set it unconditionally.
> >
> > I tested multi-queue by running a random read fio test with numjobs=4 on
> > an -smp 4 guest. After the benchmark finished the guest /proc/interrupts
> > file showed activity on all 4 virtio-blk MSI-X. The /sys/block/vda/mq/
> > directory shows that Linux blk-mq has 4 queues configured.
> >
> > An automated test is included in the next commit.
> >
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> > qapi/block-export.json | 6 +++++-
> > block/export/vhost-user-blk-server.c | 24 ++++++++++++++++++------
> > 2 files changed, 23 insertions(+), 7 deletions(-)
> >
> > diff --git a/qapi/block-export.json b/qapi/block-export.json
> > index a793e34af9..17020de257 100644
> > --- a/qapi/block-export.json
> > +++ b/qapi/block-export.json
> > @@ -93,11 +93,15 @@
> > # SocketAddress types are supported. Passed fds must be UNIX domain
> > # sockets.
> > # @logical-block-size: Logical block size in bytes. Defaults to 512 bytes.
> > +# @num-queues: Number of request virtqueues. Must be greater than 0. Defaults
> > +# to 1.
> > #
> > # Since: 5.2
> > ##
> > { 'struct': 'BlockExportOptionsVhostUserBlk',
> > - 'data': { 'addr': 'SocketAddress', '*logical-block-size': 'size' } }
> > + 'data': { 'addr': 'SocketAddress',
> > + '*logical-block-size': 'size',
>
> Tab damage.
Oops, thanks! I have updated my editor configuration to use 4-space
indents for .json files :).
> > + '*num-queues': 'uint16'} }
>
> Out of curiosity: what made you pick 16 bit signed? net.json uses both
> 32 and 64 bit signed. Odd...
struct virtio_blk_config {
    ...
    __u16 num_queues;
    ...
};
Also, virtio-pci and virtio-ccw use 16-bit types for the queue count.
* Re: [PATCH 0/2] block/export: add vhost-user-blk multi-queue support
From: Stefan Hajnoczi @ 2020-10-09 10:16 UTC
To: qemu-devel
Cc: Kevin Wolf, Laurent Vivier, Thomas Huth, qemu-block,
Markus Armbruster, Coiby Xu, Max Reitz, Paolo Bonzini
On Thu, Oct 01, 2020 at 03:46:02PM +0100, Stefan Hajnoczi wrote:
> The vhost-user-blk server currently only supports 1 virtqueue. Add a
> 'num-queues' option for multi-queue. Both --device
> vhost-user-blk-pci,num-queues= and --export vhost-user-blk,num-queues= need to
> be set in order for multi-queue to work (otherwise it will fall back to 1
> virtqueue).
>
> Based-on: 20200924151549.913737-1-stefanha@redhat.com ("[PATCH v2 00/13] block/export: convert vhost-user-blk-server to block exports API")
>
> Stefan Hajnoczi (2):
> block/export: add vhost-user-blk multi-queue support
> tests/qtest: add multi-queue test case to vhost-user-blk-test
>
> qapi/block-export.json | 6 ++-
> block/export/vhost-user-blk-server.c | 24 ++++++---
> tests/qtest/vhost-user-blk-test.c | 81 ++++++++++++++++++++++++++--
> 3 files changed, 99 insertions(+), 12 deletions(-)
>
> --
> 2.26.2
>
Thanks, applied to my block tree with tab damage fixed:
https://github.com/stefanha/qemu/commits/block
Stefan