* [PATCH 1/4] um: virtio_uml: send SET_MEM_TABLE message with the exact size
2024-11-03 21:28 [PATCH 0/4] Enable virtio-fs and virtio-snd in UML Benjamin Berg
@ 2024-11-03 21:28 ` Benjamin Berg
2024-11-03 21:28 ` [PATCH 2/4] um: virtio_uml: use smaller virtqueue sizes for VIRTIO_ID_SOUND Benjamin Berg
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Benjamin Berg @ 2024-11-03 21:28 UTC (permalink / raw)
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
The Rust-based userspace vhost devices are very strict and will not
accept the message if it is longer than required. So, only include the
data for the first memory region.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/drivers/virtio_uml.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 4d3e9b9f5b61..c602892f329f 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -623,7 +623,7 @@ static int vhost_user_set_mem_table(struct virtio_uml_device *vu_dev)
{
struct vhost_user_msg msg = {
.header.request = VHOST_USER_SET_MEM_TABLE,
- .header.size = sizeof(msg.payload.mem_regions),
+ .header.size = offsetof(typeof(msg.payload.mem_regions), regions[1]),
.payload.mem_regions.num = 1,
};
unsigned long reserved = uml_reserved - uml_physmem;
--
2.47.0
* [PATCH 2/4] um: virtio_uml: use smaller virtqueue sizes for VIRTIO_ID_SOUND
From: Benjamin Berg @ 2024-11-03 21:28 UTC
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
It appears that different vhost device implementations use different
virtqueue sizes. Add device-specific limits (for now, only for sound)
to ensure that we do not get disconnected unexpectedly.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/drivers/virtio_uml.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index c602892f329f..2e4b4eadd553 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -25,6 +25,7 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/virtio.h>
+#include <linux/virtio_ids.h>
#include <linux/virtio_config.h>
#include <linux/virtio_ring.h>
#include <linux/time-internal.h>
@@ -935,9 +936,19 @@ static struct virtqueue *vu_setup_vq(struct virtio_device *vdev,
struct platform_device *pdev = vu_dev->pdev;
struct virtio_uml_vq_info *info;
struct virtqueue *vq;
- int num = MAX_SUPPORTED_QUEUE_SIZE;
+ int num;
int rc;
+ /* Seems like we need to hard-code the queue size */
+ switch (vu_dev->vdev.id.device) {
+ case VIRTIO_ID_SOUND:
+ num = 64;
+ break;
+ default:
+ num = MAX_SUPPORTED_QUEUE_SIZE;
+ break;
+ }
+
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
rc = -ENOMEM;
--
2.47.0
* Re: [PATCH 2/4] um: virtio_uml: use smaller virtqueue sizes for VIRTIO_ID_SOUND
From: Johannes Berg @ 2024-11-07 17:01 UTC
To: Benjamin Berg, linux-um
On Sun, 2024-11-03 at 22:28 +0100, Benjamin Berg wrote:
> From: Benjamin Berg <benjamin.berg@intel.com>
>
> It appears that the different vhost device implementations use different
> sizes of the virtual queues. Add device specific limitations (for now,
> only for sound), to ensure that we do not get disconnected unexpectedly.
I'm not convinced this makes sense. If anything, it's a workaround for
some specific userspace, but ... do we care enough, rather than just
letting them fix it?
The protocol [1] basically says we decide on the size (see
VHOST_USER_SET_VRING_NUM) and the device doesn't even need to allocate
the memory, that's on us?
[1] https://qemu-project.gitlab.io/qemu/interop/vhost-user.html
So maybe let's see what they were thinking? I'm not sure having this
working is important enough right now to apply such a workaround
without some further discussion.
On PCI it seems that you can and should query the desired queue size
(see e.g. vp_modern_get_queue_size), but with vhost-user that doesn't
seem to be supported at all, so there isn't really a good thing we could
do in that sense.
johannes
* [PATCH 3/4] um: virtio_uml: fix call_fd IRQ allocation
From: Benjamin Berg @ 2024-11-03 21:28 UTC
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
If the device does not support slave requests, then the IRQ will not
have been allocated yet. So initialize the IRQ to UM_IRQ_ALLOC so that
one is allocated if none has been assigned, and store it slightly
later, once we know that it will not be immediately unregistered again.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/drivers/virtio_uml.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 2e4b4eadd553..5b19e9a3447a 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -889,7 +889,7 @@ static int vu_setup_vq_call_fd(struct virtio_uml_device *vu_dev,
{
struct virtio_uml_vq_info *info = vq->priv;
int call_fds[2];
- int rc;
+ int rc, irq;
/* no call FD needed/desired in this case */
if (vu_dev->protocol_features &
@@ -906,19 +906,23 @@ static int vu_setup_vq_call_fd(struct virtio_uml_device *vu_dev,
return rc;
info->call_fd = call_fds[0];
- rc = um_request_irq(vu_dev->irq, info->call_fd, IRQ_READ,
- vu_interrupt, IRQF_SHARED, info->name, vq);
- if (rc < 0)
+ irq = um_request_irq(vu_dev->irq, info->call_fd, IRQ_READ,
+ vu_interrupt, IRQF_SHARED, info->name, vq);
+ if (irq < 0) {
+ rc = irq;
goto close_both;
+ }
rc = vhost_user_set_vring_call(vu_dev, vq->index, call_fds[1]);
if (rc)
goto release_irq;
+ vu_dev->irq = irq;
+
goto out;
release_irq:
- um_free_irq(vu_dev->irq, vq);
+ um_free_irq(irq, vq);
close_both:
os_close_file(call_fds[0]);
out:
@@ -1212,6 +1216,7 @@ static int virtio_uml_probe(struct platform_device *pdev)
vu_dev->vdev.id.vendor = VIRTIO_DEV_ANY_ID;
vu_dev->pdev = pdev;
vu_dev->req_fd = -1;
+ vu_dev->irq = UM_IRQ_ALLOC;
time_travel_propagate_time();
--
2.47.0
* [PATCH 4/4] um: virtio_uml: query the number of vqs if supported
From: Benjamin Berg @ 2024-11-03 21:28 UTC
To: linux-um; +Cc: Benjamin Berg
From: Benjamin Berg <benjamin.berg@intel.com>
When the VHOST_USER_PROTOCOL_F_MQ protocol feature flag is set, we can
query the maximum number of virtual queues. Do so when supported, and
extend the check to verify that we are not trying to allocate more
queues than the device supports.
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
---
arch/um/drivers/vhost_user.h | 4 +++-
arch/um/drivers/virtio_uml.c | 23 ++++++++++++++++++++++-
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/um/drivers/vhost_user.h b/arch/um/drivers/vhost_user.h
index 6f147cd3c9f7..fcfa3b7e021b 100644
--- a/arch/um/drivers/vhost_user.h
+++ b/arch/um/drivers/vhost_user.h
@@ -10,6 +10,7 @@
/* Feature bits */
#define VHOST_USER_F_PROTOCOL_FEATURES 30
/* Protocol feature bits */
+#define VHOST_USER_PROTOCOL_F_MQ 0
#define VHOST_USER_PROTOCOL_F_REPLY_ACK 3
#define VHOST_USER_PROTOCOL_F_SLAVE_REQ 5
#define VHOST_USER_PROTOCOL_F_CONFIG 9
@@ -23,7 +24,8 @@
/* Supported transport features */
#define VHOST_USER_SUPPORTED_F BIT_ULL(VHOST_USER_F_PROTOCOL_FEATURES)
/* Supported protocol features */
-#define VHOST_USER_SUPPORTED_PROTOCOL_F (BIT_ULL(VHOST_USER_PROTOCOL_F_REPLY_ACK) | \
+#define VHOST_USER_SUPPORTED_PROTOCOL_F (BIT_ULL(VHOST_USER_PROTOCOL_F_MQ) | \
+ BIT_ULL(VHOST_USER_PROTOCOL_F_REPLY_ACK) | \
BIT_ULL(VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \
BIT_ULL(VHOST_USER_PROTOCOL_F_CONFIG) | \
BIT_ULL(VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS))
diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 5b19e9a3447a..2b130d57d36e 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -57,6 +57,7 @@ struct virtio_uml_device {
int sock, req_fd, irq;
u64 features;
u64 protocol_features;
+ u64 max_vqs;
u8 status;
u8 registered:1;
u8 suspended:1;
@@ -342,6 +343,17 @@ static int vhost_user_set_protocol_features(struct virtio_uml_device *vu_dev,
protocol_features);
}
+static int vhost_user_get_queue_num(struct virtio_uml_device *vu_dev,
+ u64 *queue_num)
+{
+ int rc = vhost_user_send_no_payload(vu_dev, true,
+ VHOST_USER_GET_QUEUE_NUM);
+
+ if (rc)
+ return rc;
+ return vhost_user_recv_u64(vu_dev, queue_num);
+}
+
static void vhost_user_reply(struct virtio_uml_device *vu_dev,
struct vhost_user_msg *msg, int response)
{
@@ -515,6 +527,15 @@ static int vhost_user_init(struct virtio_uml_device *vu_dev)
return rc;
}
+ if (vu_dev->protocol_features &
+ BIT_ULL(VHOST_USER_PROTOCOL_F_MQ)) {
+ rc = vhost_user_get_queue_num(vu_dev, &vu_dev->max_vqs);
+ if (rc)
+ return rc;
+ } else {
+ vu_dev->max_vqs = U64_MAX;
+ }
+
return 0;
}
@@ -1029,7 +1050,7 @@ static int vu_find_vqs(struct virtio_device *vdev, unsigned nvqs,
struct virtqueue *vq;
/* not supported for now */
- if (WARN_ON(nvqs > 64))
+ if (WARN_ON(nvqs > 64) || WARN_ON(nvqs > vu_dev->max_vqs))
return -EINVAL;
rc = vhost_user_set_mem_table(vu_dev);
--
2.47.0