From: "Michael S. Tsirkin" <mst@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Gerd Hoffmann" <kraxel@redhat.com>,
"Johannes Berg" <johannes.berg@intel.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>
Subject: [PULL v2 25/30] libvhost-user: handle NOFD flag in call/kick/err better
Date: Wed, 26 Feb 2020 04:07:38 -0500 [thread overview]
Message-ID: <20200226090010.708934-26-mst@redhat.com> (raw)
In-Reply-To: <20200226090010.708934-1-mst@redhat.com>
From: Johannes Berg <johannes.berg@intel.com>
The code here is odd: for example, it will print out invalid
file descriptor numbers that were never sent in the message.
Clean that up a bit so it's actually possible to implement
a device that uses polling.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Message-Id: <20200123081708.7817-5-johannes@sipsolutions.net>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
contrib/libvhost-user/libvhost-user.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index 533d55d82a..3abc9689e5 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -948,6 +948,7 @@ static bool
vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
{
int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+ bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
if (index >= dev->max_queues) {
vmsg_close_fds(vmsg);
@@ -955,8 +956,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
return false;
}
- if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
- vmsg->fd_num != 1) {
+ if (nofd) {
+ vmsg_close_fds(vmsg);
+ return true;
+ }
+
+ if (vmsg->fd_num != 1) {
vmsg_close_fds(vmsg);
vu_panic(dev, "Invalid fds in request: %d", vmsg->request);
return false;
@@ -1053,6 +1058,7 @@ static bool
vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
{
int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+ bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
@@ -1066,8 +1072,8 @@ vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
dev->vq[index].kick_fd = -1;
}
- dev->vq[index].kick_fd = vmsg->fds[0];
- DPRINT("Got kick_fd: %d for vq: %d\n", vmsg->fds[0], index);
+ dev->vq[index].kick_fd = nofd ? -1 : vmsg->fds[0];
+ DPRINT("Got kick_fd: %d for vq: %d\n", dev->vq[index].kick_fd, index);
dev->vq[index].started = true;
if (dev->iface->queue_set_started) {
@@ -1147,6 +1153,7 @@ static bool
vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
{
int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+ bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
@@ -1159,14 +1166,14 @@ vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
dev->vq[index].call_fd = -1;
}
- dev->vq[index].call_fd = vmsg->fds[0];
+ dev->vq[index].call_fd = nofd ? -1 : vmsg->fds[0];
/* in case of I/O hang after reconnecting */
- if (eventfd_write(vmsg->fds[0], 1)) {
+ if (dev->vq[index].call_fd != -1 && eventfd_write(vmsg->fds[0], 1)) {
return -1;
}
- DPRINT("Got call_fd: %d for vq: %d\n", vmsg->fds[0], index);
+ DPRINT("Got call_fd: %d for vq: %d\n", dev->vq[index].call_fd, index);
return false;
}
@@ -1175,6 +1182,7 @@ static bool
vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
{
int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+ bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
@@ -1187,7 +1195,7 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
dev->vq[index].err_fd = -1;
}
- dev->vq[index].err_fd = vmsg->fds[0];
+ dev->vq[index].err_fd = nofd ? -1 : vmsg->fds[0];
return false;
}
--
MST