From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
Gautam Dawar <gdawar@xilinx.com>,
Liuxiangdong <liuxiangdong5@huawei.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Cornelia Huck <cohuck@redhat.com>,
Parav Pandit <parav@mellanox.com>, Eric Blake <eblake@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Eli Cohen <eli@mellanox.com>, Cindy Lu <lulu@redhat.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Harpreet Singh Anand <hanand@xilinx.com>
Subject: [PATCH v5 15/20] vdpa: Export vhost_vdpa_dma_map and unmap calls
Date: Tue, 19 Jul 2022 11:56:24 +0200
Message-ID: <20220719095629.3031338-16-eperezma@redhat.com>
In-Reply-To: <20220719095629.3031338-1-eperezma@redhat.com>
The shadow CVQ will copy buffers into QEMU's virtual address space, so we
avoid TOCTOU attacks in which the guest could leave the QEMU device model
and the vdpa device in different states.

To do that, the shadow CVQ code needs to be able to map these new buffers
to the device, so export vhost_vdpa_dma_map and vhost_vdpa_dma_unmap.
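For illustration, the exported helpers boil down to writing a vhost IOTLB
message to the device fd. The sketch below rebuilds just the message
construction as standalone C; the struct and constant definitions are
simplified stand-ins modeled on the kernel's vhost_types.h (the real code
uses struct vhost_msg_v2 and the device's v->device_fd, and the write()
itself is omitted here):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins for the kernel's vhost_types.h definitions. */
#define VHOST_IOTLB_MSG_V2 0x2
#define VHOST_IOTLB_UPDATE 2
#define VHOST_ACCESS_RO    0x1
#define VHOST_ACCESS_RW    0x3

struct iotlb_msg {
    uint64_t iova;   /* device IOVA to map */
    uint64_t size;   /* length of the mapping */
    uint64_t uaddr;  /* QEMU virtual address backing it */
    uint8_t  perm;   /* VHOST_ACCESS_* */
    uint8_t  type;   /* VHOST_IOTLB_UPDATE / INVALIDATE */
};

struct msg_v2 {
    uint32_t type;
    struct iotlb_msg iotlb;
};

/* Build the IOTLB update message that a map call would write() to the
 * vhost-vdpa device fd. */
static struct msg_v2 build_map_msg(uint64_t iova, uint64_t size,
                                   void *vaddr, bool readonly)
{
    struct msg_v2 msg;

    memset(&msg, 0, sizeof(msg));
    msg.type = VHOST_IOTLB_MSG_V2;
    msg.iotlb.iova = iova;
    msg.iotlb.size = size;
    msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
    msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
    msg.iotlb.type = VHOST_IOTLB_UPDATE;
    return msg;
}
```

The unmap path is symmetric: it sends VHOST_IOTLB_INVALIDATE with only the
iova and size filled in.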
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
include/hw/virtio/vhost-vdpa.h | 4 ++++
hw/virtio/vhost-vdpa.c | 7 +++----
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index a29dbb3f53..7214eb47dc 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -39,4 +39,8 @@ typedef struct vhost_vdpa {
VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
} VhostVDPA;
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
+ void *vaddr, bool readonly);
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size);
+
#endif
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0b13e98471..96997210be 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -71,8 +71,8 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
return false;
}
-static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
- void *vaddr, bool readonly)
+int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
+ void *vaddr, bool readonly)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
@@ -97,8 +97,7 @@ static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
return ret;
}
-static int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova,
- hwaddr size)
+int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, hwaddr iova, hwaddr size)
{
struct vhost_msg_v2 msg = {};
int fd = v->device_fd;
--
2.31.1