From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: David Hildenbrand <david@redhat.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Germano Veit Michel <germano@redhat.com>,
Raphael Norwitz <raphael.norwitz@nutanix.com>
Subject: [PATCH v1 10/15] libvhost-user: Factor out search for memory region by GPA and simplify
Date: Fri, 2 Feb 2024 22:53:27 +0100
Message-ID: <20240202215332.118728-11-david@redhat.com>
In-Reply-To: <20240202215332.118728-1-david@redhat.com>

Memory regions cannot overlap, and if we ever were to hit that case,
something else would be seriously flawed.

For example, when the vhost code in QEMU decides to increase the size of
memory regions to cover full huge pages, it makes sure to never create
overlaps, and if an overlap were ever to arise, it would bail out.

QEMU commits 48d7c9757749 ("vhost: Merge sections added to temporary
list"), c1ece84e7c93 ("vhost: Huge page align and merge") and
e7b94a84b6cb ("vhost: Allow adjoining regions") added that handling and
clarified why overlaps are impossible.

Consequently, each GPA can belong to at most one memory region; anything
else would be a bug. Let's factor out the search to prepare for further
changes.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
subprojects/libvhost-user/libvhost-user.c | 79 +++++++++++++----------
1 file changed, 45 insertions(+), 34 deletions(-)
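
Not part of the patch itself: a minimal sketch of the non-overlap
invariant the commit message relies on. regions_disjoint() is a
hypothetical helper that does not exist in libvhost-user; it only uses
the VuDevRegion fields (gpa, size) that the diff below also uses, and it
assumes nothing about region ordering.

    /* Sketch only: check that no two regions intersect in GPA space. */
    static bool
    regions_disjoint(VuDev *dev)
    {
        unsigned int i, j;

        for (i = 0; i < dev->nregions; i++) {
            for (j = i + 1; j < dev->nregions; j++) {
                VuDevRegion *a = &dev->regions[i];
                VuDevRegion *b = &dev->regions[j];

                /* [a->gpa, a->gpa + a->size) vs. [b->gpa, b->gpa + b->size) */
                if (a->gpa < b->gpa + b->size && b->gpa < a->gpa + a->size) {
                    return false;
                }
            }
        }
        return true;
    }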
diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 22154b217f..d036b54ed0 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -195,30 +195,47 @@ vu_panic(VuDev *dev, const char *msg, ...)
*/
}
+/* Search for a memory region that covers this guest physical address. */
+static VuDevRegion *
+vu_gpa_to_mem_region(VuDev *dev, uint64_t guest_addr)
+{
+ unsigned int i;
+
+ /*
+ * Memory regions cannot overlap in guest physical address space. Each
+ * GPA belongs to exactly one memory region, so there can only be one
+ * match.
+ */
+ for (i = 0; i < dev->nregions; i++) {
+ VuDevRegion *cur = &dev->regions[i];
+
+ if (guest_addr >= cur->gpa && guest_addr < cur->gpa + cur->size) {
+ return cur;
+ }
+ }
+ return NULL;
+}
+
/* Translate guest physical address to our virtual address. */
void *
vu_gpa_to_va(VuDev *dev, uint64_t *plen, uint64_t guest_addr)
{
- unsigned int i;
+ VuDevRegion *r;
if (*plen == 0) {
return NULL;
}
- /* Find matching memory region. */
- for (i = 0; i < dev->nregions; i++) {
- VuDevRegion *r = &dev->regions[i];
-
- if ((guest_addr >= r->gpa) && (guest_addr < (r->gpa + r->size))) {
- if ((guest_addr + *plen) > (r->gpa + r->size)) {
- *plen = r->gpa + r->size - guest_addr;
- }
- return (void *)(uintptr_t)
- guest_addr - r->gpa + r->mmap_addr + r->mmap_offset;
- }
+ r = vu_gpa_to_mem_region(dev, guest_addr);
+ if (!r) {
+ return NULL;
}
- return NULL;
+ if ((guest_addr + *plen) > (r->gpa + r->size)) {
+ *plen = r->gpa + r->size - guest_addr;
+ }
+ return (void *)(uintptr_t)guest_addr - r->gpa + r->mmap_addr +
+ r->mmap_offset;
}
/* Translate qemu virtual address to our virtual address. */
@@ -854,8 +871,8 @@ static inline bool reg_equal(VuDevRegion *vudev_reg,
static bool
vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
VhostUserMemoryRegion m = vmsg->payload.memreg.region, *msg_region = &m;
- unsigned int i;
- bool found = false;
+ unsigned int idx;
+ VuDevRegion *r;
if (vmsg->fd_num > 1) {
vmsg_close_fds(vmsg);
@@ -882,28 +899,22 @@ vu_rem_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
DPRINT(" mmap_offset 0x%016"PRIx64"\n",
msg_region->mmap_offset);
- for (i = 0; i < dev->nregions; i++) {
- if (reg_equal(&dev->regions[i], msg_region)) {
- VuDevRegion *r = &dev->regions[i];
-
- munmap((void *)(uintptr_t)r->mmap_addr, r->size + r->mmap_offset);
-
- /* Shift all affected entries by 1 to close the hole at index. */
- memmove(dev->regions + i, dev->regions + i + 1,
- sizeof(VuDevRegion) * (dev->nregions - i - 1));
- DPRINT("Successfully removed a region\n");
- dev->nregions--;
- i--;
-
- found = true;
- break;
- }
- }
-
- if (!found) {
+ r = vu_gpa_to_mem_region(dev, msg_region->guest_phys_addr);
+ if (!r || !reg_equal(r, msg_region)) {
+ vmsg_close_fds(vmsg);
vu_panic(dev, "Specified region not found\n");
+ return false;
}
+ munmap((void *)(uintptr_t)r->mmap_addr, r->size + r->mmap_offset);
+
+ idx = r - dev->regions;
+ assert(idx < dev->nregions);
+ /* Shift all affected entries by 1 to close the hole. */
+ memmove(r, r + 1, sizeof(VuDevRegion) * (dev->nregions - idx - 1));
+ DPRINT("Successfully removed a region\n");
+ dev->nregions--;
+
vmsg_close_fds(vmsg);
return false;
--
2.43.0
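
A hedged usage sketch of the clamping behavior in vu_gpa_to_va() after
this change: the caller passes the desired length via *plen and gets
back at most the part that is contiguous within the matching region.
The GPA (0x100000) and the 4 KiB length below are made-up values for
illustration only.

    /* Sketch only: translate a 4 KiB range starting at a made-up GPA. */
    uint64_t len = 4096;
    void *va = vu_gpa_to_va(dev, &len, 0x100000);

    if (!va) {
        /* No memory region covers this GPA at all. */
    } else if (len < 4096) {
        /*
         * The range crossed the end of the region: only the first len
         * bytes are contiguous at va. A caller needing the rest must
         * translate again starting at 0x100000 + len.
         */
    }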