From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: David Hildenbrand <david@redhat.com>,
"Michael S . Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Germano Veit Michel <germano@redhat.com>,
Raphael Norwitz <raphael.norwitz@nutanix.com>
Subject: [PATCH v1 11/15] libvhost-user: Speedup gpa_to_mem_region() and vu_gpa_to_va()
Date: Fri, 2 Feb 2024 22:53:28 +0100
Message-ID: <20240202215332.118728-12-david@redhat.com>
In-Reply-To: <20240202215332.118728-1-david@redhat.com>

Let's speed up GPA to memory region / virtual address lookup. Store the
memory regions ordered by guest physical addresses, and use binary
search for address translation, as well as when adding/removing memory
regions.

Most importantly, this will speed up GPA->VA address translation when we
have many memslots.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
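A side note for reviewers: below is a minimal, self-contained sketch of the
lookup technique in isolation. The Region type, the find_region() helper, and
the test data are invented for this note; the real code operates on
VuDevRegion and dev->regions.

/*
 * Sketch only: Region, find_region() and the test data are invented
 * for this illustration.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t gpa;   /* first guest physical address of the region */
    uint64_t size;  /* region size in bytes */
} Region;

/*
 * Regions are sorted by gpa and never overlap, so at most one region
 * can contain guest_addr.
 */
static Region *find_region(Region *regions, unsigned int nregions,
                           uint64_t guest_addr)
{
    int low = 0;
    int high = (int)nregions - 1;

    while (low <= high) {
        int mid = low + (high - low) / 2;
        Region *cur = &regions[mid];

        if (guest_addr >= cur->gpa && guest_addr < cur->gpa + cur->size) {
            return cur;
        }
        if (guest_addr >= cur->gpa + cur->size) {
            low = mid + 1;      /* guest_addr lies above this region */
        } else {
            high = mid - 1;     /* guest_addr lies below this region */
        }
    }
    return NULL;                /* no region contains guest_addr */
}

int main(void)
{
    Region regions[] = {
        { 0x0000,   0x1000 },
        { 0x10000,  0x8000 },
        { 0x100000, 0x40000 },
    };
    Region *r = find_region(regions, 3, 0x12345);

    printf("0x12345 -> region at gpa 0x%" PRIx64 "\n",
           r ? r->gpa : (uint64_t)0);
    return 0;
}

Since the regions are sorted and non-overlapping, each iteration discards
half of the remaining candidates, so a lookup needs O(log n) comparisons
instead of the previous O(n) scan.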
subprojects/libvhost-user/libvhost-user.c | 49 +++++++++++++++++++++--
1 file changed, 45 insertions(+), 4 deletions(-)
diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index d036b54ed0..75e47b7bb3 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -199,19 +199,30 @@ vu_panic(VuDev *dev, const char *msg, ...)
 static VuDevRegion *
 vu_gpa_to_mem_region(VuDev *dev, uint64_t guest_addr)
 {
-    unsigned int i;
+    int low = 0;
+    int high = dev->nregions - 1;
 
     /*
      * Memory regions cannot overlap in guest physical address space. Each
      * GPA belongs to exactly one memory region, so there can only be one
      * match.
+     *
+     * We store our memory regions ordered by GPA and can simply perform a
+     * binary search.
      */
-    for (i = 0; i < dev->nregions; i++) {
-        VuDevRegion *cur = &dev->regions[i];
+    while (low <= high) {
+        unsigned int mid = low + (high - low) / 2;
+        VuDevRegion *cur = &dev->regions[mid];
 
         if (guest_addr >= cur->gpa && guest_addr < cur->gpa + cur->size) {
             return cur;
         }
+        if (guest_addr >= cur->gpa + cur->size) {
+            low = mid + 1;
+        }
+        if (guest_addr < cur->gpa) {
+            high = mid - 1;
+        }
     }
     return NULL;
 }
@@ -273,9 +284,14 @@ vu_remove_all_mem_regs(VuDev *dev)
 static void
 _vu_add_mem_reg(VuDev *dev, VhostUserMemoryRegion *msg_region, int fd)
 {
+    const uint64_t start_gpa = msg_region->guest_phys_addr;
+    const uint64_t end_gpa = start_gpa + msg_region->memory_size;
     int prot = PROT_READ | PROT_WRITE;
     VuDevRegion *r;
     void *mmap_addr;
+    int low = 0;
+    int high = dev->nregions - 1;
+    unsigned int idx;
 
     DPRINT("Adding region %d\n", dev->nregions);
     DPRINT("    guest_phys_addr: 0x%016"PRIx64"\n",
@@ -295,6 +311,29 @@ _vu_add_mem_reg(VuDev *dev, VhostUserMemoryRegion *msg_region, int fd)
         prot = PROT_NONE;
     }
 
+    /*
+     * We will add memory regions into the array sorted by GPA. Perform a
+     * binary search to locate the insertion point: it will be at the low
+     * index.
+     */
+    while (low <= high) {
+        unsigned int mid = low + (high - low) / 2;
+        VuDevRegion *cur = &dev->regions[mid];
+
+        /* Overlap of GPA addresses. */
+        if (start_gpa < cur->gpa + cur->size && cur->gpa < end_gpa) {
+            vu_panic(dev, "regions with overlapping guest physical addresses");
+            return;
+        }
+        if (start_gpa >= cur->gpa + cur->size) {
+            low = mid + 1;
+        }
+        if (start_gpa < cur->gpa) {
+            high = mid - 1;
+        }
+    }
+    idx = low;
+
     /*
      * We don't use offset argument of mmap() since the mapped address has
      * to be page aligned, and we use huge pages.
@@ -308,7 +347,9 @@ _vu_add_mem_reg(VuDev *dev, VhostUserMemoryRegion *msg_region, int fd)
     DPRINT("    mmap_addr:       0x%016"PRIx64"\n",
            (uint64_t)(uintptr_t)mmap_addr);
 
-    r = &dev->regions[dev->nregions];
+    /* Shift all affected entries by 1 to open a hole at idx. */
+    r = &dev->regions[idx];
+    memmove(r + 1, r, sizeof(VuDevRegion) * (dev->nregions - idx));
     r->gpa = msg_region->guest_phys_addr;
     r->size = msg_region->memory_size;
     r->qva = msg_region->userspace_addr;
--
2.43.0
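For reference, here is the insertion-point search plus memmove() pattern from
_vu_add_mem_reg() in isolation: a minimal sketch assuming a fixed-capacity
array and a simplified region type. insert_region() and Region are invented
for this note; the real code additionally mmap()s the region and fills in
more fields.

/*
 * Sketch only: Region and insert_region() are invented for this note.
 * The caller must guarantee that the array has room for one more entry.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t gpa;
    uint64_t size;
} Region;

/*
 * Insert a region while keeping the array sorted by gpa. Returns false
 * if the new region overlaps an existing one.
 */
bool insert_region(Region *regions, unsigned int *nregions,
                   uint64_t gpa, uint64_t size)
{
    const uint64_t end_gpa = gpa + size;
    int low = 0;
    int high = (int)*nregions - 1;
    unsigned int idx;

    while (low <= high) {
        int mid = low + (high - low) / 2;
        Region *cur = &regions[mid];

        /* Two ranges overlap iff each starts below the other's end. */
        if (gpa < cur->gpa + cur->size && cur->gpa < end_gpa) {
            return false;
        }
        if (gpa >= cur->gpa + cur->size) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    idx = low;  /* every entry below idx has a smaller gpa */

    /* Shift the tail up by one slot to open a hole at idx. */
    memmove(&regions[idx + 1], &regions[idx],
            sizeof(regions[0]) * (*nregions - idx));
    regions[idx] = (Region){ .gpa = gpa, .size = size };
    (*nregions)++;
    return true;
}

Keeping the array sorted makes adding a region O(n) in the worst case due to
the memmove(), but the GPA->VA translations that dominate with many memslots
drop from O(n) to O(log n).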