From: Chenyi Qiang <chenyi.qiang@intel.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>,
"David Hildenbrand" <david@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Michael Roth" <michael.roth@amd.com>
Cc: Chenyi Qiang <chenyi.qiang@intel.com>,
qemu-devel@nongnu.org, kvm@vger.kernel.org,
Williams Dan J <dan.j.williams@intel.com>,
Edgecombe Rick P <rick.p.edgecombe@intel.com>,
Wang Wei W <wei.w.wang@intel.com>,
Peng Chao P <chao.p.peng@intel.com>,
Gao Chao <chao.gao@intel.com>, Wu Hao <hao.wu@intel.com>,
Xu Yilun <yilun.xu@intel.com>
Subject: [RFC PATCH 2/6] guest_memfd: Introduce a helper to notify the shared/private state change
Date: Thu, 25 Jul 2024 03:21:11 -0400
Message-ID: <20240725072118.358923-3-chenyi.qiang@intel.com>
In-Reply-To: <20240725072118.358923-1-chenyi.qiang@intel.com>

Introduce a helper function within RamDiscardManager to notify all
registered RamDiscardListeners, including the VFIO listener, about
shared/private memory conversion events in guest_memfd. The existing
VFIO listener can then dynamically DMA map or unmap the shared pages
according to the conversion type:
- For conversions from shared to private, the VFIO listener unmaps the
  shared pages from the IOMMU.
- For conversions from private to shared, it maps the now-shared pages
  into the IOMMU.
Additionally, some special conversion requests need handling:
- When a conversion request is made for a range already in the desired
  state (either private or shared), the helper simply returns success.
- For requests covering a range only partially in the desired state,
  only the segments not yet in that state are converted, so the entire
  range ends up complying with the request.
- If a conversion request is declined by a listener, e.g. when VFIO
  fails in notify_populate(), the helper rolls back the changes already
  applied, keeping the state consistent.
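To illustrate the bookkeeping behind that dispatch, here is a minimal,
self-contained sketch (plain C with hypothetical names, not the QEMU
bitmap API) of the per-block discard-bitmap convention the series uses:
one bit per block, set = discarded (private), clear = populated (shared).
A request whose whole range is already in the desired state can then be
detected with a single scan:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 4096UL      /* toy block size, stands in for gmm->block_size */

/* Toy 64-block region: bit i set => block i is discarded (private). */
static uint64_t discard_bitmap;

/* Hypothetical stand-in for find_next_bit(): any set bit in [first, last]? */
static bool range_has_set_bit(uint64_t bm, unsigned long first, unsigned long last)
{
    for (unsigned long i = first; i <= last; i++) {
        if (bm & (1ULL << i)) {
            return true;
        }
    }
    return false;
}

/* Fully populated (shared) iff no discard bit is set inside the range. */
static bool is_range_populated(uint64_t offset, uint64_t size)
{
    unsigned long first = offset / BLOCK_SIZE;
    unsigned long last = first + size / BLOCK_SIZE - 1;

    return !range_has_set_bit(discard_bitmap, first, last);
}

/* Fully discarded (private) iff every bit inside the range is set. */
static bool is_range_discarded(uint64_t offset, uint64_t size)
{
    unsigned long first = offset / BLOCK_SIZE;
    unsigned long last = first + size / BLOCK_SIZE - 1;

    for (unsigned long i = first; i <= last; i++) {
        if (!(discard_bitmap & (1ULL << i))) {
            return false;
        }
    }
    return true;
}
```

A private-to-shared request only needs populate notifications when
is_range_populated() is false; the symmetric check applies in the other
direction.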
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
---
include/sysemu/guest-memfd-manager.h | 3 +
system/guest-memfd-manager.c | 141 +++++++++++++++++++++++++++
2 files changed, 144 insertions(+)
diff --git a/include/sysemu/guest-memfd-manager.h b/include/sysemu/guest-memfd-manager.h
index ab8c2ba362..1cce4cde43 100644
--- a/include/sysemu/guest-memfd-manager.h
+++ b/include/sysemu/guest-memfd-manager.h
@@ -43,4 +43,7 @@ struct GuestMemfdManagerClass {
     void (*realize)(Object *gmm, MemoryRegion *mr, uint64_t region_size);
 };
 
+int guest_memfd_state_change(GuestMemfdManager *gmm, uint64_t offset, uint64_t size,
+                             bool shared_to_private);
+
 #endif
diff --git a/system/guest-memfd-manager.c b/system/guest-memfd-manager.c
index 7b90f26859..deb43db90b 100644
--- a/system/guest-memfd-manager.c
+++ b/system/guest-memfd-manager.c
@@ -243,6 +243,147 @@ static void guest_memfd_rdm_replay_discarded(const RamDiscardManager *rdm,
                                          guest_memfd_rdm_replay_discarded_cb);
 }
 
+static bool guest_memfd_is_valid_range(GuestMemfdManager *gmm,
+                                       uint64_t offset, uint64_t size)
+{
+    MemoryRegion *mr = gmm->mr;
+
+    g_assert(mr);
+
+    uint64_t region_size = memory_region_size(mr);
+    if (!QEMU_IS_ALIGNED(offset, gmm->block_size) || !QEMU_IS_ALIGNED(size, gmm->block_size)) {
+        return false;
+    }
+    if (offset + size < offset || !size) {
+        return false;
+    }
+    if (offset >= region_size || offset + size > region_size) {
+        return false;
+    }
+    return true;
+}
+
+static void guest_memfd_notify_discard(GuestMemfdManager *gmm,
+                                       uint64_t offset, uint64_t size)
+{
+    RamDiscardListener *rdl;
+
+    QLIST_FOREACH(rdl, &gmm->rdl_list, next) {
+        MemoryRegionSection tmp = *rdl->section;
+
+        if (!guest_memfd_rdm_intersect_memory_section(&tmp, offset, size)) {
+            continue;
+        }
+
+        guest_memfd_for_each_populated_range(gmm, &tmp, rdl,
+                                             guest_memfd_notify_discard_cb);
+    }
+}
+
+
+static int guest_memfd_notify_populate(GuestMemfdManager *gmm,
+                                       uint64_t offset, uint64_t size)
+{
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    QLIST_FOREACH(rdl, &gmm->rdl_list, next) {
+        MemoryRegionSection tmp = *rdl->section;
+
+        if (!guest_memfd_rdm_intersect_memory_section(&tmp, offset, size)) {
+            continue;
+        }
+
+        ret = guest_memfd_for_each_discarded_range(gmm, &tmp, rdl,
+                                                   guest_memfd_notify_populate_cb);
+        if (ret) {
+            break;
+        }
+    }
+
+    if (ret) {
+        /* Roll back the listeners that were already notified. */
+        QLIST_FOREACH(rdl2, &gmm->rdl_list, next) {
+            MemoryRegionSection tmp = *rdl2->section;
+
+            if (rdl2 == rdl) {
+                break;
+            }
+            if (!guest_memfd_rdm_intersect_memory_section(&tmp, offset, size)) {
+                continue;
+            }
+
+            guest_memfd_for_each_discarded_range(gmm, &tmp, rdl2,
+                                                 guest_memfd_notify_discard_cb);
+        }
+    }
+    return ret;
+}
+
+static bool guest_memfd_is_range_populated(GuestMemfdManager *gmm,
+                                           uint64_t offset, uint64_t size)
+{
+    const unsigned long first_bit = offset / gmm->block_size;
+    const unsigned long last_bit = first_bit + (size / gmm->block_size) - 1;
+    unsigned long found_bit;
+
+    /* We fake a shorter bitmap to avoid searching too far. */
+    found_bit = find_next_bit(gmm->discard_bitmap, last_bit + 1, first_bit);
+    return found_bit > last_bit;
+}
+
+static bool guest_memfd_is_range_discarded(GuestMemfdManager *gmm,
+                                           uint64_t offset, uint64_t size)
+{
+    const unsigned long first_bit = offset / gmm->block_size;
+    const unsigned long last_bit = first_bit + (size / gmm->block_size) - 1;
+    unsigned long found_bit;
+
+    /* We fake a shorter bitmap to avoid searching too far. */
+    found_bit = find_next_zero_bit(gmm->discard_bitmap, last_bit + 1, first_bit);
+    return found_bit > last_bit;
+}
+
+int guest_memfd_state_change(GuestMemfdManager *gmm, uint64_t offset, uint64_t size,
+                             bool shared_to_private)
+{
+    int ret = 0;
+
+    if (!guest_memfd_is_valid_range(gmm, offset, size)) {
+        error_report("%s, invalid range: offset 0x%" PRIx64 ", size 0x%" PRIx64,
+                     __func__, offset, size);
+        return -1;
+    }
+
+    if ((shared_to_private && guest_memfd_is_range_discarded(gmm, offset, size)) ||
+        (!shared_to_private && guest_memfd_is_range_populated(gmm, offset, size))) {
+        return 0;
+    }
+
+    if (shared_to_private) {
+        guest_memfd_notify_discard(gmm, offset, size);
+    } else {
+        ret = guest_memfd_notify_populate(gmm, offset, size);
+    }
+
+    if (!ret) {
+        unsigned long first_bit = offset / gmm->block_size;
+        unsigned long nbits = size / gmm->block_size;
+
+        g_assert((first_bit + nbits) <= gmm->discard_bitmap_size);
+
+        if (shared_to_private) {
+            bitmap_set(gmm->discard_bitmap, first_bit, nbits);
+        } else {
+            bitmap_clear(gmm->discard_bitmap, first_bit, nbits);
+        }
+
+        return 0;
+    }
+
+    return ret;
+}
+
 static void guest_memfd_manager_realize(Object *obj, MemoryRegion *mr,
                                         uint64_t region_size)
 {
--
2.43.5