From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
Wei Yang <richard.weiyang@linux.alibaba.com>,
David Hildenbrand <david@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Peter Xu <peterx@redhat.com>,
Marek Kedzierski <mkedzier@redhat.com>,
Auger Eric <eric.auger@redhat.com>,
Alex Williamson <alex.williamson@redhat.com>,
teawater <teawaterz@linux.alibaba.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Igor Mammedov <imammedo@redhat.com>
Subject: [PATCH v4 06/11] vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr
Date: Thu, 7 Jan 2021 14:34:18 +0100
Message-ID: <20210107133423.44964-7-david@redhat.com>
In-Reply-To: <20210107133423.44964-1-david@redhat.com>
Although RamDiscardMgr can handle running into the maximum number of
DMA mappings by propagating errors when creating a DMA mapping, we want
to sanity-check the setup and warn the user early that it is
theoretically problematic and that virtio-mem might not be able to
provide as much memory to the VM as desired.
As suggested by Alex, let's use the number of KVM memory slots to guess
how many other mappings we might see over time.
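
To make the heuristic concrete, here is a minimal standalone sketch of
the same calculation, using made-up numbers (a 1 TiB virtio-mem region
with a 2 MiB block size, a limit of 65535 DMA mappings, 509 remaining
memslots, and a single RamDiscardMgr section are all assumptions for
illustration, not values taken from this patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* All numbers below are illustrative assumptions. */
        uint64_t region_size  = 1ULL << 40; /* 1 TiB virtio-mem region */
        uint64_t granularity  = 2ULL << 20; /* 2 MiB block size */
        unsigned dma_max      = 65535;      /* assumed DMA mapping limit */
        unsigned max_memslots = 509;        /* stand-in for kvm_get_max_memslots() */
        unsigned vrdl_count   = 1;          /* one RamDiscardMgr section */

        /* Worst case: every block of the region is plugged and mapped alone. */
        uint64_t vrdl_mappings = region_size / granularity; /* 524288 */

        /* Each other section is assumed to consume one mapping over time. */
        if (vrdl_mappings + max_memslots - vrdl_count > dma_max) {
            printf("would warn: %" PRIu64 " potential mappings > %u allowed\n",
                   vrdl_mappings + max_memslots - vrdl_count, dma_max);
        }
        return 0;
    }

With a 1 GiB block size instead, the same region would contribute only
1024 worst-case mappings and the check would pass.
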
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
hw/vfio/common.c | 43 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 1babb6bb99..bc20f738ce 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -758,6 +758,49 @@ static void vfio_register_ram_discard_notifier(VFIOContainer *container,
                                    vfio_ram_discard_notify_discard_all);
     rdmc->register_listener(rdm, section->mr, &vrdl->listener);
     QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
+
+    /*
+     * Sanity-check if we have a theoretically problematic setup where we could
+     * exceed the maximum number of possible DMA mappings over time. We assume
+     * that each mapped section in the same address space as a RamDiscardMgr
+     * section consumes exactly one DMA mapping, with the exception of
+     * RamDiscardMgr sections; i.e., we don't expect to have gIOMMU sections in
+     * the same address space as RamDiscardMgr sections.
+     *
+     * We assume that each section in the address space consumes one memslot.
+     * We take the number of KVM memory slots as a best guess for the maximum
+     * number of sections in the address space we could have over time,
+     * also consuming DMA mappings.
+     */
+    if (container->dma_max_mappings) {
+        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
+
+#ifdef CONFIG_KVM
+        if (kvm_enabled()) {
+            max_memslots = kvm_get_max_memslots();
+        }
+#endif
+
+        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+            hwaddr start, end;
+
+            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
+                                    vrdl->granularity);
+            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
+                           vrdl->granularity);
+            vrdl_mappings += (end - start) / vrdl->granularity;
+            vrdl_count++;
+        }
+
+        if (vrdl_mappings + max_memslots - vrdl_count >
+            container->dma_max_mappings) {
+            warn_report("%s: possibly running out of DMA mappings. E.g., try"
+                        " increasing the 'block-size' of virtio-mem devices."
+                        " Maximum possible DMA mappings: %d, Maximum possible"
+                        " memslots: %d", __func__, container->dma_max_mappings,
+                        max_memslots);
+        }
+    }
 }
 
 static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
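
If the warning fires in practice, the 'block-size' of a virtio-mem
device can be raised when the VM is started. A hypothetical invocation
(all IDs and sizes are made up for illustration) could look like:

    qemu-system-x86_64 ... \
        -object memory-backend-ram,id=vmem0,size=16G \
        -device virtio-mem-pci,id=vm0,memdev=vmem0,block-size=1G,requested-size=8G

With block-size=1G, that 16 GiB region contributes at most 16
worst-case DMA mappings, compared to 8192 at a 2 MiB granularity.
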
--
2.29.2
Thread overview: 21+ messages
2021-01-07 13:34 [PATCH v4 00/11] virtio-mem: vfio support David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 01/11] memory: Introduce RamDiscardMgr for RAM memory regions David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 02/11] virtio-mem: Factor out traversing unplugged ranges David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 03/11] virtio-mem: Implement RamDiscardMgr interface David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 04/11] vfio: Support for RamDiscardMgr in the !vIOMMU case David Hildenbrand
2021-01-13 23:27 ` Alex Williamson
2021-01-14 15:54 ` David Hildenbrand
2021-01-14 15:57 ` David Hildenbrand
2021-01-15 10:27 ` David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 05/11] vfio: Query and store the maximum number of possible DMA mappings David Hildenbrand
2021-01-13 23:30 ` Alex Williamson
2021-01-07 13:34 ` David Hildenbrand [this message]
2021-01-13 23:34 ` [PATCH v4 06/11] vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr Alex Williamson
2021-01-14 15:59 ` David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 07/11] vfio: Support for RamDiscardMgr in the vIOMMU case David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 08/11] softmmu/physmem: Don't use atomic operations in ram_block_discard_(disable|require) David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 09/11] softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard types David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 10/11] virtio-mem: Require only coordinated discards David Hildenbrand
2021-01-07 13:34 ` [PATCH v4 11/11] vfio: Disable only uncoordinated discards David Hildenbrand
2021-01-13 23:57 ` Alex Williamson
2021-01-14 16:19 ` David Hildenbrand