From: Eduardo Habkost <ehabkost@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>, qemu-devel@nongnu.org
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	"Michael S . Tsirkin" <mst@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Peter Xu <peterx@redhat.com>, Auger Eric <eric.auger@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [PULL v2 10/15] vfio: Sanity check maximum number of DMA mappings with RamDiscardManager
Date: Thu,  8 Jul 2021 15:55:47 -0400
Message-ID: <20210708195552.2730970-11-ehabkost@redhat.com>
In-Reply-To: <20210708195552.2730970-1-ehabkost@redhat.com>

From: David Hildenbrand <david@redhat.com>

Although RamDiscardManager can handle hitting the maximum number of DMA
mappings by propagating an error when a DMA mapping is created, we want
to sanity-check early and warn the user that the setup is theoretically
problematic and that virtio-mem might not be able to provide as much
memory to a VM as desired.

As suggested by Alex, let's use the number of KVM memory slots to guess
how many other mappings we might see over time.
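
For intuition, here is a back-of-the-envelope sketch of the arithmetic
(the device size and limits are hypothetical and not part of this
patch; 65535 is the default dma_entry_limit of the vfio_iommu_type1
kernel module, and 512 is the fallback the code below uses when KVM is
unavailable):

  /* Standalone sketch of the check added below; values are made up. */
  #include <stdio.h>

  int main(void)
  {
      const unsigned long long granularity = 2ULL << 20; /* 2 MiB block size */
      const unsigned long long size = 256ULL << 30;      /* 256 GiB virtio-mem device */
      const unsigned int max_memslots = 512;             /* fallback without KVM */
      const unsigned int dma_max_mappings = 65535;       /* vfio_iommu_type1 default */

      /* One potential mapping per discardable block, plus one per other memslot. */
      unsigned long long worst_case = size / granularity + max_memslots - 1;

      if (worst_case > dma_max_mappings) {
          printf("would warn: %llu possible mappings > %u\n",
                 worst_case, dma_max_mappings);
      }
      return 0;
  }

A 256 GiB virtio-mem device with a 2 MiB block size alone amounts to
131072 potential mappings, which is exactly the kind of setup the new
warning is meant to flag early.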

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-9-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 hw/vfio/common.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 79628d60aed..f8a2fe8441a 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -728,6 +728,54 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
                               vfio_ram_discard_notify_discard, true);
     ram_discard_manager_register_listener(rdm, &vrdl->listener, section);
     QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
+
+    /*
+     * Sanity-check if we have a theoretically problematic setup where we could
+     * exceed the maximum number of possible DMA mappings over time. We assume
+     * that each mapped section in the same address space as a RamDiscardManager
+     * section consumes exactly one DMA mapping, with the exception of
+     * RamDiscardManager sections; i.e., we don't expect to have gIOMMU sections
+     * in the same address space as RamDiscardManager sections.
+     *
+     * We assume that each section in the address space consumes one memslot.
+     * We take the number of KVM memory slots as a best guess for the maximum
+     * number of sections in the address space we could have over time,
+     * also consuming DMA mappings.
+     */
+    if (container->dma_max_mappings) {
+        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
+
+#ifdef CONFIG_KVM
+        if (kvm_enabled()) {
+            max_memslots = kvm_get_max_memslots();
+        }
+#endif
+
+        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+            hwaddr start, end;
+
+            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
+                                    vrdl->granularity);
+            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
+                           vrdl->granularity);
+            vrdl_mappings += (end - start) / vrdl->granularity;
+            vrdl_count++;
+        }
+
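+        /*
+         * max_memslots already counts each RamDiscardManager section as
+         * a single mapping, so subtract vrdl_count and instead add the
+         * per-block mappings such sections can actually consume.
+         */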
+        if (vrdl_mappings + max_memslots - vrdl_count >
+            container->dma_max_mappings) {
+            warn_report("%s: possibly running out of DMA mappings. E.g., try"
+                        " increasing the 'block-size' of virtio-mem devices."
+                        " Maximum possible DMA mappings: %d, Maximum possible"
+                        " memslots: %d", __func__, container->dma_max_mappings,
+                        max_memslots);
+        }
+    }
 }
 
 static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
-- 
2.31.1


