From: Alex Williamson <alex.williamson@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	qemu-devel@nongnu.org, Peter Xu <peterx@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>
Subject: Re: [PATCH v5 06/11] vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr
Date: Tue, 16 Feb 2021 11:34:06 -0700
Message-ID: <20210216113406.470665a5@omen.home.shazbot.org>
In-Reply-To: <20210121110540.33704-7-david@redhat.com>

On Thu, 21 Jan 2021 12:05:35 +0100
David Hildenbrand <david@redhat.com> wrote:

> Although RamDiscardMgr can handle running into the maximum number of
> DMA mappings by propagating errors when creating a DMA mapping, we want
> to sanity check and warn the user early that there is a theoretical setup
> issue and that virtio-mem might not be able to provide as much memory
> towards a VM as desired.
> 
> As suggested by Alex, let's use the number of KVM memory slots to guess
> how many other mappings we might see over time.
> 
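
To put rough numbers on that estimate (hypothetical values, not taken
from the patch): a single 256 GiB virtio-mem device with a 2 MiB
block-size can contribute up to

    256 GiB / 2 MiB               = 131072 mappings
    + (509 KVM memslots - 1 vrdl) =    508 mappings
                                    ------
                                    131580 mappings

which is well above the 65535 entries the type1 backend allows per
container by default (dma_entry_limit, if I have the default right),
so the warning added below fires. Bumping the block-size to 8 MiB
drops the device's share to 32768 and brings the total back under the
limit.
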
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Igor Mammedov <imammedo@redhat.com>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Auger Eric <eric.auger@redhat.com>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: teawater <teawaterz@linux.alibaba.com>
> Cc: Marek Kedzierski <mkedzier@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/vfio/common.c | 43 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 43 insertions(+)


Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>



> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 78be813a53..166ec6ec62 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -761,6 +761,49 @@ static void vfio_register_ram_discard_notifier(VFIOContainer *container,
>                                vfio_ram_discard_notify_discard_all);
>      rdmc->register_listener(rdm, section->mr, &vrdl->listener);
>      QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
> +
> +    /*
> +     * Sanity-check if we have a theoretically problematic setup where we could
> +     * exceed the maximum number of possible DMA mappings over time. We assume
> +     * that each mapped section in the same address space as a RamDiscardMgr
> +     * section consumes exactly one DMA mapping, with the exception of
> +     * RamDiscardMgr sections; i.e., we don't expect to have gIOMMU sections in
> +     * the same address space as RamDiscardMgr sections.
> +     *
> +     * We assume that each section in the address space consumes one memslot.
> +     * We take the number of KVM memory slots as a best guess for the maximum
> +     * number of sections in the address space we could have over time,
> +     * also consuming DMA mappings.
> +     */
> +    if (container->dma_max_mappings) {
> +        unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
> +
> +#ifdef CONFIG_KVM
> +        if (kvm_enabled()) {
> +            max_memslots = kvm_get_max_memslots();
> +        }
> +#endif
> +
> +        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
> +            hwaddr start, end;
> +
> +            start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
> +                                    vrdl->granularity);
> +            end = ROUND_UP(vrdl->offset_within_address_space + vrdl->size,
> +                           vrdl->granularity);
> +            vrdl_mappings += (end - start) / vrdl->granularity;
> +            vrdl_count++;
> +        }
> +
> +        if (vrdl_mappings + max_memslots - vrdl_count >
> +            container->dma_max_mappings) {
> +            warn_report("%s: possibly running out of DMA mappings. E.g., try"
> +                        " increasing the 'block-size' of virtio-mem devices."
> +                        " Maximum possible DMA mappings: %d, Maximum possible"
> +                        " memslots: %d", __func__, container->dma_max_mappings,
> +                        max_memslots);
> +        }
> +    }
>  }
>  
>  static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
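
For anyone following along, the container->dma_max_mappings value this
checks against comes from the DMA availability capability that the
type1 IOMMU backend reports (since Linux 5.10, if I recall) and that
patch 05/11 of this series queries and stores. A minimal standalone
userspace sketch of reading it -- error handling trimmed; the function
name and flow here are mine, not QEMU's:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Return the number of DMA mappings still available in a type1
 * container, or -1 if the kernel doesn't expose the capability.
 */
static int64_t query_dma_avail(int container_fd)
{
    struct vfio_iommu_type1_info *info;
    uint32_t argsz = sizeof(*info);
    int64_t avail = -1;

    info = calloc(1, argsz);
    if (!info) {
        return -1;
    }
    info->argsz = argsz;

    /* First call reports how much space the capability chain needs. */
    if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
        goto out;
    }
    if (info->argsz > argsz) {
        void *tmp;

        argsz = info->argsz;
        tmp = realloc(info, argsz);
        if (!tmp) {
            goto out;
        }
        info = tmp;
        info->argsz = argsz;
        if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info)) {
            goto out;
        }
    }

    if (info->flags & VFIO_IOMMU_INFO_CAPS) {
        /* Walk the capability chain looking for the DMA avail cap. */
        uint32_t off = info->cap_offset;

        while (off) {
            struct vfio_info_cap_header *hdr = (void *)((char *)info + off);

            if (hdr->id == VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) {
                avail = ((struct vfio_iommu_type1_info_dma_avail *)hdr)->avail;
                break;
            }
            off = hdr->next;
        }
    }
out:
    free(info);
    return avail;
}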



