From: Stefan Hajnoczi <stefanha@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: "Albert Esteve" <aesteve@redhat.com>,
qemu-devel@nongnu.org, slp@redhat.com, stevensd@chromium.org,
"Alex Bennée" <alex.bennee@linaro.org>,
"Stefano Garzarella" <sgarzare@redhat.com>,
hi@alyssa.is, mst@redhat.com, jasowang@redhat.com
Subject: Re: [PATCH v4 0/9] vhost-user: Add SHMEM_MAP/UNMAP requests
Date: Thu, 27 Feb 2025 15:10:48 +0800
Message-ID: <20250227071048.GD85709@fedora>
In-Reply-To: <ba4c6655-4f69-4001-84fa-2ebfe87c0868@redhat.com>
On Wed, Feb 26, 2025 at 10:53:01AM +0100, David Hildenbrand wrote:
> > > As discussed offline, maybe one would want the option to enable the
> > > alternative mode, where such updates (in the SHM region) are not sent to
> > > vhost-user devices. In such a configuration, the MEM_READ / MEM_WRITE
> > > messages would be unavoidable.
> >
> > As I recall, we initially discussed two options: having update messages
> > sent to all devices (which was deemed potentially racy), or using
> > MEM_READ / MEM_WRITE messages. With this version of the patch there
> > is no way to avoid the mem_table update messages, which brings me
> > back to my point in the previous message: it may make sense to continue
> > with this patch without MEM_READ/WRITE support, and leave both that and
> > the option to make mem_table updates optional for a follow-up patch?
>
> IMHO that would work for me.
I'm happy with dropping MEM_READ/WRITE. If the memslot limit becomes a
problem, it will be necessary to think about handling things
differently, but there are many possible uses of VIRTIO Shared Memory
Regions that will not hit the limit, and I don't see a need to hold them
back.
Stefan
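
For context, the SHMEM_MAP request at the center of this discussion
carries a payload along the lines of the sketch below. This is an
illustration based on the v4 proposal; the struct name and field layout
are assumptions and may change in later revisions:

    /* Sketch of the SHMEM_MAP payload, assuming the layout proposed
     * in the v4 series; not necessarily the final wire format. */
    #include <stdint.h>

    typedef struct VhostUserMMap {
        uint8_t  shmid;       /* ID of the VIRTIO Shared Memory Region */
        uint8_t  padding[7];
        uint64_t fd_offset;   /* offset into the fd sent with the message */
        uint64_t shm_offset;  /* offset within the shared memory region */
        uint64_t len;         /* length of the mapping */
        uint64_t flags;       /* mapping flags, e.g. read/write permissions */
    } VhostUserMMap;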
>
> >
> > >
> > > What comes to mind are vhost-user devices with limited number of
> > > supported memslots.
> > >
> > > No idea how relevant that really is, and how many SHM regions we will
> > > see in practice.
> >
> > In general, from what I see they usually require 1 or 2 regions,
> > except for virtio-scmi, which requires >256.
>
> One or two regions are not a problem. Once we're in the hundreds for a
> single device, it will likely start being a problem, especially when you
> have several such devices.
>
> BUT, it would likely be a problem even with the alternative approach where
> we don't communicate these regions to vhost-user: IIRC, vhost-net in
> the kernel is usually limited to a maximum of 509 memslots by default as
> well. Similarly, older KVM only supports a total of 509 memslots.
>
> See https://virtio-mem.gitlab.io/user-guide/user-guide-qemu.html
> "Compatibility with vhost-net and vhost-user".
>
> In libvhost-user and rust-vmm, we have a similar limit of ~509.
>
>
> Note that for memory devices (DIMMs, virtio-mem), we'll use up to 256
> memslots if all devices support 509 memslots.
> See MEMORY_DEVICES_SOFT_MEMSLOT_LIMIT:
>
> /*
> * Traditionally, KVM/vhost in many setups supported 509 memslots, whereby
> * 253 memslots were "reserved" for boot memory and other devices (such
> * as PCI BARs, which can get mapped dynamically) and 256 memslots were
> * dedicated for DIMMs. These magic numbers worked reliably in the past.
> *
> * Further, using many memslots can negatively affect performance, so setting
> * the soft-limit of memslots used by memory devices to the traditional
> * DIMM limit of 256 sounds reasonable.
> *
> * If we have fewer than 509 memslots, we will instruct memory devices that
> * support automatic memslot selection to use only a single one.
> *
> * Hotplugging vhost devices with at least 509 memslots is not expected to
> * cause problems, not even when devices automatically decide how many
> * memslots to use.
> */
> #define MEMORY_DEVICES_SOFT_MEMSLOT_LIMIT 256
> #define MEMORY_DEVICES_SAFE_MAX_MEMSLOTS 509
>
>
> That changes once some vhost-user devices, combined with boot memory,
> consume more than 253 memslots.
>
> --
> Cheers,
>
> David / dhildenb
>
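
To make the memslot budget above concrete, here is a small illustrative
sketch of the arithmetic. The 509 and 256 constants come from the quoted
QEMU comment; the per-machine consumer counts are hypothetical numbers
chosen for the example:

    #include <stdio.h>

    /* Constants from the quoted QEMU comment. */
    #define SAFE_MAX_MEMSLOTS      509
    #define MEMORY_DEVICE_MEMSLOTS 256

    int main(void)
    {
        /* Hypothetical consumers of the remaining 253 memslots. */
        int boot_memory = 1;   /* assumed: one memslot for boot RAM */
        int pci_bars    = 8;   /* assumed: varies per machine */

        /* Each SHM mapping consumes a memslot, so a device such as
         * virtio-scmi, which needs >256 regions, would exhaust this
         * budget well before the soft limit is reached. */
        int remaining = SAFE_MAX_MEMSLOTS - MEMORY_DEVICE_MEMSLOTS
                        - boot_memory - pci_bars;
        printf("memslots left for SHM mappings: %d\n", remaining);
        return 0;
    }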
Thread overview: 36+ messages
2025-02-17 16:40 [PATCH v4 0/9] vhost-user: Add SHMEM_MAP/UNMAP requests Albert Esteve
2025-02-17 16:40 ` [PATCH v4 1/9] vhost-user: Add VirtIO Shared Memory map request Albert Esteve
2025-02-18 6:43 ` Stefan Hajnoczi
2025-02-18 10:33 ` Albert Esteve
2025-03-06 14:48 ` Albert Esteve
2025-02-18 10:19 ` Stefan Hajnoczi
2025-02-20 10:59 ` Alyssa Ross
2025-02-17 16:40 ` [PATCH v4 2/9] vhost_user.rst: Align VhostUserMsg excerpt members Albert Esteve
2025-02-18 6:44 ` Stefan Hajnoczi
2025-02-17 16:40 ` [PATCH v4 3/9] vhost_user.rst: Add SHMEM_MAP/_UNMAP to spec Albert Esteve
2025-02-17 16:40 ` [PATCH v4 4/9] vhost_user: Add frontend get_shmem_config command Albert Esteve
2025-02-18 10:27 ` Stefan Hajnoczi
2025-02-17 16:40 ` [PATCH v4 5/9] vhost_user.rst: Add GET_SHMEM_CONFIG message Albert Esteve
2025-02-18 10:33 ` Stefan Hajnoczi
2025-02-17 16:40 ` [PATCH v4 6/9] qmp: add shmem feature map Albert Esteve
2025-02-18 10:34 ` Stefan Hajnoczi
2025-02-17 16:40 ` [PATCH v4 7/9] vhost-user-devive: Add shmem BAR Albert Esteve
2025-02-18 10:41 ` Stefan Hajnoczi
2025-02-18 10:55 ` Albert Esteve
2025-02-18 13:25 ` Stefan Hajnoczi
2025-02-18 15:04 ` Albert Esteve
2025-02-17 16:40 ` [PATCH v4 8/9] vhost_user: Add mem_read/write backend requests Albert Esteve
2025-02-18 10:57 ` Stefan Hajnoczi
2025-02-17 16:40 ` [PATCH v4 9/9] vhost_user.rst: Add MEM_READ/WRITE messages Albert Esteve
2025-02-18 11:00 ` Stefan Hajnoczi
2025-02-18 12:50 ` Albert Esteve
2025-02-17 20:01 ` [PATCH v4 0/9] vhost-user: Add SHMEM_MAP/UNMAP requests David Hildenbrand
2025-02-24 8:54 ` Albert Esteve
2025-02-24 9:16 ` David Hildenbrand
2025-02-24 9:35 ` Albert Esteve
2025-02-24 9:49 ` David Hildenbrand
2025-02-24 13:41 ` Albert Esteve
2025-02-24 13:57 ` David Hildenbrand
2025-02-24 15:15 ` Albert Esteve
2025-02-26 9:53 ` David Hildenbrand
2025-02-27 7:10 ` Stefan Hajnoczi [this message]