qemu-devel.nongnu.org archive mirror
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Igor Mammedov" <imammedo@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Tiwei Bie" <tiwei.bie@intel.com>
Subject: Re: [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking
Date: Tue, 23 May 2023 11:34:57 -0400	[thread overview]
Message-ID: <ZGzdIfLy+7rSh6fW@x1n> (raw)
In-Reply-To: <20230503172121.733642-2-david@redhat.com>

On Wed, May 03, 2023 at 07:21:19PM +0200, David Hildenbrand wrote:
> Having multiple vhost devices, some filtering out fd-less memslots and
> some not, can mess up the "used_memslot" accounting. Consequently our
> "free memslot" checks become unreliable and we might run out of free
> memslots at runtime later.
> 
> An example sequence which can trigger a potential issue that involves
> different vhost backends (vhost-kernel and vhost-user) and hotplugged
> memory devices can be found at [1].
> 
> Let's make the filtering mechanism less generic and distinguish between
> backends that support private memslots (without a fd) and ones that only
> support shared memslots (with a fd). Track the used_memslots for both
> cases separately and use the corresponding value when required.
> 
> Note: Most probably we should filter out MAP_PRIVATE fd-based RAM regions
> (for example, via memory-backend-memfd,...,shared=off or as default with
>  memory-backend-file) as well. When not using MAP_SHARED, it might not work
> as expected. Add a TODO for now.
> 
> [1] https://lkml.kernel.org/r/fad9136f-08d3-3fd9-71a1-502069c000cf@redhat.com
> 
> Fixes: 988a27754bbb ("vhost: allow backends to filter memory sections")
> Cc: Tiwei Bie <tiwei.bie@intel.com>
> Acked-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/virtio/vhost-user.c            |  7 ++--
>  hw/virtio/vhost.c                 | 56 ++++++++++++++++++++++++++-----
>  include/hw/virtio/vhost-backend.h |  5 ++-
>  3 files changed, 52 insertions(+), 16 deletions(-)
> 
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index e5285df4ba..0c3e2702b1 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -2453,10 +2453,9 @@ vhost_user_crypto_close_session(struct vhost_dev *dev, uint64_t session_id)
>      return 0;
>  }
>  
> -static bool vhost_user_mem_section_filter(struct vhost_dev *dev,
> -                                          MemoryRegionSection *section)
> +static bool vhost_user_no_private_memslots(struct vhost_dev *dev)
>  {
> -    return memory_region_get_fd(section->mr) >= 0;
> +    return true;
>  }
>  
>  static int vhost_user_get_inflight_fd(struct vhost_dev *dev,
> @@ -2686,6 +2685,7 @@ const VhostOps user_ops = {
>          .vhost_backend_init = vhost_user_backend_init,
>          .vhost_backend_cleanup = vhost_user_backend_cleanup,
>          .vhost_backend_memslots_limit = vhost_user_memslots_limit,
> +        .vhost_backend_no_private_memslots = vhost_user_no_private_memslots,
>          .vhost_set_log_base = vhost_user_set_log_base,
>          .vhost_set_mem_table = vhost_user_set_mem_table,
>          .vhost_set_vring_addr = vhost_user_set_vring_addr,
> @@ -2712,7 +2712,6 @@ const VhostOps user_ops = {
>          .vhost_set_config = vhost_user_set_config,
>          .vhost_crypto_create_session = vhost_user_crypto_create_session,
>          .vhost_crypto_close_session = vhost_user_crypto_close_session,
> -        .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
>          .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
>          .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
>          .vhost_dev_start = vhost_user_dev_start,
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 746d130c74..4fe08c809f 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -46,20 +46,33 @@
>  static struct vhost_log *vhost_log;
>  static struct vhost_log *vhost_log_shm;
>  
> +/* Memslots used by backends that support private memslots (without an fd). */
>  static unsigned int used_memslots;
> +
> +/* Memslots used by backends that only support shared memslots (with an fd). */
> +static unsigned int used_shared_memslots;

It's just that these vars are updated multiple times when more than one
vhost device is present, so accessing these fields is still a bit
confusing - I think they're implicitly protected by the BQL, so it should
always be safe.

Since we already have the shared/private handling, maybe for the long term
it'll be nicer to keep such info per-device, e.g. in vhost_dev, so we
could also drop vhost_backend_no_private_memslots().  Anyway, the code is
internal, so that can be done on top if it's worthwhile.
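A minimal sketch of that per-device alternative (all names here are illustrative, not the actual QEMU API):

```c
#include <stdbool.h>

/* Hypothetical sketch: each device caches whether its backend supports
 * private (fd-less) memslots and tracks its own slot usage, so the two
 * global counters and the vhost_backend_no_private_memslots() callback
 * could eventually go away. */
struct vhost_dev_sketch {
    bool supports_private_memslots; /* set once at backend init time */
    unsigned int used_memslots;     /* memslots this device registered */
};

/* Free slots for one device: its backend's limit minus its own usage. */
static unsigned int vhost_dev_free_slots(const struct vhost_dev_sketch *dev,
                                         unsigned int backend_limit)
{
    return backend_limit - dev->used_memslots;
}
```

With that, vhost_has_free_slot() could just take the minimum of
vhost_dev_free_slots() over all registered devices.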

> +
>  static QLIST_HEAD(, vhost_dev) vhost_devices =
>      QLIST_HEAD_INITIALIZER(vhost_devices);
>  
>  bool vhost_has_free_slot(void)
>  {
> -    unsigned int slots_limit = ~0U;
> +    unsigned int free = UINT_MAX;
>      struct vhost_dev *hdev;
>  
>      QLIST_FOREACH(hdev, &vhost_devices, entry) {
>          unsigned int r = hdev->vhost_ops->vhost_backend_memslots_limit(hdev);
> -        slots_limit = MIN(slots_limit, r);
> +        unsigned int cur_free;
> +
> +        if (hdev->vhost_ops->vhost_backend_no_private_memslots &&
> +            hdev->vhost_ops->vhost_backend_no_private_memslots(hdev)) {
> +            cur_free = r - used_shared_memslots;
> +        } else {
> +            cur_free = r - used_memslots;
> +        }
> +        free = MIN(free, cur_free);
>      }
> -    return slots_limit > used_memslots;
> +    return free > 1;

Should this be "free > 0" instead?

Trivial, but it may still matter when a device has exactly one memslot
left: "free > 1" would report no free slot even though one remains.
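To make the off-by-one concrete, here is an illustrative helper (not the actual QEMU code) mirroring the per-device check in the hunk above:

```c
/* Illustrative: does a backend with the given memslot limit still have
 * a free slot?  This is the per-device computation that
 * vhost_has_free_slot() performs in the patch above. */
static int has_free_slot(unsigned int limit, unsigned int used)
{
    unsigned int n_free = limit - used;

    /* "n_free > 1" would wrongly report no slot when exactly one
     * remains (e.g. limit == 8, used == 7). */
    return n_free > 0;
}
```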

Other than this the patch looks all good here.

Thanks,

-- 
Peter Xu




Thread overview: 9+ messages
2023-05-03 17:21 [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
2023-05-23 15:34   ` Peter Xu [this message]
2023-05-23 15:42     ` David Hildenbrand
2023-05-03 17:21 ` [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback David Hildenbrand
2023-05-23 15:40   ` Peter Xu
2023-05-03 17:21 ` [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation David Hildenbrand
2023-05-23 15:42   ` Peter Xu
2023-05-23 14:25 ` [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand
