From: Igor Mammedov <imammedo@redhat.com>
To: yuanminghao <yuanmh12@chinatelecom.cn>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
Date: Tue, 25 Feb 2025 14:42:50 +0100
Message-ID: <20250225144250.7f5b7243@imammedo.users.ipa.redhat.com>
In-Reply-To: <20241121060755.164310-1-yuanmh12@chinatelecom.cn>

On Thu, 21 Nov 2024 14:07:55 +0800
yuanminghao <yuanmh12@chinatelecom.cn> wrote:

> The global used_memslots or used_shared_memslots is unexpectedly updated to 0

It shouldn't be 0 in practice, as it comes from the number of RAM regions the VM has.
It's likely a bug somewhere else.

Please describe a way to reproduce the issue.
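If it helps, I'd guess the setup boils down to something like the
following (a sketch only; the socket path, ids and sizes are
placeholders, and vhost-user needs shared guest memory to work at all):

  qemu-system-x86_64 -m 2G \
    -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/tmp/vhost-user.sock,reconnect=1 \
    -netdev vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0

followed by restarting the backend serving /tmp/vhost-user.sock (e.g.
OVS-DPDK), which per your backtrace re-enters vhost_user_start() ->
vhost_net_cleanup(). Watching used_memslots under gdb while doing that
should show whether it really drops to 0. But please post the exact
steps and versions you used.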

> when a vhost device is destroyed. This can occur in scenarios such as live
> detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
>  #0  vhost_commit(listener) at hw/virtio/vhost.c:439
>  #1  listener_del_address_space(as, listener) at memory.c:2777
>  #2  memory_listener_unregister(listener) at memory.c:2823
>  #3  vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
>  #4  vhost_net_cleanup(net) at hw/net/vhost_net.c:402
>  #5  vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
>  #6  net_vhost_user_event(opaque, event) at net/vhost-user.c:281
>  #7  tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
>  #8  tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> 
> So skip updating used_memslots and used_shared_memslots when destroying
> vhost devices; this should work even if all vhost devices are removed.
> 
> Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> ---
>  hw/virtio/vhost.c         | 14 +++++++++-----
>  include/hw/virtio/vhost.h |  1 +
>  2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 6aa72fd434..2258a12066 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
>      dev->mem = g_realloc(dev->mem, regions_size);
>      dev->mem->nregions = dev->n_mem_sections;
>  
> -    if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> -        dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> -        used_shared_memslots = dev->mem->nregions;
> -    } else {
> -        used_memslots = dev->mem->nregions;
> +    if (!dev->listener_removing) {
> +        if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> +            dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> +            used_shared_memslots = dev->mem->nregions;
> +        } else {
> +            used_memslots = dev->mem->nregions;
> +        }
>      }
>  
>      for (i = 0; i < dev->n_mem_sections; i++) {
> @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
>      }
>      if (hdev->mem) {
>          /* those are only safe after successful init */
> +        hdev->listener_removing = true;
>          memory_listener_unregister(&hdev->memory_listener);
> +        hdev->listener_removing = false;
>          QLIST_REMOVE(hdev, entry);
>      }
>      migrate_del_blocker(&hdev->migration_blocker);
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index a9469d50bc..037f85b642 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -133,6 +133,7 @@ struct vhost_dev {
>      QLIST_HEAD(, vhost_iommu) iommu_list;
>      IOMMUNotifier n;
>      const VhostDevConfigOps *config_ops;
> +    bool listener_removing;
>  };
>  
>  extern const VhostOps kernel_ops;


