From: "Michael S. Tsirkin" <mst@redhat.com>
To: yuanminghao <yuanmh12@chinatelecom.cn>
Cc: Igor Mammedov <imammedo@redhat.com>,
qemu-devel@nongnu.org, Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
Date: Wed, 2 Apr 2025 12:25:41 -0400
Message-ID: <20250402122528-mutt-send-email-mst@kernel.org>
In-Reply-To: <1741024937-37164-1-git-send-email-yuanmh12@chinatelecom.cn>
On Mon, Mar 03, 2025 at 01:02:17PM -0500, yuanminghao wrote:
> > > Global used_memslots or used_shared_memslots is unexpectedly updated to 0
> >
> > it shouldn't be 0 in practice, as it comes from the number of RAM regions
> > the VM has. It's likely a bug somewhere else.
> >
> > Please describe a way to reproduce the issue.
> >
> Hi, Igor Mammedov,
> Sorry for the late response. Here are the steps to reproduce the issue:
>
> 1. Start a domain with 1 core and 1 GiB of memory, and no network interface.
>
> 2. Print used_memslots with gdb:
>
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
>
> 3. Attach a network interface net1:
>
> cat > /tmp/net1.xml <<EOF
> <interface type='network'>
>   <mac address='52:54:00:12:34:56'/>
>   <source network='default'/>
>   <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net1.xml --live
>
> 4. Print used_memslots with gdb again:
>
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
>
> 5. Attach another network interface net2:
>
> cat > /tmp/net2.xml <<EOF
> <interface type='network'>
>   <mac address='52:54:00:12:34:78'/>
>   <source network='default'/>
>   <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net2.xml --live
>
> 6. Print used_memslots with gdb again:
>
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
>
> 7. Detach network interface net2:
>
> virsh detach-device dom /tmp/net2.xml --live
>
> 8. Print used_memslots with gdb again:
>
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
>
> After detaching net2, used_memslots was reset to 0, while it was expected
> to remain 2.
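>
> To catch the exact writer, a hardware watchpoint also works (a sketch,
> assuming gdb can attach and debug symbols are present):
>
> gdb -p ${qemupid} -ex "watch used_memslots" -ex "continue"
>
> gdb then stops at the store that zeroes the counter, and "bt" shows how
> vhost_commit() is reached during teardown.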
Igor, were you looking at this?
> > > when a vhost device is destroyed. This can occur in scenarios such as
> > > live-detaching a vhost device or restarting a vhost-user net backend
> > > (e.g., OVS-DPDK):
> > > #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> > > #1 listener_del_address_space(as, listener) at memory.c:2777
> > > #2 memory_listener_unregister(listener) at memory.c:2823
> > > #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> > > #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> > > #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> > > #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> > > #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> > > #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> > >
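> > > In short, the final commit for the dying device runs after its section
> > > list has already been drained (a simplified sketch pieced together from
> > > the trace above, not the verbatim QEMU code):
> > >
> > >     static void vhost_commit(MemoryListener *listener)
> > >     {
> > >         struct vhost_dev *dev = container_of(listener, struct vhost_dev,
> > >                                              memory_listener);
> > >         /* during memory_listener_unregister() no region_add has run,
> > >          * so n_mem_sections is 0 for the device being destroyed */
> > >         dev->mem->nregions = dev->n_mem_sections;   /* == 0 here */
> > >         used_memslots = dev->mem->nregions;         /* global reset to 0 */
> > >     }
> > >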
> > > So we skip the update of used_memslots and used_shared_memslots when
> > > destroying a vhost device; this should work even if all vhost devices
> > > are removed.
> > >
> > > Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> > > ---
> > >  hw/virtio/vhost.c         | 14 +++++++++-----
> > >  include/hw/virtio/vhost.h |  1 +
> > >  2 files changed, 10 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 6aa72fd434..2258a12066 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> > >      dev->mem = g_realloc(dev->mem, regions_size);
> > >      dev->mem->nregions = dev->n_mem_sections;
> > >
> > > -    if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > -        dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > -        used_shared_memslots = dev->mem->nregions;
> > > -    } else {
> > > -        used_memslots = dev->mem->nregions;
> > > +    if (!dev->listener_removing) {
> > > +        if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > +            dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > +            used_shared_memslots = dev->mem->nregions;
> > > +        } else {
> > > +            used_memslots = dev->mem->nregions;
> > > +        }
> > >      }
> > >
> > >      for (i = 0; i < dev->n_mem_sections; i++) {
> > > @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > >      }
> > >      if (hdev->mem) {
> > >          /* those are only safe after successful init */
> > > +        hdev->listener_removing = true;
> > >          memory_listener_unregister(&hdev->memory_listener);
> > > +        hdev->listener_removing = false;
> > >          QLIST_REMOVE(hdev, entry);
> > >      }
> > >      migrate_del_blocker(&hdev->migration_blocker);
> > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > > index a9469d50bc..037f85b642 100644
> > > --- a/include/hw/virtio/vhost.h
> > > +++ b/include/hw/virtio/vhost.h
> > > @@ -133,6 +133,7 @@ struct vhost_dev {
> > >      QLIST_HEAD(, vhost_iommu) iommu_list;
> > >      IOMMUNotifier n;
> > >      const VhostDevConfigOps *config_ops;
> > > +    bool listener_removing;
> > >  };
> > >
> > >  extern const VhostOps kernel_ops;
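To illustrate the intended effect of the guard, here is a self-contained toy
model of the fix (not QEMU code; names and structure heavily simplified):

    #include <stdio.h>
    #include <stdbool.h>

    static unsigned used_memslots;

    struct dev {
        unsigned n_mem_sections;
        bool listener_removing;
    };

    /* mirrors the patched vhost_commit(): skip the global update while
     * the device's listener is being torn down */
    static void commit(struct dev *d)
    {
        if (!d->listener_removing) {
            used_memslots = d->n_mem_sections;
        }
    }

    /* mirrors vhost_dev_cleanup(): the unregister path fires one last
     * commit after the section list has been drained */
    static void cleanup(struct dev *d)
    {
        d->listener_removing = true;
        d->n_mem_sections = 0;
        commit(d);
        d->listener_removing = false;
    }

    int main(void)
    {
        struct dev net2 = { .n_mem_sections = 2, .listener_removing = false };

        commit(&net2);   /* hotplug: used_memslots becomes 2 */
        cleanup(&net2);  /* hot-unplug: used_memslots stays 2 with the fix */
        printf("used_memslots = %u\n", used_memslots);   /* prints 2 */
        return 0;
    }

Without the listener_removing check in commit(), the same sequence would
print 0, which is exactly the symptom from the reproduction steps above.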