* [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
@ 2024-11-21 6:07 yuanminghao
2025-02-25 13:42 ` Igor Mammedov
0 siblings, 1 reply; 14+ messages in thread
From: yuanminghao @ 2024-11-21 6:07 UTC (permalink / raw)
To: qemu-devel; +Cc: Michael S. Tsirkin, Stefano Garzarella
The global used_memslots or used_shared_memslots is unexpectedly updated to 0
when a vhost device is destroyed. This can occur during scenarios such as live
detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
#0 vhost_commit(listener) at hw/virtio/vhost.c:439
#1 listener_del_address_space(as, listener) at memory.c:2777
#2 memory_listener_unregister(listener) at memory.c:2823
#3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
#4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
#5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
#6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
#7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
#8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
So we skip the update of used_memslots and used_shared_memslots when destroying
vhost devices; this should work even if all vhost devices are removed.
Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
---
hw/virtio/vhost.c | 14 +++++++++-----
include/hw/virtio/vhost.h | 1 +
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 6aa72fd434..2258a12066 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
dev->mem = g_realloc(dev->mem, regions_size);
dev->mem->nregions = dev->n_mem_sections;
- if (dev->vhost_ops->vhost_backend_no_private_memslots &&
- dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
- used_shared_memslots = dev->mem->nregions;
- } else {
- used_memslots = dev->mem->nregions;
+ if (!dev->listener_removing) {
+ if (dev->vhost_ops->vhost_backend_no_private_memslots &&
+ dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
+ used_shared_memslots = dev->mem->nregions;
+ } else {
+ used_memslots = dev->mem->nregions;
+ }
}
for (i = 0; i < dev->n_mem_sections; i++) {
@@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
}
if (hdev->mem) {
/* those are only safe after successful init */
+ hdev->listener_removing = true;
memory_listener_unregister(&hdev->memory_listener);
+ hdev->listener_removing = false;
QLIST_REMOVE(hdev, entry);
}
migrate_del_blocker(&hdev->migration_blocker);
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index a9469d50bc..037f85b642 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -133,6 +133,7 @@ struct vhost_dev {
QLIST_HEAD(, vhost_iommu) iommu_list;
IOMMUNotifier n;
const VhostDevConfigOps *config_ops;
+ bool listener_removing;
};
extern const VhostOps kernel_ops;
--
2.27.0
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2024-11-21 6:07 yuanminghao
@ 2025-02-25 13:42 ` Igor Mammedov
0 siblings, 0 replies; 14+ messages in thread
From: Igor Mammedov @ 2025-02-25 13:42 UTC (permalink / raw)
To: yuanminghao; +Cc: qemu-devel, Michael S. Tsirkin, Stefano Garzarella
On Thu, 21 Nov 2024 14:07:55 +0800
yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> The global used_memslots or used_shared_memslots is updated to 0 unexpectly
it shouldn't be 0 in practice, as it comes from the number of RAM regions the VM has.
It's likely a bug somewhere else.
Please describe a way to reproduce the issue.
> when a vhost device destroyed. This can occur during scenarios such as live
> detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
> #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> #1 listener_del_address_space(as, listener) at memory.c:2777
> #2 memory_listener_unregister(listener) at memory.c:2823
> #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
>
> So we skip the update of used_memslots and used_shared_memslots when destroying
> vhost devices, and it should work event if all vhost devices are removed.
>
> Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> ---
> hw/virtio/vhost.c | 14 +++++++++-----
> include/hw/virtio/vhost.h | 1 +
> 2 files changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 6aa72fd434..2258a12066 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> dev->mem = g_realloc(dev->mem, regions_size);
> dev->mem->nregions = dev->n_mem_sections;
>
> - if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> - dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> - used_shared_memslots = dev->mem->nregions;
> - } else {
> - used_memslots = dev->mem->nregions;
> + if (!dev->listener_removing) {
> + if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> + dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> + used_shared_memslots = dev->mem->nregions;
> + } else {
> + used_memslots = dev->mem->nregions;
> + }
> }
>
> for (i = 0; i < dev->n_mem_sections; i++) {
> @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> }
> if (hdev->mem) {
> /* those are only safe after successful init */
> + hdev->listener_removing = true;
> memory_listener_unregister(&hdev->memory_listener);
> + hdev->listener_removing = false;
> QLIST_REMOVE(hdev, entry);
> }
> migrate_del_blocker(&hdev->migration_blocker);
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index a9469d50bc..037f85b642 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -133,6 +133,7 @@ struct vhost_dev {
> QLIST_HEAD(, vhost_iommu) iommu_list;
> IOMMUNotifier n;
> const VhostDevConfigOps *config_ops;
> + bool listener_removing;
> };
>
> extern const VhostOps kernel_ops;
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
@ 2025-03-03 18:02 yuanminghao
2025-04-02 16:25 ` Michael S. Tsirkin
` (2 more replies)
0 siblings, 3 replies; 14+ messages in thread
From: yuanminghao @ 2025-03-03 18:02 UTC (permalink / raw)
To: Igor Mammedov; +Cc: qemu-devel, Michael S. Tsirkin, Stefano Garzarella
> > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
>
> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> It's likely a bug somewhere else.
>
> Please describe a way to reproduce the issue.
>
Hi, Igor Mammedov,
Sorry for the late response, here are the steps to reproduce the issue:
1. Start a domain with 1 core and 1 GiB of memory, no network interface.
2. Print used_memslots with gdb:
gdb -p ${qemupid} <<< "p used_memslots"
$1 = 0
3. Attach a network interface net1:
cat > /tmp/net1.xml <<EOF
<interface type='network'>
<mac address='52:54:00:12:34:56'/>
<source network='default'/>
<model type='virtio'/>
</interface>
EOF
virsh attach-device dom /tmp/net1.xml --live
4. Print the current used_memslots with gdb:
gdb -p ${qemupid} <<< "p used_memslots"
$1 = 2
5. Attach another network interface net2:
cat > /tmp/net2.xml <<EOF
<interface type='network'>
<mac address='52:54:00:12:34:78'/>
<source network='default'/>
<model type='virtio'/>
</interface>
EOF
virsh attach-device dom /tmp/net2.xml --live
6. Print the current used_memslots with gdb:
gdb -p ${qemupid} <<< "p used_memslots"
$1 = 2
7. Detach network interface net2:
virsh detach-device dom /tmp/net2.xml --live
8. Print the current used_memslots with gdb:
gdb -p ${qemupid} <<< "p used_memslots"
$1 = 0
After detaching net2, used_memslots was reset to 0, when it was expected to remain 2.
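(If it helps to see exactly where the reset happens, a hardware watchpoint on the
global should catch it. This is an untested sketch along the same lines as the gdb
one-liners above, assuming debug symbols are available:
gdb -p ${qemupid} -batch -ex "watch used_memslots" -ex "continue" -ex "bt"
The backtrace printed on the first change should match the vhost_commit() call
chain from the commit message.)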
> > when a vhost device destroyed. This can occur during scenarios such as live
> > detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
> > #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> > #1 listener_del_address_space(as, listener) at memory.c:2777
> > #2 memory_listener_unregister(listener) at memory.c:2823
> > #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> > #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> > #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> > #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> > #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> > #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> >
> > So we skip the update of used_memslots and used_shared_memslots when destroying
> > vhost devices, and it should work event if all vhost devices are removed.
> >
> > Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> > ---
> > hw/virtio/vhost.c | 14 +++++++++-----
> > include/hw/virtio/vhost.h | 1 +
> > 2 files changed, 10 insertions(+), 5 deletions(-)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 6aa72fd434..2258a12066 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> > dev->mem = g_realloc(dev->mem, regions_size);
> > dev->mem->nregions = dev->n_mem_sections;
> >
> > - if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > - dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > - used_shared_memslots = dev->mem->nregions;
> > - } else {
> > - used_memslots = dev->mem->nregions;
> > + if (!dev->listener_removing) {
> > + if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > + dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > + used_shared_memslots = dev->mem->nregions;
> > + } else {
> > + used_memslots = dev->mem->nregions;
> > + }
> > }
> >
> > for (i = 0; i < dev->n_mem_sections; i++) {
> > @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > }
> > if (hdev->mem) {
> > /* those are only safe after successful init */
> > + hdev->listener_removing = true;
> > memory_listener_unregister(&hdev->memory_listener);
> > + hdev->listener_removing = false;
> > QLIST_REMOVE(hdev, entry);
> > }
> > migrate_del_blocker(&hdev->migration_blocker);
> > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > index a9469d50bc..037f85b642 100644
> > --- a/include/hw/virtio/vhost.h
> > +++ b/include/hw/virtio/vhost.h
> > @@ -133,6 +133,7 @@ struct vhost_dev {
> > QLIST_HEAD(, vhost_iommu) iommu_list;
> > IOMMUNotifier n;
> > const VhostDevConfigOps *config_ops;
> > + bool listener_removing;
> > };
> >
> > extern const VhostOps kernel_ops;
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-03-03 18:02 [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev yuanminghao
@ 2025-04-02 16:25 ` Michael S. Tsirkin
2025-05-09 16:39 ` Michael S. Tsirkin
2025-05-13 12:13 ` Igor Mammedov
2 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2025-04-02 16:25 UTC (permalink / raw)
To: yuanminghao; +Cc: Igor Mammedov, qemu-devel, Stefano Garzarella
On Mon, Mar 03, 2025 at 01:02:17PM -0500, yuanminghao wrote:
> > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> >
> > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > It's likely a bug somewhere else.
> >
> > Please describe a way to reproduce the issue.
> >
> Hi, Igor Mammedov,
> Sorry for the late response, here are the steps to reproduce the issue:
>
> 1.start a domain with 1Core 1GiB memory, no network interface.
> 2.print used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> 3.attach a network interface net1
> cat>/tmp/net1.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:56'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net1.xml --live
> 4.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 5.attach another network interface net2
> cat>/tmp/net2.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:78'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net2.xml --live
> 6.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 7.detach network interface net2
> virsh detach-device dom /tmp/net2.xml --live
> 8.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> After detaching net2, the used_memslots was reseted to 0, which was expected to be 2.
Igor, were you looking at this?
> > > when a vhost device destroyed. This can occur during scenarios such as live
> > > detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
> > > #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> > > #1 listener_del_address_space(as, listener) at memory.c:2777
> > > #2 memory_listener_unregister(listener) at memory.c:2823
> > > #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> > > #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> > > #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> > > #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> > > #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> > > #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> > >
> > > So we skip the update of used_memslots and used_shared_memslots when destroying
> > > vhost devices, and it should work event if all vhost devices are removed.
> > >
> > > Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> > > ---
> > > hw/virtio/vhost.c | 14 +++++++++-----
> > > include/hw/virtio/vhost.h | 1 +
> > > 2 files changed, 10 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 6aa72fd434..2258a12066 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> > > dev->mem = g_realloc(dev->mem, regions_size);
> > > dev->mem->nregions = dev->n_mem_sections;
> > >
> > > - if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > - dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > - used_shared_memslots = dev->mem->nregions;
> > > - } else {
> > > - used_memslots = dev->mem->nregions;
> > > + if (!dev->listener_removing) {
> > > + if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > + dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > + used_shared_memslots = dev->mem->nregions;
> > > + } else {
> > > + used_memslots = dev->mem->nregions;
> > > + }
> > > }
> > >
> > > for (i = 0; i < dev->n_mem_sections; i++) {
> > > @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > > }
> > > if (hdev->mem) {
> > > /* those are only safe after successful init */
> > > + hdev->listener_removing = true;
> > > memory_listener_unregister(&hdev->memory_listener);
> > > + hdev->listener_removing = false;
> > > QLIST_REMOVE(hdev, entry);
> > > }
> > > migrate_del_blocker(&hdev->migration_blocker);
> > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > > index a9469d50bc..037f85b642 100644
> > > --- a/include/hw/virtio/vhost.h
> > > +++ b/include/hw/virtio/vhost.h
> > > @@ -133,6 +133,7 @@ struct vhost_dev {
> > > QLIST_HEAD(, vhost_iommu) iommu_list;
> > > IOMMUNotifier n;
> > > const VhostDevConfigOps *config_ops;
> > > + bool listener_removing;
> > > };
> > >
> > > extern const VhostOps kernel_ops;
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-03-03 18:02 [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev yuanminghao
2025-04-02 16:25 ` Michael S. Tsirkin
@ 2025-05-09 16:39 ` Michael S. Tsirkin
2025-05-13 12:13 ` Igor Mammedov
2 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2025-05-09 16:39 UTC (permalink / raw)
To: yuanminghao; +Cc: Igor Mammedov, qemu-devel, Stefano Garzarella
On Mon, Mar 03, 2025 at 01:02:17PM -0500, yuanminghao wrote:
> > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> >
> > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > It's likely a bug somewhere else.
> >
> > Please describe a way to reproduce the issue.
> >
> Hi, Igor Mammedov,
> Sorry for the late response, here are the steps to reproduce the issue:
>
> 1.start a domain with 1Core 1GiB memory, no network interface.
> 2.print used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> 3.attach a network interface net1
> cat>/tmp/net1.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:56'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net1.xml --live
> 4.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 5.attach another network interface net2
> cat>/tmp/net2.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:78'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net2.xml --live
> 6.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 7.detach network interface net2
> virsh detach-device dom /tmp/net2.xml --live
> 8.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> After detaching net2, the used_memslots was reseted to 0, which was expected to be 2.
Igor, WDYT?
> > > when a vhost device destroyed. This can occur during scenarios such as live
> > > detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
> > > #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> > > #1 listener_del_address_space(as, listener) at memory.c:2777
> > > #2 memory_listener_unregister(listener) at memory.c:2823
> > > #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> > > #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> > > #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> > > #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> > > #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> > > #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> > >
> > > So we skip the update of used_memslots and used_shared_memslots when destroying
> > > vhost devices, and it should work event if all vhost devices are removed.
> > >
> > > Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> > > ---
> > > hw/virtio/vhost.c | 14 +++++++++-----
> > > include/hw/virtio/vhost.h | 1 +
> > > 2 files changed, 10 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 6aa72fd434..2258a12066 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> > > dev->mem = g_realloc(dev->mem, regions_size);
> > > dev->mem->nregions = dev->n_mem_sections;
> > >
> > > - if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > - dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > - used_shared_memslots = dev->mem->nregions;
> > > - } else {
> > > - used_memslots = dev->mem->nregions;
> > > + if (!dev->listener_removing) {
> > > + if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > + dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > + used_shared_memslots = dev->mem->nregions;
> > > + } else {
> > > + used_memslots = dev->mem->nregions;
> > > + }
> > > }
> > >
> > > for (i = 0; i < dev->n_mem_sections; i++) {
> > > @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > > }
> > > if (hdev->mem) {
> > > /* those are only safe after successful init */
> > > + hdev->listener_removing = true;
> > > memory_listener_unregister(&hdev->memory_listener);
> > > + hdev->listener_removing = false;
> > > QLIST_REMOVE(hdev, entry);
> > > }
> > > migrate_del_blocker(&hdev->migration_blocker);
> > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > > index a9469d50bc..037f85b642 100644
> > > --- a/include/hw/virtio/vhost.h
> > > +++ b/include/hw/virtio/vhost.h
> > > @@ -133,6 +133,7 @@ struct vhost_dev {
> > > QLIST_HEAD(, vhost_iommu) iommu_list;
> > > IOMMUNotifier n;
> > > const VhostDevConfigOps *config_ops;
> > > + bool listener_removing;
> > > };
> > >
> > > extern const VhostOps kernel_ops;
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-03-03 18:02 [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev yuanminghao
2025-04-02 16:25 ` Michael S. Tsirkin
2025-05-09 16:39 ` Michael S. Tsirkin
@ 2025-05-13 12:13 ` Igor Mammedov
2025-05-13 13:12 ` David Hildenbrand
2 siblings, 1 reply; 14+ messages in thread
From: Igor Mammedov @ 2025-05-13 12:13 UTC (permalink / raw)
To: yuanminghao
Cc: qemu-devel, Michael S. Tsirkin, Stefano Garzarella,
David Hildenbrand
On Mon, 3 Mar 2025 13:02:17 -0500
yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> >
> > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > It's likely a bug somewhere else.
I haven't touched this code for a long time, but I'd say that if we consider multiple
devices, we shouldn't do the following:
static void vhost_commit(MemoryListener *listener)
...
if (dev->vhost_ops->vhost_backend_no_private_memslots &&
dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
used_shared_memslots = dev->mem->nregions;
} else {
used_memslots = dev->mem->nregions;
}
where the value dev->mem->nregions gets is well hidden/obscured
and it is hard to trace where the tail ends => fragile.
CCing David (accidental victim), who rewrote this part the last time;
perhaps he can suggest a better way to fix the issue.
> > Please describe a way to reproduce the issue.
> >
> Hi, Igor Mammedov,
> Sorry for the late response, here are the steps to reproduce the issue:
>
> 1.start a domain with 1Core 1GiB memory, no network interface.
> 2.print used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> 3.attach a network interface net1
> cat>/tmp/net1.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:56'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net1.xml --live
> 4.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 5.attach another network interface net2
> cat>/tmp/net2.xml <<EOF
> <interface type='network'>
> <mac address='52:54:00:12:34:78'/>
> <source network='default'/>
> <model type='virtio'/>
> </interface>
> EOF
> virsh attach-device dom /tmp/net2.xml --live
> 6.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 2
> 7.detach network interface net2
> virsh detach-device dom /tmp/net2.xml --live
> 8.print current used_memslots with gdb
> gdb -p ${qemupid} <<< "p used_memslots"
> $1 = 0
> After detaching net2, the used_memslots was reseted to 0, which was expected to be 2.
>
> > > when a vhost device destroyed. This can occur during scenarios such as live
> > > detaching a vhost device or restarting a vhost-user net backend (e.g., OVS-DPDK):
> > > #0 vhost_commit(listener) at hw/virtio/vhost.c:439
> > > #1 listener_del_address_space(as, listener) at memory.c:2777
> > > #2 memory_listener_unregister(listener) at memory.c:2823
> > > #3 vhost_dev_cleanup(hdev) at hw/virtio/vhost.c:1406
> > > #4 vhost_net_cleanup(net) at hw/net/vhost_net.c:402
> > > #5 vhost_user_start(be, ncs, queues) at net/vhost-user.c:113
> > > #6 net_vhost_user_event(opaque, event) at net/vhost-user.c:281
> > > #7 tcp_chr_new_client(chr, sioc) at chardev/char-socket.c:924
> > > #8 tcp_chr_accept(listener, cioc, opaque) at chardev/char-socket.c:961
> > >
> > > So we skip the update of used_memslots and used_shared_memslots when destroying
> > > vhost devices, and it should work event if all vhost devices are removed.
> > >
> > > Signed-off-by: yuanminghao <yuanmh12@chinatelecom.cn>
> > > ---
> > > hw/virtio/vhost.c | 14 +++++++++-----
> > > include/hw/virtio/vhost.h | 1 +
> > > 2 files changed, 10 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > > index 6aa72fd434..2258a12066 100644
> > > --- a/hw/virtio/vhost.c
> > > +++ b/hw/virtio/vhost.c
> > > @@ -666,11 +666,13 @@ static void vhost_commit(MemoryListener *listener)
> > > dev->mem = g_realloc(dev->mem, regions_size);
> > > dev->mem->nregions = dev->n_mem_sections;
> > >
> > > - if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > - dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > - used_shared_memslots = dev->mem->nregions;
> > > - } else {
> > > - used_memslots = dev->mem->nregions;
> > > + if (!dev->listener_removing) {
> > > + if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > + dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > + used_shared_memslots = dev->mem->nregions;
> > > + } else {
> > > + used_memslots = dev->mem->nregions;
> > > + }
> > > }
> > >
> > > for (i = 0; i < dev->n_mem_sections; i++) {
> > > @@ -1668,7 +1670,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> > > }
> > > if (hdev->mem) {
> > > /* those are only safe after successful init */
> > > + hdev->listener_removing = true;
> > > memory_listener_unregister(&hdev->memory_listener);
> > > + hdev->listener_removing = false;
> > > QLIST_REMOVE(hdev, entry);
> > > }
> > > migrate_del_blocker(&hdev->migration_blocker);
> > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > > index a9469d50bc..037f85b642 100644
> > > --- a/include/hw/virtio/vhost.h
> > > +++ b/include/hw/virtio/vhost.h
> > > @@ -133,6 +133,7 @@ struct vhost_dev {
> > > QLIST_HEAD(, vhost_iommu) iommu_list;
> > > IOMMUNotifier n;
> > > const VhostDevConfigOps *config_ops;
> > > + bool listener_removing;
> > > };
> > >
> > > extern const VhostOps kernel_ops;
>
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-13 12:13 ` Igor Mammedov
@ 2025-05-13 13:12 ` David Hildenbrand
2025-05-14 9:12 ` Igor Mammedov
0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2025-05-13 13:12 UTC (permalink / raw)
To: Igor Mammedov, yuanminghao
Cc: qemu-devel, Michael S. Tsirkin, Stefano Garzarella
On 13.05.25 14:13, Igor Mammedov wrote:
> On Mon, 3 Mar 2025 13:02:17 -0500
> yuanminghao <yuanmh12@chinatelecom.cn> wrote:
>
>>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectly
>>>
>>> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
>>> It's likely a bug somewhere else.
>
> I haven't touched this code for a long time, but I'd say if we consider multiple
> devices, we shouldn't do following:
>
> static void vhost_commit(MemoryListener *listener)
> ...
> if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> used_shared_memslots = dev->mem->nregions;
> } else {
> used_memslots = dev->mem->nregions;
> }
>
> where value dev->mem->nregions gets is well hidden/obscured
> and hard to trace where tail ends => fragile.
>
> CCing David (accidental victim) who rewrote this part the last time,
> perhaps he can suggest a better way to fix the issue.
I think the original idea is that all devices (of one type: private vs.
non-private memslots) have the same number of memslots.
This avoids having to loop over all devices to figure out the number of
memslots.
... but in vhost_get_free_memslots() we already loop over all devices.
The check in vhost_dev_init() needs to be taken care of.
So maybe we can get rid of both variables completely?
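Very rough sketch of what I have in mind (completely untested; the device-list name
vhost_devices and the helper name are placeholders for illustration, not a proposal
for the final API): instead of caching the count in the two globals, recompute it on
demand from the registered devices, e.g. for the limit check in vhost_dev_init():
static unsigned int vhost_count_used_memslots(bool shared)
{
    struct vhost_dev *hdev;
    unsigned int used = 0;
    QLIST_FOREACH(hdev, &vhost_devices, entry) {
        bool hdev_shared =
            hdev->vhost_ops->vhost_backend_no_private_memslots &&
            hdev->vhost_ops->vhost_backend_no_private_memslots(hdev);
        if (hdev_shared == shared) {
            /* devices of one kind should all see the same memory map;
             * taking the maximum is just defensive */
            used = MAX(used, hdev->mem->nregions);
        }
    }
    return used;
}
With something along those lines, vhost_commit() would not have to touch any global
at all, and a device that is being torn down (nregions dropping to 0) simply stops
contributing to the count.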
--
Cheers,
David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-13 13:12 ` David Hildenbrand
@ 2025-05-14 9:12 ` Igor Mammedov
2025-05-14 9:26 ` David Hildenbrand
0 siblings, 1 reply; 14+ messages in thread
From: Igor Mammedov @ 2025-05-14 9:12 UTC (permalink / raw)
To: David Hildenbrand
Cc: yuanminghao, qemu-devel, Michael S. Tsirkin, Stefano Garzarella
On Tue, 13 May 2025 15:12:11 +0200
David Hildenbrand <david@redhat.com> wrote:
> On 13.05.25 14:13, Igor Mammedov wrote:
> > On Mon, 3 Mar 2025 13:02:17 -0500
> > yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> >
> >>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> >>>
> >>> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> >>> It's likely a bug somewhere else.
> >
> > I haven't touched this code for a long time, but I'd say if we consider multiple
> > devices, we shouldn't do following:
> >
> > static void vhost_commit(MemoryListener *listener)
> > ...
> > if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > used_shared_memslots = dev->mem->nregions;
> > } else {
> > used_memslots = dev->mem->nregions;
> > }
> >
> > where value dev->mem->nregions gets is well hidden/obscured
> > and hard to trace where tail ends => fragile.
> >
> > CCing David (accidental victim) who rewrote this part the last time,
> > perhaps he can suggest a better way to fix the issue.
>
> I think the original idea is that all devices (of on type: private vs.
> non-private memslots) have the same number of memslots.
>
> This avoids having to loop over all devices to figure out the number of
> memslots.
>
> ... but in vhost_get_free_memslots() we already loop over all devices.
>
> The check in vhost_dev_init() needs to be taken care of.
>
> So maybe we can get rid of both variables completely?
Looks reasonable to me (instead of the current state, which is juggling with
dev->mem->nregions that can become 0 on unplug, as was reported).
David,
do you have time to fix it?
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-14 9:12 ` Igor Mammedov
@ 2025-05-14 9:26 ` David Hildenbrand
2025-05-30 11:18 ` Michael S. Tsirkin
0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2025-05-14 9:26 UTC (permalink / raw)
To: Igor Mammedov
Cc: yuanminghao, qemu-devel, Michael S. Tsirkin, Stefano Garzarella
On 14.05.25 11:12, Igor Mammedov wrote:
> On Tue, 13 May 2025 15:12:11 +0200
> David Hildenbrand <david@redhat.com> wrote:
>
>> On 13.05.25 14:13, Igor Mammedov wrote:
>>> On Mon, 3 Mar 2025 13:02:17 -0500
>>> yuanminghao <yuanmh12@chinatelecom.cn> wrote:
>>>
>>>>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectly
>>>>>
>>>>> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
>>>>> It's likely a bug somewhere else.
>>>
>>> I haven't touched this code for a long time, but I'd say if we consider multiple
>>> devices, we shouldn't do following:
>>>
>>> static void vhost_commit(MemoryListener *listener)
>>> ...
>>> if (dev->vhost_ops->vhost_backend_no_private_memslots &&
>>> dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
>>> used_shared_memslots = dev->mem->nregions;
>>> } else {
>>> used_memslots = dev->mem->nregions;
>>> }
>>>
>>> where value dev->mem->nregions gets is well hidden/obscured
>>> and hard to trace where tail ends => fragile.
>>>
>>> CCing David (accidental victim) who rewrote this part the last time,
>>> perhaps he can suggest a better way to fix the issue.
>>
>> I think the original idea is that all devices (of on type: private vs.
>> non-private memslots) have the same number of memslots.
>>
>> This avoids having to loop over all devices to figure out the number of
>> memslots.
>>
>> ... but in vhost_get_free_memslots() we already loop over all devices.
>>
>> The check in vhost_dev_init() needs to be taken care of.
>>
>> So maybe we can get rid of both variables completely?
>
> looks reasonable to me, (instead of current state which is
> juggling with dev->mem->nregions that can become 0 on unplug
> as it was reported).
>
> David,
> do you have time to fix it?
I can try, but I was wondering/hoping whether Yuanminghao could take a
look at that? I can provide guidance if necessary.
--
Cheers,
David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-14 9:26 ` David Hildenbrand
@ 2025-05-30 11:18 ` Michael S. Tsirkin
2025-05-30 11:28 ` David Hildenbrand
0 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2025-05-30 11:18 UTC (permalink / raw)
To: David Hildenbrand
Cc: Igor Mammedov, yuanminghao, qemu-devel, Stefano Garzarella
On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
> On 14.05.25 11:12, Igor Mammedov wrote:
> > On Tue, 13 May 2025 15:12:11 +0200
> > David Hildenbrand <david@redhat.com> wrote:
> >
> > > On 13.05.25 14:13, Igor Mammedov wrote:
> > > > On Mon, 3 Mar 2025 13:02:17 -0500
> > > > yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> > > > > > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> > > > > >
> > > > > > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > > > > > It's likely a bug somewhere else.
> > > >
> > > > I haven't touched this code for a long time, but I'd say if we consider multiple
> > > > devices, we shouldn't do following:
> > > >
> > > > static void vhost_commit(MemoryListener *listener)
> > > > ...
> > > > if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > > dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > > used_shared_memslots = dev->mem->nregions;
> > > > } else {
> > > > used_memslots = dev->mem->nregions;
> > > > }
> > > >
> > > > where value dev->mem->nregions gets is well hidden/obscured
> > > > and hard to trace where tail ends => fragile.
> > > >
> > > > CCing David (accidental victim) who rewrote this part the last time,
> > > > perhaps he can suggest a better way to fix the issue.
> > >
> > > I think the original idea is that all devices (of on type: private vs.
> > > non-private memslots) have the same number of memslots.
> > >
> > > This avoids having to loop over all devices to figure out the number of
> > > memslots.
> > >
> > > ... but in vhost_get_free_memslots() we already loop over all devices.
> > >
> > > The check in vhost_dev_init() needs to be taken care of.
> > >
> > > So maybe we can get rid of both variables completely?
> >
> > looks reasonable to me, (instead of current state which is
> > juggling with dev->mem->nregions that can become 0 on unplug
> > as it was reported).
> >
> > David,
> > do you have time to fix it?
>
> I can try, but I was wondering/hoping whether Yuanminghao could take a look
> at that? I can provide guidance if necessary.
Guys?
> --
> Cheers,
>
> David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-30 11:18 ` Michael S. Tsirkin
@ 2025-05-30 11:28 ` David Hildenbrand
2025-05-30 11:36 ` Michael S. Tsirkin
0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2025-05-30 11:28 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Igor Mammedov, yuanminghao, qemu-devel, Stefano Garzarella
On 30.05.25 13:18, Michael S. Tsirkin wrote:
> On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
>> On 14.05.25 11:12, Igor Mammedov wrote:
>>> On Tue, 13 May 2025 15:12:11 +0200
>>> David Hildenbrand <david@redhat.com> wrote:
>>>
>>>> On 13.05.25 14:13, Igor Mammedov wrote:
>>>>> On Mon, 3 Mar 2025 13:02:17 -0500
>>>>> yuanminghao <yuanmh12@chinatelecom.cn> wrote:
>>>>>>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectly
>>>>>>>
>>>>>>> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
>>>>>>> It's likely a bug somewhere else.
>>>>>
>>>>> I haven't touched this code for a long time, but I'd say if we consider multiple
>>>>> devices, we shouldn't do following:
>>>>>
>>>>> static void vhost_commit(MemoryListener *listener)
>>>>> ...
>>>>> if (dev->vhost_ops->vhost_backend_no_private_memslots &&
>>>>> dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
>>>>> used_shared_memslots = dev->mem->nregions;
>>>>> } else {
>>>>> used_memslots = dev->mem->nregions;
>>>>> }
>>>>>
>>>>> where value dev->mem->nregions gets is well hidden/obscured
>>>>> and hard to trace where tail ends => fragile.
>>>>>
>>>>> CCing David (accidental victim) who rewrote this part the last time,
>>>>> perhaps he can suggest a better way to fix the issue.
>>>>
>>>> I think the original idea is that all devices (of on type: private vs.
>>>> non-private memslots) have the same number of memslots.
>>>>
>>>> This avoids having to loop over all devices to figure out the number of
>>>> memslots.
>>>>
>>>> ... but in vhost_get_free_memslots() we already loop over all devices.
>>>>
>>>> The check in vhost_dev_init() needs to be taken care of.
>>>>
>>>> So maybe we can get rid of both variables completely?
>>>
>>> looks reasonable to me, (instead of current state which is
>>> juggling with dev->mem->nregions that can become 0 on unplug
>>> as it was reported).
>>>
>>> David,
>>> do you have time to fix it?
>>
>> I can try, but I was wondering/hoping whether Yuanminghao could take a look
>> at that? I can provide guidance if necessary.
>
>
> Guys?
Is the original author not interested in fixing the problem?
--
Cheers,
David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-30 11:28 ` David Hildenbrand
@ 2025-05-30 11:36 ` Michael S. Tsirkin
2025-06-03 9:15 ` David Hildenbrand
0 siblings, 1 reply; 14+ messages in thread
From: Michael S. Tsirkin @ 2025-05-30 11:36 UTC (permalink / raw)
To: David Hildenbrand
Cc: Igor Mammedov, yuanminghao, qemu-devel, Stefano Garzarella
On Fri, May 30, 2025 at 01:28:58PM +0200, David Hildenbrand wrote:
> On 30.05.25 13:18, Michael S. Tsirkin wrote:
> > On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
> > > On 14.05.25 11:12, Igor Mammedov wrote:
> > > > On Tue, 13 May 2025 15:12:11 +0200
> > > > David Hildenbrand <david@redhat.com> wrote:
> > > >
> > > > > On 13.05.25 14:13, Igor Mammedov wrote:
> > > > > > On Mon, 3 Mar 2025 13:02:17 -0500
> > > > > > yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> > > > > > > > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> > > > > > > >
> > > > > > > > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > > > > > > > It's likely a bug somewhere else.
> > > > > >
> > > > > > I haven't touched this code for a long time, but I'd say if we consider multiple
> > > > > > devices, we shouldn't do following:
> > > > > >
> > > > > > static void vhost_commit(MemoryListener *listener)
> > > > > > ...
> > > > > > if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > > > > dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > > > > used_shared_memslots = dev->mem->nregions;
> > > > > > } else {
> > > > > > used_memslots = dev->mem->nregions;
> > > > > > }
> > > > > >
> > > > > > where value dev->mem->nregions gets is well hidden/obscured
> > > > > > and hard to trace where tail ends => fragile.
> > > > > >
> > > > > > CCing David (accidental victim) who rewrote this part the last time,
> > > > > > perhaps he can suggest a better way to fix the issue.
> > > > >
> > > > > I think the original idea is that all devices (of on type: private vs.
> > > > > non-private memslots) have the same number of memslots.
> > > > >
> > > > > This avoids having to loop over all devices to figure out the number of
> > > > > memslots.
> > > > >
> > > > > ... but in vhost_get_free_memslots() we already loop over all devices.
> > > > >
> > > > > The check in vhost_dev_init() needs to be taken care of.
> > > > >
> > > > > So maybe we can get rid of both variables completely?
> > > >
> > > > looks reasonable to me, (instead of current state which is
> > > > juggling with dev->mem->nregions that can become 0 on unplug
> > > > as it was reported).
> > > >
> > > > David,
> > > > do you have time to fix it?
> > >
> > > I can try, but I was wondering/hoping whether Yuanminghao could take a look
> > > at that? I can provide guidance if necessary.
> >
> >
> > Guys?
>
> Is the original author not interested in fixing the problem?
Given the silence I'd guess no.
> --
> Cheers,
>
> David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-05-30 11:36 ` Michael S. Tsirkin
@ 2025-06-03 9:15 ` David Hildenbrand
2025-06-10 16:14 ` Michael S. Tsirkin
0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2025-06-03 9:15 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Igor Mammedov, yuanminghao, qemu-devel, Stefano Garzarella
On 30.05.25 13:36, Michael S. Tsirkin wrote:
> On Fri, May 30, 2025 at 01:28:58PM +0200, David Hildenbrand wrote:
>> On 30.05.25 13:18, Michael S. Tsirkin wrote:
>>> On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
>>>> On 14.05.25 11:12, Igor Mammedov wrote:
>>>>> On Tue, 13 May 2025 15:12:11 +0200
>>>>> David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>>> On 13.05.25 14:13, Igor Mammedov wrote:
>>>>>>> On Mon, 3 Mar 2025 13:02:17 -0500
>>>>>>> yuanminghao <yuanmh12@chinatelecom.cn> wrote:
>>>>>>>>>> Global used_memslots or used_shared_memslots is updated to 0 unexpectly
>>>>>>>>>
>>>>>>>>> it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
>>>>>>>>> It's likely a bug somewhere else.
>>>>>>>
>>>>>>> I haven't touched this code for a long time, but I'd say if we consider multiple
>>>>>>> devices, we shouldn't do following:
>>>>>>>
>>>>>>> static void vhost_commit(MemoryListener *listener)
>>>>>>> ...
>>>>>>> if (dev->vhost_ops->vhost_backend_no_private_memslots &&
>>>>>>> dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
>>>>>>> used_shared_memslots = dev->mem->nregions;
>>>>>>> } else {
>>>>>>> used_memslots = dev->mem->nregions;
>>>>>>> }
>>>>>>>
>>>>>>> where value dev->mem->nregions gets is well hidden/obscured
>>>>>>> and hard to trace where tail ends => fragile.
>>>>>>>
>>>>>>> CCing David (accidental victim) who rewrote this part the last time,
>>>>>>> perhaps he can suggest a better way to fix the issue.
>>>>>>
>>>>>> I think the original idea is that all devices (of on type: private vs.
>>>>>> non-private memslots) have the same number of memslots.
>>>>>>
>>>>>> This avoids having to loop over all devices to figure out the number of
>>>>>> memslots.
>>>>>>
>>>>>> ... but in vhost_get_free_memslots() we already loop over all devices.
>>>>>>
>>>>>> The check in vhost_dev_init() needs to be taken care of.
>>>>>>
>>>>>> So maybe we can get rid of both variables completely?
>>>>>
>>>>> looks reasonable to me, (instead of current state which is
>>>>> juggling with dev->mem->nregions that can become 0 on unplug
>>>>> as it was reported).
>>>>>
>>>>> David,
>>>>> do you have time to fix it?
>>>>
>>>> I can try, but I was wondering/hoping whether Yuanminghao could take a look
>>>> at that? I can provide guidance if necessary.
>>>
>>>
>>> Guys?
>>
>> Is the original author not interested in fixing the problem?
>
> Given the silence I'd guess no.
SMH, why then even send patches in the first place ...
Will try finding time to look into this ...
--
Cheers,
David / dhildenb
* Re: [PATCH 1/1] vhost: do not reset used_memslots when destroying vhost dev
2025-06-03 9:15 ` David Hildenbrand
@ 2025-06-10 16:14 ` Michael S. Tsirkin
0 siblings, 0 replies; 14+ messages in thread
From: Michael S. Tsirkin @ 2025-06-10 16:14 UTC (permalink / raw)
To: David Hildenbrand
Cc: Igor Mammedov, yuanminghao, qemu-devel, Stefano Garzarella
On Tue, Jun 03, 2025 at 11:15:18AM +0200, David Hildenbrand wrote:
> On 30.05.25 13:36, Michael S. Tsirkin wrote:
> > On Fri, May 30, 2025 at 01:28:58PM +0200, David Hildenbrand wrote:
> > > On 30.05.25 13:18, Michael S. Tsirkin wrote:
> > > > On Wed, May 14, 2025 at 11:26:05AM +0200, David Hildenbrand wrote:
> > > > > On 14.05.25 11:12, Igor Mammedov wrote:
> > > > > > On Tue, 13 May 2025 15:12:11 +0200
> > > > > > David Hildenbrand <david@redhat.com> wrote:
> > > > > >
> > > > > > > On 13.05.25 14:13, Igor Mammedov wrote:
> > > > > > > > On Mon, 3 Mar 2025 13:02:17 -0500
> > > > > > > > yuanminghao <yuanmh12@chinatelecom.cn> wrote:
> > > > > > > > > > > Global used_memslots or used_shared_memslots is updated to 0 unexpectly
> > > > > > > > > >
> > > > > > > > > > it shouldn't be 0 in practice, as it comes from number of RAM regions VM has.
> > > > > > > > > > It's likely a bug somewhere else.
> > > > > > > >
> > > > > > > > I haven't touched this code for a long time, but I'd say if we consider multiple
> > > > > > > > devices, we shouldn't do following:
> > > > > > > >
> > > > > > > > static void vhost_commit(MemoryListener *listener)
> > > > > > > > ...
> > > > > > > > if (dev->vhost_ops->vhost_backend_no_private_memslots &&
> > > > > > > > dev->vhost_ops->vhost_backend_no_private_memslots(dev)) {
> > > > > > > > used_shared_memslots = dev->mem->nregions;
> > > > > > > > } else {
> > > > > > > > used_memslots = dev->mem->nregions;
> > > > > > > > }
> > > > > > > >
> > > > > > > > where value dev->mem->nregions gets is well hidden/obscured
> > > > > > > > and hard to trace where tail ends => fragile.
> > > > > > > >
> > > > > > > > CCing David (accidental victim) who rewrote this part the last time,
> > > > > > > > perhaps he can suggest a better way to fix the issue.
> > > > > > >
> > > > > > > I think the original idea is that all devices (of on type: private vs.
> > > > > > > non-private memslots) have the same number of memslots.
> > > > > > >
> > > > > > > This avoids having to loop over all devices to figure out the number of
> > > > > > > memslots.
> > > > > > >
> > > > > > > ... but in vhost_get_free_memslots() we already loop over all devices.
> > > > > > >
> > > > > > > The check in vhost_dev_init() needs to be taken care of.
> > > > > > >
> > > > > > > So maybe we can get rid of both variables completely?
> > > > > >
> > > > > > looks reasonable to me, (instead of current state which is
> > > > > > juggling with dev->mem->nregions that can become 0 on unplug
> > > > > > as it was reported).
> > > > > >
> > > > > > David,
> > > > > > do you have time to fix it?
> > > > >
> > > > > I can try, but I was wondering/hoping whether Yuanminghao could take a look
> > > > > at that? I can provide guidance if necessary.
> > > >
> > > >
> > > > Guys?
> > >
> > > Is the original author not interested in fixing the problem?
> >
> > Given the silence I'd guess no.
>
> SMH, why then even send patches in the first place ...
Drive-by contributions are not uncommon.
> Will try finding time to look into this ...
>
> --
> Cheers,
>
> David / dhildenb