From: Si-Wei Liu <si-wei.liu@oracle.com>
To: "Eugenio Pérez" <eperezma@redhat.com>,
mst@redhat.com, qemu-devel@nongnu.org
Cc: Peter Xu <peterx@redhat.com>,
Dragos Tatulea <dtatulea@nvidia.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Jason Wang <jasowang@redhat.com>, Lei Yang <leiyang@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Parav Pandit <parav@mellanox.com>
Subject: Re: [PATCH 1/6] vdpa: check for iova tree initialized at net_client_start
Date: Wed, 31 Jan 2024 02:06:49 -0800
Message-ID: <92ecfd90-8d06-4669-b260-a7a3b106277e@oracle.com>
In-Reply-To: <20240111190222.496695-2-eperezma@redhat.com>
Hi Eugenio,
Maybe a patch is missing, but I saw this core dump when x-svq=on is
specified while waiting for the incoming migration on the destination host:
(gdb) bt
#0 0x00005643b24cc13c in vhost_iova_tree_map_alloc (tree=0x0,
map=map@entry=0x7ffd58c54830) at ../hw/virtio/vhost-iova-tree.c:89
#1 0x00005643b234f193 in vhost_vdpa_listener_region_add
(listener=0x5643b4403fd8, section=0x7ffd58c548d0) at
/home/opc/qemu-upstream/include/qemu/int128.h:34
#2 0x00005643b24e6a61 in address_space_update_topology_pass
(as=as@entry=0x5643b35a3840 <address_space_memory>,
old_view=old_view@entry=0x5643b442b5f0,
new_view=new_view@entry=0x5643b44a2130, adding=adding@entry=true) at
../system/memory.c:1004
#3 0x00005643b24e6e60 in address_space_set_flatview (as=0x5643b35a3840
<address_space_memory>) at ../system/memory.c:1080
#4 0x00005643b24ea750 in memory_region_transaction_commit () at
../system/memory.c:1132
#5 0x00005643b24ea750 in memory_region_transaction_commit () at
../system/memory.c:1117
#6 0x00005643b241f4c1 in pc_memory_init
(pcms=pcms@entry=0x5643b43c8400,
system_memory=system_memory@entry=0x5643b43d18b0,
rom_memory=rom_memory@entry=0x5643b449a960, pci_hole64_size=<optimized
out>) at ../hw/i386/pc.c:954
#7 0x00005643b240d088 in pc_q35_init (machine=0x5643b43c8400) at
../hw/i386/pc_q35.c:222
#8 0x00005643b21e1da8 in machine_run_board_init (machine=<optimized
out>, mem_path=<optimized out>, errp=<optimized out>,
errp@entry=0x5643b35b7958 <error_fatal>)
at ../hw/core/machine.c:1509
#9 0x00005643b237c0f6 in qmp_x_exit_preconfig () at ../system/vl.c:2613
#10 0x00005643b237c0f6 in qmp_x_exit_preconfig (errp=<optimized out>) at
../system/vl.c:2704
#11 0x00005643b237fcdd in qemu_init (errp=<optimized out>) at
../system/vl.c:3753
#12 0x00005643b237fcdd in qemu_init (argc=<optimized out>,
argv=<optimized out>) at ../system/vl.c:3753
#13 0x00005643b2158249 in main (argc=<optimized out>, argv=<optimized
out>) at ../system/main.c:47
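For reference, the destination side was started along these lines (the
vdpa device node and migration URI below are illustrative, not the exact
command I used):

  qemu-system-x86_64 -M q35 \
      -netdev vhost-vdpa,id=vdpa0,vhostdev=/dev/vhost-vdpa-0,x-svq=on \
      -device virtio-net-pci,netdev=vdpa0 \
      -incoming tcp:0:4444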
Shall we create the iova tree early during vdpa dev init for the
x-svq=on case?
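Something along these lines, e.g. in net_vhost_vdpa_init() where the
shared vdpa state is set up (the exact placement here is only a sketch):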
+ if (s->always_svq) {
+ /* iova tree is needed because of SVQ */
+ shared->iova_tree = vhost_iova_tree_new(shared->iova_range.first,
+ shared->iova_range.last);
+ }
+
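That way the tree already exists by the time
vhost_vdpa_listener_region_add() maps the guest memory for the incoming
migration, instead of only being created lazily at device start.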
Regards,
-Siwei
On 1/11/2024 11:02 AM, Eugenio Pérez wrote:
> To map the guest memory while it is migrating, we need to create the
> iova_tree as long as the destination uses x-svq=on. Add a check so an
> already existing iova_tree is not overridden.
>
> The function vhost_vdpa_net_client_stop clears it when the device is
> stopped. If the guest starts the device again, the iova tree is
> recreated by vhost_vdpa_net_data_start_first or vhost_vdpa_net_cvq_start
> if needed, so the old behavior is kept.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> net/vhost-vdpa.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 3726ee5d67..e11b390466 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -341,7 +341,9 @@ static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
>
> migration_add_notifier(&s->migration_state,
> vdpa_net_migration_state_notifier);
> - if (v->shadow_vqs_enabled) {
> +
> + /* iova_tree may be initialized by vhost_vdpa_net_load_setup */
> + if (v->shadow_vqs_enabled && !v->shared->iova_tree) {
> v->shared->iova_tree = vhost_iova_tree_new(v->shared->iova_range.first,
> v->shared->iova_range.last);
> }
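For completeness, the stop path mentioned in the commit message tears
the tree down roughly like this (a sketch of net/vhost-vdpa.c as I read
it, details may differ across versions):

  static void vhost_vdpa_net_client_stop(NetClientState *nc)
  {
      VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
      struct vhost_dev *dev = s->vhost_vdpa.dev;

      /* only the owner of the last vq pair tears down shared state */
      if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
          g_clear_pointer(&s->vhost_vdpa.shared->iova_tree,
                          vhost_iova_tree_delete);
      }
  }

which is also why the !v->shared->iova_tree check above is needed:
without it, a tree created earlier (e.g. by vhost_vdpa_net_load_setup)
would be overwritten and leaked by a second vhost_iova_tree_new() call.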