From: Albert Esteve <aesteve@redhat.com>
Date: Thu, 7 Dec 2023 10:18:05 +0100
Subject: Re: [PATCH 2/3] hw/virtio: cleanup shared resources
References: <20231107093744.388099-1-aesteve@redhat.com> <20231107093744.388099-3-aesteve@redhat.com>
To: Marc-André Lureau <marcandre.lureau@gmail.com>
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin", kraxel@redhat.com, stefanha@gmail.com

On Mon, Dec 4, 2023 at 9:00 AM Marc-André Lureau <marcandre.lureau@gmail.com> wrote:

> Hi
>
> On Tue, Nov 7, 2023 at 1:37 PM Albert Esteve <aesteve@redhat.com> wrote:
> >
> > Ensure that we clean up all virtio shared
> > resources when the vhost device is cleaned
> > up (after a hot unplug, or a crash).
> >
> > To track all owned UUIDs of a device, add
> > a GSList to the vhost_dev struct. This way
> > we can avoid traversing the full table
> > for every cleanup, whether or not the device
> > actually owns any shared resource.
> >
> > Signed-off-by: Albert Esteve <aesteve@redhat.com>
> > ---
> >  hw/virtio/vhost-user.c    | 2 ++
> >  hw/virtio/vhost.c         | 4 ++++
> >  include/hw/virtio/vhost.h | 6 ++++++
> >  3 files changed, 12 insertions(+)
> >
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index 5fdff0241f..04848d1fa0 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -1598,6 +1598,7 @@ vhost_user_backend_handle_shared_object_add(struct vhost_dev *dev,
> >      QemuUUID uuid;
> >
> >      memcpy(uuid.data, object->uuid, sizeof(object->uuid));
> > +    dev->shared_uuids = g_slist_append(dev->shared_uuids, &uuid);
>
> This will point to the stack variable.
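
For illustration, here is a minimal standalone GLib sketch of the problem
and of how a heap copy would avoid it. This is not QEMU code: Uuid and
track_uuid are stand-ins for QemuUUID and the handler above, and it assumes
g_memdup2(), available since GLib 2.68.

#include <glib.h>
#include <string.h>

typedef struct { unsigned char data[16]; } Uuid;  /* stand-in for QemuUUID */

static GSList *shared_uuids;

static void track_uuid(const unsigned char *src)
{
    Uuid uuid;  /* stack variable, gone once this function returns */

    memcpy(uuid.data, src, sizeof(uuid.data));
    /* Broken: the list element would point into a dead stack frame:
     *     shared_uuids = g_slist_append(shared_uuids, &uuid);
     * Fix: append a heap copy instead; it is released below by
     * g_slist_free_full(). */
    shared_uuids = g_slist_append(shared_uuids,
                                  g_memdup2(&uuid, sizeof(uuid)));
}

int main(void)
{
    unsigned char raw[16] = { 0xde, 0xad };

    track_uuid(raw);
    g_slist_free_full(g_steal_pointer(&shared_uuids), g_free);
    return 0;
}

It compiles with: gcc sketch.c $(pkg-config --cflags --libs glib-2.0).
Note that a heap copy alone would not repair the remove path quoted below:
g_slist_remove_all() matches elements by pointer, not by UUID contents, so
that handler would need g_slist_find_custom() with a comparison function
instead.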
> >      return virtio_add_vhost_device(&uuid, dev);
> >  }
> >
> > @@ -1623,6 +1624,7 @@ vhost_user_backend_handle_shared_object_remove(struct vhost_dev *dev,
> >      }
> >
> >      memcpy(uuid.data, object->uuid, sizeof(object->uuid));
> > +    dev->shared_uuids = g_slist_remove_all(dev->shared_uuids, &uuid);
> >      return virtio_remove_resource(&uuid);
> >  }
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 9c9ae7109e..3aff94664b 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -16,6 +16,7 @@
> >  #include "qemu/osdep.h"
> >  #include "qapi/error.h"
> >  #include "hw/virtio/vhost.h"
> > +#include "hw/virtio/virtio-dmabuf.h"
> >  #include "qemu/atomic.h"
> >  #include "qemu/range.h"
> >  #include "qemu/error-report.h"
> > @@ -1599,6 +1600,9 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
> >      migrate_del_blocker(&hdev->migration_blocker);
> >      g_free(hdev->mem);
> >      g_free(hdev->mem_sections);
> > +    /* free virtio shared objects */
> > +    g_slist_foreach(hdev->shared_uuids, (GFunc)virtio_remove_resource, NULL);
> > +    g_slist_free_full(g_steal_pointer(&hdev->shared_uuids), g_free);
>
> (and will crash here)
>
> Imho, you should just traverse the hashtable, instead of introducing
> another list.

OK, that was probably premature optimization on my part. Cleanup should not
happen often enough, or involve enough resources, to justify a separate
list. I will just traverse the table instead (see the sketch at the end of
this message). Thanks!

> >      if (hdev->vhost_ops) {
> >          hdev->vhost_ops->vhost_backend_cleanup(hdev);
> >      }
> > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> > index 5e8183f64a..376bc8446d 100644
> > --- a/include/hw/virtio/vhost.h
> > +++ b/include/hw/virtio/vhost.h
> > @@ -118,6 +118,12 @@ struct vhost_dev {
> >       */
> >      uint64_t protocol_features;
> >
> > +    /**
> > +     * @shared_uuids: contains the UUIDs of all the exported
> > +     * virtio objects owned by the vhost device.
> > +     */
> > +    GSList *shared_uuids;
> > +
> >      uint64_t max_queues;
> >      uint64_t backend_cap;
> >      /* @started: is the vhost device started? */
> > --
> > 2.41.0
>
> --
> Marc-André Lureau
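
For reference, the traversal agreed on above could look roughly like the
following standalone GLib sketch. The names (owned_by_dev,
remove_resources_owned_by) and the assumption that the shared-resource
table maps UUID keys to the owning device pointer are illustrative, not
the actual virtio-dmabuf API.

#include <glib.h>

/* Predicate for g_hash_table_foreach_remove(): drop entries whose value
 * is the device being cleaned up. */
static gboolean owned_by_dev(gpointer key, gpointer value, gpointer user_data)
{
    return value == user_data;
}

static void remove_resources_owned_by(GHashTable *resources, gpointer dev)
{
    /* g_hash_table_foreach_remove() deletes matching entries safely
     * during iteration and runs the destroy notifiers registered with
     * g_hash_table_new_full(). */
    guint n = g_hash_table_foreach_remove(resources, owned_by_dev, dev);

    g_debug("removed %u shared resources", n);
}

int main(void)
{
    /* String keys stand in for QemuUUIDs to keep the sketch short. */
    GHashTable *resources = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                  g_free, NULL);
    int dev_a, dev_b;  /* stand-ins for vhost_dev pointers */

    g_hash_table_insert(resources, g_strdup("uuid-1"), &dev_a);
    g_hash_table_insert(resources, g_strdup("uuid-2"), &dev_b);
    g_hash_table_insert(resources, g_strdup("uuid-3"), &dev_a);

    remove_resources_owned_by(resources, &dev_a);
    g_assert_cmpuint(g_hash_table_size(resources), ==, 1);

    g_hash_table_destroy(resources);
    return 0;
}

The upside over the per-device list is that no extra bookkeeping can get
out of sync with the table; the cost is a full scan at cleanup, which, as
noted above, should be rare.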