From: "Cédric Le Goater" <clg@redhat.com>
To: Steve Sistare <steven.sistare@oracle.com>, qemu-devel@nongnu.org
Cc: Alex Williamson <alex.williamson@redhat.com>,
Yi Liu <yi.l.liu@intel.com>, Eric Auger <eric.auger@redhat.com>,
Zhenzhong Duan <zhenzhong.duan@intel.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
Peter Xu <peterx@redhat.com>, Fabiano Rosas <farosas@suse.de>
Subject: Re: [PATCH V2 07/45] vfio/container: vfio_container_group_add
Date: Mon, 17 Feb 2025 19:02:40 +0100 [thread overview]
Message-ID: <73883ee3-e85b-40ba-8adf-80f1afe8274d@redhat.com> (raw)
In-Reply-To: <1739542467-226739-8-git-send-email-steven.sistare@oracle.com>
On 2/14/25 15:13, Steve Sistare wrote:
> Add vfio_container_group_add to de-dup some code. No functional change.
>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Thanks,
C.
> ---
> hw/vfio/container.c | 47 +++++++++++++++++++++++++----------------------
> 1 file changed, 25 insertions(+), 22 deletions(-)
>
> diff --git a/hw/vfio/container.c b/hw/vfio/container.c
> index c668d07..c5bbb03 100644
> --- a/hw/vfio/container.c
> +++ b/hw/vfio/container.c
> @@ -582,6 +582,26 @@ static bool vfio_attach_discard_disable(VFIOContainer *container,
> return !ret;
> }
>
> +static bool vfio_container_group_add(VFIOContainer *container, VFIOGroup *group,
> + Error **errp)
> +{
> + if (!vfio_attach_discard_disable(container, group, errp)) {
> + return false;
> + }
> + group->container = container;
> + QLIST_INSERT_HEAD(&container->group_list, group, container_next);
> + vfio_kvm_device_add_group(group);
> + return true;
> +}
> +
> +static void vfio_container_group_del(VFIOContainer *container, VFIOGroup *group)
> +{
> + QLIST_REMOVE(group, container_next);
> + group->container = NULL;
> + vfio_kvm_device_del_group(group);
> + vfio_ram_block_discard_disable(container, false);
> +}
> +
> static bool vfio_connect_container(VFIOGroup *group, AddressSpace *as,
> Error **errp)
> {
> @@ -592,20 +612,13 @@ static bool vfio_connect_container(VFIOGroup *group, AddressSpace *as,
> VFIOIOMMUClass *vioc = NULL;
> bool new_container = false;
> bool group_was_added = false;
> - bool discard_disabled = false;
>
> space = vfio_get_address_space(as);
>
> QLIST_FOREACH(bcontainer, &space->containers, next) {
> container = container_of(bcontainer, VFIOContainer, bcontainer);
> if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
> - if (!vfio_attach_discard_disable(container, group, errp)) {
> - return false;
> - }
> - group->container = container;
> - QLIST_INSERT_HEAD(&container->group_list, group, container_next);
> - vfio_kvm_device_add_group(group);
> - return true;
> + return vfio_container_group_add(container, group, errp);
> }
> }
>
> @@ -632,11 +645,6 @@ static bool vfio_connect_container(VFIOGroup *group, AddressSpace *as,
> goto fail;
> }
>
> - if (!vfio_attach_discard_disable(container, group, errp)) {
> - goto fail;
> - }
> - discard_disabled = true;
> -
> vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
> assert(vioc->setup);
>
> @@ -644,12 +652,11 @@ static bool vfio_connect_container(VFIOGroup *group, AddressSpace *as,
> goto fail;
> }
>
> - vfio_kvm_device_add_group(group);
> -
> vfio_address_space_insert(space, bcontainer);
>
> - group->container = container;
> - QLIST_INSERT_HEAD(&container->group_list, group, container_next);
> + if (!vfio_container_group_add(container, group, errp)) {
> + goto fail;
> + }
> group_was_added = true;
>
> bcontainer->listener = vfio_memory_listener;
> @@ -669,15 +676,11 @@ fail:
> memory_listener_unregister(&bcontainer->listener);
>
> if (group_was_added) {
> - QLIST_REMOVE(group, container_next);
> - vfio_kvm_device_del_group(group);
> + vfio_container_group_del(container, group);
> }
> if (vioc && vioc->release) {
> vioc->release(bcontainer);
> }
> - if (discard_disabled) {
> - vfio_ram_block_discard_disable(container, false);
> - }
> if (new_container) {
> vfio_cpr_unregister_container(bcontainer);
> object_unref(container);
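The refactoring above pairs an `_add` helper with a mirror `_del` helper so that the failure path in `vfio_connect_container` can unwind exactly what was set up, in reverse order. As a minimal standalone sketch of that pattern (hypothetical `container_t`/`group_t` types and field names, not QEMU's actual structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the container/group state. */
typedef struct {
    bool discard_disabled;   /* mirrors vfio_attach_discard_disable() */
    int  group_count;        /* mirrors the container group_list */
} container_t;

typedef struct {
    container_t *container;
} group_t;

/* Acquire state in order; on failure, return false having taken nothing. */
static bool group_add(container_t *c, group_t *g)
{
    c->discard_disabled = true;   /* step 1: disable RAM discard */
    g->container = c;             /* step 2: link group to container */
    c->group_count++;             /* step 3: account for the group */
    return true;
}

/* Release in reverse order of acquisition -- the exact mirror of group_add(). */
static void group_del(container_t *c, group_t *g)
{
    c->group_count--;
    g->container = NULL;
    c->discard_disabled = false;
}
```

Because teardown is the strict mirror of setup, the `fail:` label only needs one flag (`group_was_added`) instead of tracking each sub-step (such as the removed `discard_disabled` local) individually.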