From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
To: Joao Martins <joao.m.martins@oracle.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: "iommu@lists.linux.dev" <iommu@lists.linux.dev>,
Kevin Tian <kevin.tian@intel.com>,
Lu Baolu <baolu.lu@linux.intel.com>, Yi Liu <yi.l.liu@intel.com>,
Yi Y Sun <yi.y.sun@intel.com>, Nicolin Chen <nicolinc@nvidia.com>,
Joerg Roedel <joro@8bytes.org>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Zhenzhong Duan <zhenzhong.duan@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
Shameer Kolothum <shamiali2008@gmail.com>,
"Wangzhou (B)" <wangzhou1@hisilicon.com>
Subject: RE: [PATCH v6 00/18] IOMMUFD Dirty Tracking
Date: Wed, 30 Oct 2024 18:41:25 +0000
Message-ID: <63d5a152dc1143e69d062dd854d4dd7b@huawei.com>
In-Reply-To: <59d76989-3d7f-449d-8339-2edd31270b08@oracle.com>
> -----Original Message-----
> From: Joao Martins <joao.m.martins@oracle.com>
> Sent: Wednesday, October 30, 2024 4:57 PM
> To: Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com>; Jason Gunthorpe
> <jgg@nvidia.com>; Zhangfei Gao <zhangfei.gao@linaro.org>
> Cc: iommu@lists.linux.dev; Kevin Tian <kevin.tian@intel.com>; Lu Baolu
> <baolu.lu@linux.intel.com>; Yi Liu <yi.l.liu@intel.com>; Yi Y Sun
> <yi.y.sun@intel.com>; Nicolin Chen <nicolinc@nvidia.com>; Joerg Roedel
> <joro@8bytes.org>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Will Deacon <will@kernel.org>; Robin
> Murphy <robin.murphy@arm.com>; Zhenzhong Duan
> <zhenzhong.duan@intel.com>; Alex Williamson
> <alex.williamson@redhat.com>; kvm@vger.kernel.org; Shameer Kolothum
> <shamiali2008@gmail.com>; Wangzhou (B) <wangzhou1@hisilicon.com>
> Subject: Re: [PATCH v6 00/18] IOMMUFD Dirty Tracking
>
> On 30/10/2024 15:57, Shameerali Kolothum Thodi wrote:
> >> On 30/10/2024 15:36, Jason Gunthorpe wrote:
> >>> On Wed, Oct 30, 2024 at 11:15:02PM +0800, Zhangfei Gao wrote:
> >>>> hw/vfio/migration.c
> >>>>     if (vfio_viommu_preset(vbasedev)) {
> >>>>         error_setg(&err, "%s: Migration is currently not supported "
> >>>>                    "with vIOMMU enabled", vbasedev->name);
> >>>>         goto add_blocker;
> >>>>     }
> >>>
> >>> The viommu driver itself does not support live migration, it would
> >>> need to preserve all the guest configuration and bring it all back. It
> >>> doesn't know how to do that yet.
> >>
> >> It's more of vfio code, not quite related to the actual hw vIOMMU.
> >>
> >> There's some vfio migration + vIOMMU support patches I have to follow
> >> up (v5)
> >
> > Are you referring to this series here?
> > https://lore.kernel.org/qemu-devel/d5d30f58-31f0-1103-6956-377de34a790c@redhat.com/T/
> >
> > Is that enabling migration only if the guest doesn't do any DMA translation?
> >
> No, it does it when the guest is using the sw vIOMMU too. To be clear: this
> has nothing to do with nested IOMMU or whether the guest is doing (emulated)
> dirty tracking.
Ok. Thanks for explaining. So just to clarify, this works for Intel VT-d with
"caching-mode=on", i.e. no real two-stage setup is required, unlike on ARM
SMMUv3.
> When the guest doesn't do DMA translation, it is this patch:
>
> https://lore.kernel.org/qemu-devel/20230908120521.50903-1-joao.m.martins@oracle.com/
Ok.
>
> >> but unexpected setbacks unrelated to work delay some of my plans for
> >> qemu 9.2.
> >> I expect to resume in a few weeks. I can point you to a branch until I
> >> submit (given soft-freeze is coming)
> >
> > Also, I think we need a mechanism for page fault handling in case the guest
> > handles stage 1, plus dirty tracking for stage 1 as well.
> >
>
> I have emulation for x86 iommus to do dirty tracking, but that is unrelated
> to L0 live migration -- it's more for testing in the absence of recent
> hardware. Even emulated page fault handling doesn't affect this unless you
> have to remap/map new IOVA, which would also be covered in this series I
> think.
>
> Unless you are talking about physical IOPF that qemu may terminate, though
> we don't have such support in Qemu atm.
Yeah, I was referring to ARM SMMUv3 cases, where we need nested SMMUv3
support for vfio-pci assignment. Another use case we have is supporting SVA
in the guest, with hardware capable of physical IOPF.
I will take a look at your series above and see what else is required to
support ARM. Please CC me if you plan to respin or have an updated branch.
Thanks for your efforts.
Shameer
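[Editorial note] For reference, the uAPI this series adds (per the patch
titles in the thread) is driven in two steps: enable tracking with
IOMMU_HWPT_SET_DIRTY_TRACKING, then periodically read per-page dirty bits
back with IOMMU_HWPT_GET_DIRTY_BITMAP. Below is a minimal sketch of the
bitmap-consuming side only; the helper name (scan_dirty) and the assumed
bit layout (LSB-first within little-endian u64 words, one bit per page)
are illustrative, not code taken from the series:

```c
#include <stdint.h>

/*
 * After IOMMU_HWPT_GET_DIRTY_BITMAP fills 'bitmap' (assumed here: one
 * bit per 'page_size' page starting at 'iova', LSB-first in u64 words),
 * a VMM walks it to find the IOVA ranges it must re-send during
 * migration.  scan_dirty() is an illustrative helper: it records up to
 * 'out_cap' dirty page IOVAs into 'out' and returns the total number of
 * dirty pages found.
 */
static unsigned long scan_dirty(const uint64_t *bitmap, uint64_t iova,
                                uint64_t page_size, unsigned long npages,
                                uint64_t *out, unsigned long out_cap)
{
    unsigned long i, ndirty = 0;

    for (i = 0; i < npages; i++) {
        if (bitmap[i / 64] & (1ULL << (i % 64))) {
            if (ndirty < out_cap)
                out[ndirty] = iova + (uint64_t)i * page_size;
            ndirty++;
        }
    }
    return ndirty;
}
```

The ranges reported by such a walk are what the blocked vIOMMU path in
hw/vfio/migration.c would eventually have to reconcile with guest IOVA
mappings.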
Thread overview: 32+ messages
2023-10-24 13:50 [PATCH v6 00/18] IOMMUFD Dirty Tracking Joao Martins
2023-10-24 13:50 ` [PATCH v6 01/18] vfio/iova_bitmap: Export more API symbols Joao Martins
2023-10-24 13:50 ` [PATCH v6 02/18] vfio: Move iova_bitmap into iommufd Joao Martins
2023-10-24 13:50 ` [PATCH v6 03/18] iommufd/iova_bitmap: Move symbols to IOMMUFD namespace Joao Martins
2023-10-24 13:50 ` [PATCH v6 04/18] iommu: Add iommu_domain ops for dirty tracking Joao Martins
2023-10-24 13:50 ` [PATCH v6 05/18] iommufd: Add a flag to enforce dirty tracking on attach Joao Martins
2023-10-24 13:50 ` [PATCH v6 06/18] iommufd: Add IOMMU_HWPT_SET_DIRTY_TRACKING Joao Martins
2023-10-24 13:50 ` [PATCH v6 07/18] iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP Joao Martins
2023-10-24 13:50 ` [PATCH v6 08/18] iommufd: Add capabilities to IOMMU_GET_HW_INFO Joao Martins
2023-10-24 13:51 ` [PATCH v6 09/18] iommufd: Add a flag to skip clearing of IOPTE dirty Joao Martins
2023-10-24 13:51 ` [PATCH v6 10/18] iommu/amd: Add domain_alloc_user based domain allocation Joao Martins
2023-10-24 13:51 ` [PATCH v6 11/18] iommu/amd: Access/Dirty bit support in IOPTEs Joao Martins
2023-10-24 13:51 ` [PATCH v6 12/18] iommu/vt-d: Access/Dirty bit support for SS domains Joao Martins
2023-10-24 13:51 ` [PATCH v6 13/18] iommufd/selftest: Expand mock_domain with dev_flags Joao Martins
2023-10-24 13:51 ` [PATCH v6 14/18] iommufd/selftest: Test IOMMU_HWPT_ALLOC_DIRTY_TRACKING Joao Martins
2023-10-24 13:51 ` [PATCH v6 15/18] iommufd/selftest: Test IOMMU_HWPT_SET_DIRTY_TRACKING Joao Martins
2023-10-24 13:51 ` [PATCH v6 16/18] iommufd/selftest: Test IOMMU_HWPT_GET_DIRTY_BITMAP Joao Martins
2023-10-24 13:51 ` [PATCH v6 17/18] iommufd/selftest: Test out_capabilities in IOMMU_GET_HW_INFO Joao Martins
2023-10-24 13:51 ` [PATCH v6 18/18] iommufd/selftest: Test IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag Joao Martins
2023-10-24 15:55 ` [PATCH v6 00/18] IOMMUFD Dirty Tracking Jason Gunthorpe
2023-10-24 18:11 ` Nicolin Chen
2024-10-29 2:35 ` Zhangfei Gao
2024-10-29 2:52 ` Yi Liu
2024-10-29 8:05 ` Zhangfei Gao
2024-10-29 9:34 ` Yi Liu
2024-10-30 15:15 ` Zhangfei Gao
2024-10-30 15:36 ` Jason Gunthorpe
2024-10-30 15:47 ` Joao Martins
2024-10-30 15:57 ` Shameerali Kolothum Thodi
2024-10-30 16:56 ` Joao Martins
2024-10-30 18:41 ` Shameerali Kolothum Thodi [this message]
2024-10-30 20:37 ` Joao Martins