From: Joao Martins <joao.m.martins@oracle.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: iommu@lists.linux.dev, Kevin Tian <kevin.tian@intel.com>,
	Shameerali Kolothum Thodi  <shameerali.kolothum.thodi@huawei.com>,
	Lu Baolu <baolu.lu@linux.intel.com>, Yi Liu <yi.l.liu@intel.com>,
	Yi Y Sun <yi.y.sun@intel.com>, Nicolin Chen <nicolinc@nvidia.com>,
	Joerg Roedel <joro@8bytes.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Zhenzhong Duan <zhenzhong.duan@intel.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	kvm@vger.kernel.org
Subject: Re: [PATCH v4 11/18] iommu/amd: Access/Dirty bit support in IOPTEs
Date: Thu, 19 Oct 2023 01:17:32 +0100	[thread overview]
Message-ID: <2a8b0362-7185-4bca-ba06-e6a4f8de940b@oracle.com> (raw)
In-Reply-To: <20231018231111.GP3952@nvidia.com>

On 19/10/2023 00:11, Jason Gunthorpe wrote:
> On Wed, Oct 18, 2023 at 09:27:08PM +0100, Joao Martins wrote:
>> +static int iommu_v1_read_and_clear_dirty(struct io_pgtable_ops *ops,
>> +					 unsigned long iova, size_t size,
>> +					 unsigned long flags,
>> +					 struct iommu_dirty_bitmap *dirty)
>> +{
>> +	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
>> +	unsigned long end = iova + size - 1;
>> +
>> +	do {
>> +		unsigned long pgsize = 0;
>> +		u64 *ptep, pte;
>> +
>> +		ptep = fetch_pte(pgtable, iova, &pgsize);
>> +		if (ptep)
>> +			pte = READ_ONCE(*ptep);
> 
> It is fine for now, but this is so slow for something that is such a
> fast path. We are optimizing away a TLB invalidation but leaving
> this???
> 

The more obvious reason is that I'm still working towards the 'faster' page-table
walker. The map/unmap code needs to do similar lookups, so I thought of reusing
the same functions as map/unmap initially, and improving it afterwards or when
introducing the splitting.

> It is a radix tree, you walk trees by retaining your position at each
> level as you go (eg in a function per-level call chain or something)
> then ++ is cheap. Re-searching the entire tree every time is madness.

I'm aware -- I have an improved page-table walker for AMD[0] (not yet for Intel;
still in the works), but in my experiments with huge IOVA ranges, the time to
walk the page tables ends up making not that much difference compared to the
size of the range it needs to walk. What really matters is that once we move up
a level (PMD), walking huge IOVA ranges becomes super-cheap (and invisible with
PUDs), which is what makes the dynamic-splitting/page-demotion important.

Furthermore, this is not yet easy for other people to test and see numbers for
themselves, so increasingly I need to work on something like an iommufd_log_perf
tool under tools/testing, similar to gup_perf, to make all the performance work
obvious and 'standardized'.

------->8--------
[0] [hasn't been rebased into this version I sent]

commit 431de7e855ee8c1622663f8d81600f62fed0ed4a
Author: Joao Martins <joao.m.martins@oracle.com>
Date:   Sat Oct 7 18:17:33 2023 -0400

    iommu/amd: Improve dirty read io-pgtable walker

    The fetch_pte() based lookup is a little inefficient for level-1
    page sizes.

    It walks all the levels to return a PTE, disregarding the potential
    batching that could be done at the previous level. Implement a
    page-table walker, modeled on the freeing functions, which recursively
    walks into the next level.

    For each level it iterates over the non-default page sizes that the
    different mappings return, given that each level-7 PTE may account
    for the next power-of-2 per added PTE.

    Signed-off-by: Joao Martins <joao.m.martins@oracle.com>

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 29f5ab0ba14f..babb5fb5fd51 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -552,39 +552,63 @@ static bool pte_test_and_clear_dirty(u64 *ptep, unsigned long size)
        return dirty;
 }

+static bool pte_is_large_or_base(u64 *ptep)
+{
+       return (PM_PTE_LEVEL(*ptep) == 0 || PM_PTE_LEVEL(*ptep) == 7);
+}
+
+static int walk_iova_range(u64 *pt, unsigned long iova, size_t size,
+                          int level, unsigned long flags,
+                          struct iommu_dirty_bitmap *dirty)
+{
+       unsigned long addr, isize, end = iova + size;
+       unsigned long page_size;
+       int i, next_level;
+       u64 *p, *ptep;
+
+       next_level = level - 1;
+       isize = page_size = PTE_LEVEL_PAGE_SIZE(next_level);
+
+       for (addr = iova; addr < end; addr += isize) {
+               i = PM_LEVEL_INDEX(next_level, addr);
+               ptep = &pt[i];
+
+               /* PTE present? */
+               if (!IOMMU_PTE_PRESENT(*ptep))
+                       continue;
+
+               if (level > 1 && !pte_is_large_or_base(ptep)) {
+                       p = IOMMU_PTE_PAGE(*ptep);
+                       isize = min(end - addr, page_size);
+                       walk_iova_range(p, addr, isize, next_level,
+                                       flags, dirty);
+               } else {
+                       isize = PM_PTE_LEVEL(*ptep) == 7 ?
+                                       PTE_PAGE_SIZE(*ptep) : page_size;
+
+                       /*
+                        * Mark the whole IOVA range as dirty even if only one
+                        * of the replicated PTEs were marked dirty.
+                        */
+                       if (((flags & IOMMU_DIRTY_NO_CLEAR) &&
+                                       pte_test_dirty(ptep, isize)) ||
+                           pte_test_and_clear_dirty(ptep, isize))
+                               iommu_dirty_bitmap_record(dirty, addr, isize);
+               }
+       }
+
+       return 0;
+}
+
 static int iommu_v1_read_and_clear_dirty(struct io_pgtable_ops *ops,
                                         unsigned long iova, size_t size,
                                         unsigned long flags,
                                         struct iommu_dirty_bitmap *dirty)
 {
        struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
-       unsigned long end = iova + size - 1;
-
-       do {
-               unsigned long pgsize = 0;
-               u64 *ptep, pte;
-
-               ptep = fetch_pte(pgtable, iova, &pgsize);
-               if (ptep)
-                       pte = READ_ONCE(*ptep);
-               if (!ptep || !IOMMU_PTE_PRESENT(pte)) {
-                       pgsize = pgsize ?: PTE_LEVEL_PAGE_SIZE(0);
-                       iova += pgsize;
-                       continue;
-               }
-
-               /*
-                * Mark the whole IOVA range as dirty even if only one of
-                * the replicated PTEs were marked dirty.
-                */
-               if (((flags & IOMMU_DIRTY_NO_CLEAR) &&
-                               pte_test_dirty(ptep, pgsize)) ||
-                   pte_test_and_clear_dirty(ptep, pgsize))
-                       iommu_dirty_bitmap_record(dirty, iova, pgsize);
-               iova += pgsize;
-       } while (iova < end);

-       return 0;
+       return walk_iova_range(pgtable->root, iova, size,
+                              pgtable->mode, flags, dirty);
 }

 /*
