* [PATCH 0/2] iommu/io-pgtable-arm: Fix for stage-2 map/unmap
From: Mostafa Saleh @ 2024-10-24 16:25 UTC (permalink / raw)
To: linux-kernel, iommu, linux-arm-kernel
Cc: will, robin.murphy, joro, Mostafa Saleh
The first patch is a bug fix for stage-2 handling of concatenated pgds
that can break map/unmap in some cases.
The second patch adds these cases to the self tests.
While looking at this, I also noticed that concatenated page table
support is minimal and many cases (level = 0, 2, 3) are not supported;
I wrote some code to support them so I can test further, and I can
send it later in another patch.
Mostafa Saleh (2):
iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
iommu/io-pgtable-arm: Add self test for the last page in the IAS
drivers/iommu/io-pgtable-arm.c | 34 +++++++++++++++++++++++++++++++---
1 file changed, 31 insertions(+), 3 deletions(-)
--
2.47.0.105.g07ac214952-goog
* [PATCH 1/2] iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
From: Mostafa Saleh @ 2024-10-24 16:25 UTC (permalink / raw)
To: linux-kernel, iommu, linux-arm-kernel
Cc: will, robin.murphy, joro, Mostafa Saleh
When calculating the max number of entries in a table,
ARM_LPAE_LVL_IDX() understands concatenated pgds and can return an
index spanning more than one concatenated table (for example, > 512
for a 4K page size).
But then max_entries is calculated as follows:
max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
This can produce a negative value, where for:
- map: nothing bad happens (no OOB) as fortunately all comparisons are
signed, but a negative mapped value is returned.
- unmap: any child tables are leaked, as the loop over
__arm_lpae_free_pgtable() is skipped.
This bug only happens when map/unmap is requested with a page size
equal to the block size of the first level of the concatenated tables
(1GB for a 40-bit input address and 4K granule), and it can be
triggered from userspace with VFIO by choosing a VM IPA that lands in
a concatenated table > 0 and using huge pages to mmap with the
first-level size.
For example, I was able to reproduce it with the following command
with mainline linux and mainline kvmtool:
./lkvm run --irqchip gicv3 -k {$KERNEL} -p "earlycon" -d {$ROOTFS} --force-pci -c \
`nproc` --debug -m 4096@525312 --vfio-pci 0000:00:03.0 --hugetlbfs /hugepages
where 1GB huge pages are used with an IPA in the second concatenated
table (each table covers 512GB for a 40-bit SMMU and 4K granule).
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
drivers/iommu/io-pgtable-arm.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 0e67f1721a3d..3ecbc024e440 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -199,6 +199,17 @@ static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
}
+/*
+ * Using an index returned from ARM_LPAE_PGD_IDX(), which can point to
+ * concatenated PGD concatenated, get the max entries of a that table.
+ */
+static inline int arm_lpae_max_entries(int i, struct arm_lpae_io_pgtable *data)
+{
+ int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
+
+ return ptes_per_table - (i & (ptes_per_table - 1));
+}
+
static bool selftest_running = false;
static dma_addr_t __arm_lpae_dma_addr(void *pages)
@@ -390,7 +401,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
/* If we can install a leaf entry at this level, then do so */
if (size == block_size) {
- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
+ max_entries = arm_lpae_max_entries(map_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
if (!ret)
@@ -592,7 +603,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
if (size == split_sz) {
unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
- max_entries = ptes_per_table - unmap_idx_start;
+ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
}
@@ -650,7 +661,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
/* If the size matches this level, we're in the right place */
if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
+ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
num_entries = min_t(int, pgcount, max_entries);
/* Find and handle non-leaf entries */
--
2.47.0.105.g07ac214952-goog
* [PATCH 2/2] iommu/io-pgtable-arm: Add self test for the last page in the IAS
From: Mostafa Saleh @ 2024-10-24 16:25 UTC (permalink / raw)
To: linux-kernel, iommu, linux-arm-kernel
Cc: will, robin.murphy, joro, Mostafa Saleh
Add a case to the selftests that can detect bugs with concatenated
page tables: it maps the biggest supported page size at the end of
the IAS. This test fails without the previous fix.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
drivers/iommu/io-pgtable-arm.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 3ecbc024e440..71e8046671a4 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1393,6 +1393,23 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
iova += SZ_1G;
}
+ /*
+ * Map/unmap the largest supported page at the end of the IAS; this
+ * can trigger corner cases in the concatenated page tables.
+ */
+ mapped = 0;
+ size = 1UL << __fls(cfg->pgsize_bitmap);
+ iova = (1UL << cfg->ias) - size;
+ if (ops->map_pages(ops, iova, iova, size, 1,
+ IOMMU_READ | IOMMU_WRITE |
+ IOMMU_NOEXEC | IOMMU_CACHE,
+ GFP_KERNEL, &mapped))
+ return __FAIL(ops, i);
+ if (mapped != size)
+ return __FAIL(ops, i);
+ if (ops->unmap_pages(ops, iova, size, 1, NULL) != size)
+ return __FAIL(ops, i);
+
free_io_pgtable_ops(ops);
}
--
2.47.0.105.g07ac214952-goog
* Re: [PATCH 0/2] iommu/io-pgtable-arm: Fix for stage-2 map/unmap
From: Will Deacon @ 2024-10-29 16:15 UTC (permalink / raw)
To: linux-kernel, iommu, linux-arm-kernel, Mostafa Saleh
Cc: catalin.marinas, kernel-team, Will Deacon, robin.murphy, joro
On Thu, 24 Oct 2024 16:25:14 +0000, Mostafa Saleh wrote:
> The first patch is a bug fix for stage-2 handling of concatenated pgds
> that can break map/unmap in some cases.
>
> The second patch adds these cases to the self tests.
>
> While looking at this, I also noticed that concatenated page table
> support is minimal and many cases (level = 0, 2, 3) are not supported;
> I wrote some code to support them so I can test further, and I can
> send it later in another patch.
>
> [...]
Applied to will (for-joerg/arm-smmu/updates), thanks!
[1/2] iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
https://git.kernel.org/will/c/d71fa842d33c
[2/2] iommu/io-pgtable-arm: Add self test for the last page in the IAS
https://git.kernel.org/will/c/d64c8053378e
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev
* Re: [PATCH 1/2] iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
From: Daniel Mentz @ 2024-12-01 4:20 UTC (permalink / raw)
To: Mostafa Saleh
Cc: linux-kernel, iommu, linux-arm-kernel, will, robin.murphy, joro
On Thu, Oct 24, 2024 at 9:26 AM Mostafa Saleh <smostafa@google.com> wrote:
> [...]
> +/*
> + * Using an index returned from ARM_LPAE_PGD_IDX(), which can point to
> + * concatenated PGD concatenated, get the max entries of a that table.
I believe the macro that returns an index is called ARM_LPAE_LVL_IDX
not ARM_LPAE_PGD_IDX.
> + */
> [...]
* Re: [PATCH 1/2] iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
2024-12-01 4:20 ` Daniel Mentz
@ 2024-12-02 12:12 ` Mostafa Saleh
2024-12-02 19:06 ` Daniel Mentz
0 siblings, 1 reply; 7+ messages in thread
From: Mostafa Saleh @ 2024-12-02 12:12 UTC (permalink / raw)
To: Daniel Mentz
Cc: linux-kernel, iommu, linux-arm-kernel, will, robin.murphy, joro
Hi Daniel,
On Sat, Nov 30, 2024 at 08:20:10PM -0800, Daniel Mentz wrote:
> On Thu, Oct 24, 2024 at 9:26 AM Mostafa Saleh <smostafa@google.com> wrote:
> > [...]
> > +/*
> > + * Using an index returned from ARM_LPAE_PGD_IDX(), which can point to
> > + * concatenated PGD concatenated, get the max entries of a that table.
>
> I believe the macro that returns an index is called ARM_LPAE_LVL_IDX
> not ARM_LPAE_PGD_IDX.
>
Yes, the comment is not quite accurate, although ARM_LPAE_PGD_IDX()
calls ARM_LPAE_PGD_IDX() which is the problem.
Thanks,
Mostafa
* Re: [PATCH 1/2] iommu/io-pgtable-arm: Fix stage-2 map/unmap for concatenated tables
From: Daniel Mentz @ 2024-12-02 19:06 UTC (permalink / raw)
To: Mostafa Saleh
Cc: linux-kernel, iommu, linux-arm-kernel, will, robin.murphy, joro
Hi Mostafa,
On Mon, Dec 2, 2024 at 4:12 AM Mostafa Saleh <smostafa@google.com> wrote:
> [...]
> > > +/*
> > > + * Using an index returned from ARM_LPAE_PGD_IDX(), which can point to
> > > + * concatenated PGD concatenated, get the max entries of a that table.
> >
> > I believe the macro that returns an index is called ARM_LPAE_LVL_IDX
> > not ARM_LPAE_PGD_IDX.
> >
>
> Yes, the comment is not quite accurate, although ARM_LPAE_PGD_IDX()
> calls ARM_LPAE_PGD_IDX() which is the problem.
I assume you mean the problem is that ARM_LPAE_LVL_IDX() calls into
ARM_LPAE_PGD_IDX().
I find that ARM_LPAE_PGD_IDX is a misnomer as it returns a number of
bits as opposed to an index. Something like ARM_LPAE_PGD_EXTRA_BITS
would be a more appropriate name.