From: Robin Murphy <robin.murphy@arm.com>
To: "Isaac J. Manjarres" <isaacm@codeaurora.org>,
iommu@lists.linux-foundation.org,
linux-arm-kernel@lists.infradead.org
Cc: pratikp@codeaurora.org, will@kernel.org
Subject: Re: [RFC PATCH 3/5] iommu: Add support for the unmap_pages IOMMU callback
Date: Thu, 1 Apr 2021 16:34:37 +0100 [thread overview]
Message-ID: <f57e2151-1199-46f0-21ed-e401be358857@arm.com> (raw)
In-Reply-To: <20210331030042.13348-4-isaacm@codeaurora.org>
On 2021-03-31 04:00, Isaac J. Manjarres wrote:
> The IOMMU framework currently unmaps memory one page block at a time,
> per the page block sizes that are supported by the IOMMU hardware.
> Now that IOMMU drivers can supply a callback for unmapping multiple
> pages in one call, add support in the IOMMU framework to calculate how many
> page mappings of the same size can be unmapped in one shot, and invoke the
> IOMMU driver's unmap_pages callback if it has one. Otherwise, the
> existing behavior will be used.
>
> Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
> Suggested-by: Will Deacon <will@kernel.org>
> ---
> drivers/iommu/iommu.c | 44 +++++++++++++++++++++++++++++++++++++------
> 1 file changed, 38 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index d0b0a15dba84..dc4295f6bc7f 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2356,8 +2356,8 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> }
> EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
>
> -static size_t iommu_pgsize(struct iommu_domain *domain,
> - unsigned long addr_merge, size_t size)
> +static size_t __iommu_pgsize(struct iommu_domain *domain,
> + unsigned long addr_merge, size_t size)
> {
> unsigned int pgsize_idx;
> size_t pgsize;
> @@ -2388,6 +2388,24 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
> return pgsize;
> }
>
> +static size_t iommu_pgsize(struct iommu_domain *domain,
> + unsigned long addr_merge, size_t size,
> + size_t *pgcount)
> +{
> + size_t pgsize = __iommu_pgsize(domain, addr_merge, size);
> + size_t pgs = 0;
> +
> + do {
> + pgs++;
> + size -= pgsize;
> + addr_merge += pgsize;
> + } while (size && __iommu_pgsize(domain, addr_merge, size) == pgsize);
This looks horrifically inefficient. As part of calculating the best
current page size it should then be pretty trivial to calculate
"(size & (next_pgsize_up - 1)) >> pgsize_idx" for the number of
current-size pages up to the next-better-size boundary (with
next_pgsize_up being 0 if pgsize is already the largest possible for
the relative alignment of the physical and virtual addresses). A loop
is just... yuck :(
> +
> + *pgcount = pgs;
> +
> + return pgsize;
> +}
> +
> static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> {
> @@ -2422,7 +2440,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>
> while (size) {
> - size_t pgsize = iommu_pgsize(domain, iova | paddr, size);
> + size_t pgsize = __iommu_pgsize(domain, iova | paddr, size);
>
> pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
> iova, &paddr, pgsize);
> @@ -2473,6 +2491,21 @@ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
> }
> EXPORT_SYMBOL_GPL(iommu_map_atomic);
>
> +static size_t __iommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
> + size_t size, struct iommu_iotlb_gather *iotlb_gather)
> +{
> + const struct iommu_ops *ops = domain->ops;
> + size_t pgsize, pgcount;
> +
> + if (ops->unmap_pages) {
> + pgsize = iommu_pgsize(domain, iova, size, &pgcount);
> + return ops->unmap_pages(domain, iova, pgsize, pgcount, iotlb_gather);
> + }
> +
> + pgsize = __iommu_pgsize(domain, iova, size);
> + return ops->unmap(domain, iova, pgsize, iotlb_gather);
> +}
> +
> static size_t __iommu_unmap(struct iommu_domain *domain,
> unsigned long iova, size_t size,
> struct iommu_iotlb_gather *iotlb_gather)
> @@ -2510,9 +2543,8 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
> * or we hit an area that isn't mapped.
> */
> while (unmapped < size) {
> - size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
> -
> - unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
> + unmapped_page = __iommu_unmap_pages(domain, iova, size - unmapped,
> + iotlb_gather);
I think it would make more sense to restructure the basic function
around handling a page range, then just have a little inner loop to
iterate over the individual pages if the driver doesn't provide the new
callback.
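Concretely, that shape might look something like the sketch below
(hypothetical names, with function-pointer stubs standing in for the
driver ops; not the actual iommu core code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical driver ops: an optional range-based unmap_pages()
 * alongside the existing single-page unmap(). */
struct unmap_ops {
	size_t (*unmap)(unsigned long iova, size_t pgsize);
	size_t (*unmap_pages)(unsigned long iova, size_t pgsize,
			      size_t pgcount);
};

/*
 * Unmap a run of @pgcount pages of size @pgsize: one call if the
 * driver handles ranges, otherwise a small inner loop over the
 * individual pages. Returns the number of bytes unmapped.
 */
static size_t unmap_range(const struct unmap_ops *ops, unsigned long iova,
			  size_t pgsize, size_t pgcount)
{
	size_t unmapped = 0;

	if (ops->unmap_pages)
		return ops->unmap_pages(iova, pgsize, pgcount);

	while (pgcount--) {
		size_t ret = ops->unmap(iova, pgsize);

		if (ret != pgsize)
			break;
		unmapped += ret;
		iova += pgsize;
	}
	return unmapped;
}

/* Stub driver for demonstration: counts single-page calls */
static unsigned int single_page_calls;

static size_t stub_unmap(unsigned long iova, size_t pgsize)
{
	(void)iova;
	single_page_calls++;
	return pgsize;
}

static size_t stub_unmap_pages(unsigned long iova, size_t pgsize,
			       size_t pgcount)
{
	(void)iova;
	return pgsize * pgcount;
}
```

The caller then only ever reasons about (pgsize, pgcount) runs, and the
per-page iteration is confined to the fallback path.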
Robin.
> if (!unmapped_page)
> break;
>
>