Linux IOMMU Development
From: Robin Murphy <robin.murphy@arm.com>
To: "Isaac J. Manjarres" <isaacm@codeaurora.org>,
	iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org
Cc: pratikp@codeaurora.org, will@kernel.org
Subject: Re: [RFC PATCH 0/5] Optimization for unmapping iommu mapped buffers
Date: Thu, 1 Apr 2021 16:33:53 +0100	[thread overview]
Message-ID: <f4931afb-7530-ff96-44e0-25e3e86de336@arm.com> (raw)
In-Reply-To: <20210331030042.13348-1-isaacm@codeaurora.org>

On 2021-03-31 04:00, Isaac J. Manjarres wrote:
> When unmapping a buffer from an IOMMU domain, the IOMMU framework unmaps
> the buffer at a granule of the largest page size that is supported by
> the IOMMU hardware and that fits within the buffer. For every block that
> is unmapped, the IOMMU framework calls into the IOMMU driver, and then
> into the io-pgtable framework, which walks the page tables to find the
> entry that corresponds to the IOVA and then unmaps it.
> 
> This can be suboptimal in scenarios where a buffer, or a piece of a
> buffer, can be split into several contiguous page blocks of the same
> size. For example, consider an IOMMU that supports 4 KB, 2 MB, and 1 GB
> page blocks, and a 4 MB buffer being unmapped at IOVA 0. The current
> call flow results in 4 indirect calls, and 2 page table walks, to unmap
> 2 entries that are next to each other in the page tables, when both
> entries could have been unmapped in one shot by clearing both page
> table entries in the same call.

s/unmap/map/ and s/clear/set/ and those two paragraphs are still just as 
valid. I'd say if it's worth doing anything at all then it's worth doing 
more than just half the job ;)
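
To make the arithmetic above concrete, here's a standalone toy model of
how the core loop splits a range into the largest blocks that fit (not
kernel code - pick_pgsize() is just a stand-in for the kernel's real
page-size selection logic):

	#include <stdio.h>
	#include <stddef.h>

	#define SZ_4K	(4UL << 10)
	#define SZ_2M	(2UL << 20)
	#define SZ_1G	(1UL << 30)

	/* Largest supported size that is IOVA-aligned and fits. */
	static size_t pick_pgsize(unsigned long iova, size_t len)
	{
		static const size_t sizes[] = { SZ_1G, SZ_2M, SZ_4K };
		int i;

		for (i = 0; i < 3; i++)
			if (!(iova % sizes[i]) && len >= sizes[i])
				return sizes[i];
		return 0;
	}

	int main(void)
	{
		unsigned long iova = 0;
		size_t remaining = 4UL << 20;	/* 4 MB buffer at IOVA 0 */
		int calls = 0;

		while (remaining) {
			size_t pgsize = pick_pgsize(iova, remaining);

			/* This is where the core makes an indirect call
			 * into the driver, which in turn calls into the
			 * io-pgtable walker: 2 blocks x 2 layers = the 4
			 * indirect calls from the cover letter. */
			calls++;
			iova += pgsize;
			remaining -= pgsize;
		}
		printf("%d per-block calls\n", calls);	/* prints 2 */
		return 0;
	}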

> These patches add an unmap_pages callback to the io-pgtable code and
> IOMMU drivers, which unmaps an IOVA range consisting of a number of
> pages of the same IOMMU-supported page size, and allows multiple
> entries to be cleared within the same set of indirect calls. The reason
> for introducing unmap_pages as a new callback, rather than changing the
> existing unmap callback, is to give other IOMMU drivers/io-pgtable
> formats time to move to the new approach, so that the transition can be
> done piecemeal.
> 
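For reference, the shape of the new op: it takes a (pgsize, pgcount)
pair rather than a single size, so one indirect call can cover a run of
same-sized entries. Paraphrasing the patches (parameter names here may
not match them exactly):

	struct iommu_iotlb_gather;

	struct io_pgtable_ops {
		/* ...existing ->map()/->unmap() ops... */

		/* Unmap pgcount consecutive entries of pgsize each,
		 * starting at iova; returns the number of bytes
		 * actually unmapped. */
		size_t (*unmap_pages)(struct io_pgtable_ops *ops,
				      unsigned long iova,
				      size_t pgsize, size_t pgcount,
				      struct iommu_iotlb_gather *gather);
	};
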
> The same optimization is applicable for mapping buffers; however, the
> error handling in the io-pgtable layer couldn't be done cleanly, as we
> would need to invoke iommu_unmap to unmap the parts of the buffer that
> were already mapped, and then do any TLB maintenance, which seemed like
> a layering violation.

Why couldn't it just return the partial mapping and let the caller roll 
it back?
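
I.e. something like the following toy model (not kernel code;
map_block() stands in for a per-block map call that can fail partway
through a larger range):

	#include <stdio.h>
	#include <stddef.h>

	#define BLOCK	(2UL << 20)	/* pretend 2 MB blocks throughout */

	/* Stand-in for a per-block map that fails at the 6 MB mark. */
	static int map_block(unsigned long iova)
	{
		return iova >= (6UL << 20) ? -1 : 0;
	}

	/* Stand-in for the caller unmapping the partial range. */
	static void unmap_range(unsigned long iova, size_t size)
	{
		printf("rolling back %zu bytes at iova %#lx\n", size, iova);
	}

	static int map_range(unsigned long iova, size_t size)
	{
		size_t mapped = 0;

		while (mapped < size) {
			if (map_block(iova + mapped)) {
				/* Roll back the partial mapping here, in
				 * the caller, rather than having the
				 * io-pgtable layer reach back up into
				 * iommu_unmap() itself. */
				unmap_range(iova, mapped);
				return -1;
			}
			mapped += BLOCK;
		}
		return 0;
	}

	int main(void)
	{
		return map_range(0, 8UL << 20) ? 1 : 0;	/* fails at 6 MB */
	}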

Note that having a weird asymmetric interface was how things started out 
way back when - see bd13969b9524 ("iommu: Split iommu_unmaps") for context.

> Any feedback is very much appreciated.

Do you have any real-world performance figures? I proposed this as an 
approach because it was clear it could give *some* benefit for 
relatively low impact, but I'm curious to find out exactly how much, and 
in particular whether it appears to leave anything on the table vs. 
punting the entire operation down into the drivers.

Robin.

> Thanks,
> Isaac
> 
> Isaac J. Manjarres (5):
>    iommu/io-pgtable: Introduce unmap_pages() as a page table op
>    iommu: Add an unmap_pages() op for IOMMU drivers
>    iommu: Add support for the unmap_pages IOMMU callback
>    iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages()
>    iommu/arm-smmu: Implement the unmap_pages IOMMU driver callback
> 
>   drivers/iommu/arm/arm-smmu/arm-smmu.c |  19 +++++
>   drivers/iommu/io-pgtable-arm.c        | 114 +++++++++++++++++++++-----
>   drivers/iommu/iommu.c                 |  44 ++++++++--
>   include/linux/io-pgtable.h            |   4 +
>   include/linux/iommu.h                 |   4 +
>   5 files changed, 159 insertions(+), 26 deletions(-)
> 

Thread overview: 16+ messages
2021-03-31  3:00 [RFC PATCH 0/5] Optimization for unmapping iommu mapped buffers Isaac J. Manjarres
2021-03-31  3:00 ` [RFC PATCH 1/5] iommu/io-pgtable: Introduce unmap_pages() as a page table op Isaac J. Manjarres
2021-03-31  3:00 ` [RFC PATCH 2/5] iommu: Add an unmap_pages() op for IOMMU drivers Isaac J. Manjarres
2021-03-31  4:47   ` Lu Baolu
2021-03-31  5:36     ` isaacm
2021-03-31  5:39       ` Lu Baolu
2021-04-02 17:25         ` isaacm
2021-04-03  1:35           ` Lu Baolu
2021-03-31  3:00 ` [RFC PATCH 3/5] iommu: Add support for the unmap_pages IOMMU callback Isaac J. Manjarres
2021-04-01 15:34   ` Robin Murphy
2021-04-01 16:37     ` Will Deacon
2021-03-31  3:00 ` [RFC PATCH 4/5] iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages() Isaac J. Manjarres
2021-04-01 17:19   ` Robin Murphy
2021-03-31  3:00 ` [RFC PATCH 5/5] iommu/arm-smmu: Implement the unmap_pages IOMMU driver callback Isaac J. Manjarres
2021-04-01  3:28 ` [RFC PATCH 0/5] Optimization for unmapping iommu mapped buffers chenxiang (M)
2021-04-01 15:33 ` Robin Murphy [this message]
