From: Alex Williamson <alex.williamson-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: [PATCH] iommu: Split iommu_unmaps
Date: Fri, 24 May 2013 11:14:45 -0600
Message-ID: <20130524171401.14099.78694.stgit@bling.home>

iommu_map splits requests into page sizes that the iommu driver
reports it can handle.  The iommu_unmap path does not do the same.
This can cause problems not only for callers that might expect the
same behavior as the map path, but also in the failure path of
iommu_map itself: should it fail partway through, it has to unwind a
set of already-mapped pages whose total size the iommu driver may not
be able to handle directly.  amd_iommu, for example, will BUG_ON if
asked to unmap a non-power-of-2 size.

Fix this by extracting and generalizing the sizing code from the
iommu_map path and using it for both map and unmap.
Signed-off-by: Alex Williamson <alex.williamson-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
drivers/iommu/iommu.c | 63 +++++++++++++++++++++++++++----------------------
1 file changed, 35 insertions(+), 28 deletions(-)
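
For reference, the page-size selection performed by the new iommu_pgsize()
helper can be reproduced in user space.  The stand-alone sketch below is not
part of the patch: compiler builtins stand in for the kernel's __fls()/__ffs(),
the pgsize_bitmap value (4K | 2M | 1G) is an assumed example rather than any
particular driver's, and the pick_pgsize() name is illustrative only.

/*
 * Stand-alone sketch (not part of the patch) of the selection done by
 * iommu_pgsize(); pgsize_bitmap here is an assumed example.
 */
#include <stdio.h>
#include <stddef.h>

static size_t pick_pgsize(unsigned long pgsize_bitmap,
			  unsigned long addr_merge, size_t size)
{
	/* largest page order that still fits into 'size' */
	unsigned int pgsize_idx = (8 * sizeof(size) - 1) - __builtin_clzl(size);
	unsigned long pgsize;

	/* alignment of the address(es) further limits the usable order */
	if (addr_merge) {
		unsigned int align_idx = __builtin_ctzl(addr_merge);

		if (align_idx < pgsize_idx)
			pgsize_idx = align_idx;
	}

	/* mask of all page sizes up to and including that order ... */
	pgsize = (1UL << (pgsize_idx + 1)) - 1;
	/* ... reduced to what the hardware supports ... */
	pgsize &= pgsize_bitmap;
	/* ... and the biggest remaining size wins */
	return 1UL << ((8 * sizeof(pgsize) - 1) - __builtin_clzl(pgsize));
}

int main(void)
{
	unsigned long bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

	/* 0x201000 bytes at a 2M-aligned iova: first chunk is a 2M page */
	printf("%#zx\n", pick_pgsize(bitmap, 0x200000, 0x201000));
	/* the 0x1000 bytes left over (now at iova 0x400000) use a 4K page */
	printf("%#zx\n", pick_pgsize(bitmap, 0x400000, 0x1000));
	return 0;
}

Run as-is this prints 0x200000 and 0x1000, i.e. the 0x201000-byte region is
covered by one 2M page followed by one 4K page -- the kind of split that the
patch below teaches iommu_unmap() to perform itself.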
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d8f98b1..4b0b56b 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -754,6 +754,38 @@ int iommu_domain_has_cap(struct iommu_domain *domain,
}
EXPORT_SYMBOL_GPL(iommu_domain_has_cap);
+static size_t iommu_pgsize(struct iommu_domain *domain,
+			   unsigned long addr_merge, size_t size)
+{
+	unsigned int pgsize_idx;
+	size_t pgsize;
+
+	/* Max page size that still fits into 'size' */
+	pgsize_idx = __fls(size);
+
+	/* need to consider alignment requirements ? */
+	if (likely(addr_merge)) {
+		/* Max page size allowed by address */
+		unsigned int align_pgsize_idx = __ffs(addr_merge);
+		pgsize_idx = min(pgsize_idx, align_pgsize_idx);
+	}
+
+	/* build a mask of acceptable page sizes */
+	pgsize = (1UL << (pgsize_idx + 1)) - 1;
+
+	/* throw away page sizes not supported by the hardware */
+	pgsize &= domain->ops->pgsize_bitmap;
+
+	/* make sure we're still sane */
+	BUG_ON(!pgsize);
+
+	/* pick the biggest page */
+	pgsize_idx = __fls(pgsize);
+	pgsize = 1UL << pgsize_idx;
+
+	return pgsize;
+}
+
int iommu_map(struct iommu_domain *domain, unsigned long iova,
phys_addr_t paddr, size_t size, int prot)
{
@@ -785,32 +817,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
(unsigned long)paddr, (unsigned long)size);
while (size) {
-		unsigned long pgsize, addr_merge = iova | paddr;
-		unsigned int pgsize_idx;
-
-		/* Max page size that still fits into 'size' */
-		pgsize_idx = __fls(size);
-
-		/* need to consider alignment requirements ? */
-		if (likely(addr_merge)) {
-			/* Max page size allowed by both iova and paddr */
-			unsigned int align_pgsize_idx = __ffs(addr_merge);
-
-			pgsize_idx = min(pgsize_idx, align_pgsize_idx);
-		}
-
-		/* build a mask of acceptable page sizes */
-		pgsize = (1UL << (pgsize_idx + 1)) - 1;
-
-		/* throw away page sizes not supported by the hardware */
-		pgsize &= domain->ops->pgsize_bitmap;
-
-		/* make sure we're still sane */
-		BUG_ON(!pgsize);
-
-		/* pick the biggest page */
-		pgsize_idx = __fls(pgsize);
-		pgsize = 1UL << pgsize_idx;
+		size_t pgsize = iommu_pgsize(domain, iova | paddr, size);
pr_debug("mapping: iova 0x%lx pa 0x%lx pgsize %lu\n", iova,
(unsigned long)paddr, pgsize);
@@ -863,9 +870,9 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
* or we hit an area that isn't mapped.
*/
while (unmapped < size) {
-		size_t left = size - unmapped;
+		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
 
-		unmapped_page = domain->ops->unmap(domain, iova, left);
+		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
if (!unmapped_page)
break;
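
The iommu_map() failure path mentioned in the changelog is the in-kernel
caller most likely to hand iommu_unmap() an arbitrary, non-power-of-2 size.
The fragment below is a simplified sketch of that unwind shape, not the
kernel function verbatim; the map_and_unwind() wrapper and its locals are
purely illustrative.

/*
 * Simplified, illustrative sketch (not a quote of iommu_map()): if
 * mapping a later chunk fails, everything mapped so far --
 * 'orig_size - size' bytes, which need not be a power of two -- is
 * handed back to iommu_unmap() in a single call.  With this patch,
 * iommu_unmap() splits that region into sizes the driver advertises
 * in pgsize_bitmap instead of passing it down unmodified.
 */
static int map_and_unwind(struct iommu_domain *domain, unsigned long iova,
			  phys_addr_t paddr, size_t size, int prot)
{
	unsigned long orig_iova = iova;
	size_t orig_size = size;
	int ret = 0;

	while (size) {
		size_t pgsize = iommu_pgsize(domain, iova | paddr, size);

		ret = domain->ops->map(domain, iova, paddr, pgsize, prot);
		if (ret)
			break;

		iova += pgsize;
		paddr += pgsize;
		size -= pgsize;
	}

	/* unroll the partial mapping; this length need not be a power of two */
	if (ret)
		iommu_unmap(domain, orig_iova, orig_size - size);

	return ret;
}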
Thread overview (5+ messages):
  2013-05-24 17:14 Alex Williamson [this message]
  2013-06-05 16:39 ` [PATCH] iommu: Split iommu_unmaps Alex Williamson
  2013-11-07 16:37   ` David Woodhouse
  2013-11-11 23:09     ` Alex Williamson
  2013-11-20 14:29       ` David Woodhouse