From: Will Deacon <will@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, Will Deacon <will@kernel.org>,
	iommu@lists.linux.dev, Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Petr Tesarik <petr.tesarik1@huawei-partners.com>,
	Dexuan Cui <decui@microsoft.com>,
	Nicolin Chen <nicolinc@nvidia.com>,
	Michael Kelley <mhklinux@outlook.com>
Subject: [PATCH v4 4/5] swiotlb: Fix alignment checks when both allocation and DMA masks are present
Date: Wed, 21 Feb 2024 11:35:03 +0000
Message-ID: <20240221113504.7161-5-will@kernel.org>
In-Reply-To: <20240221113504.7161-1-will@kernel.org>

Nicolin reports that swiotlb buffer allocations fail for an NVME device
behind an IOMMU using 64KiB pages. This is because we end up with a
minimum allocation alignment of 64KiB (for the IOMMU to map the buffer
safely) but a minimum DMA alignment mask corresponding to a 4KiB NVME
page (i.e. preserving the 4KiB page offset from the original allocation).
If the original address is not 4KiB-aligned, the allocation will fail
because swiotlb_search_pool_area() erroneously compares these unmasked
bits with the 64KiB-aligned candidate allocation.

Tweak swiotlb_search_pool_area() so that the DMA alignment mask is
reduced based on the required alignment of the allocation.
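
Below is a minimal stand-alone sketch of the mask arithmetic before and
after this change (the constants and example addresses are assumptions
for illustration only; this is not the kernel code itself):

  /* Illustration of the old vs. new iotlb_align_mask computation. */
  #include <stdio.h>

  #define IO_TLB_SIZE          2048UL    /* 2KiB swiotlb slot        */
  #define NVME_MIN_ALIGN_MASK  0xfffUL   /* 4KiB NVME page - 1       */
  #define IOMMU_GRANULE_MASK   0xffffUL  /* 64KiB IOMMU granule - 1  */

  int main(void)
  {
          unsigned long orig_addr = 0x2800UL;   /* bit 11 set, not 4KiB-aligned */
          unsigned long candidate = 0x70000UL;  /* 64KiB-aligned slot address   */

          /* Old: only drop the bits below the slot size -> mask is 0x800. */
          unsigned long old_mask = NVME_MIN_ALIGN_MASK & ~(IO_TLB_SIZE - 1);

          /*
           * New: widen alloc_align_mask to at least slot alignment, then
           * drop every bit it covers from the DMA mask -> mask is 0.
           */
          unsigned long alloc_align_mask = IOMMU_GRANULE_MASK | (IO_TLB_SIZE - 1);
          unsigned long new_mask = NVME_MIN_ALIGN_MASK & ~alloc_align_mask;

          printf("old: %#lx vs %#lx -> reject\n",
                 candidate & old_mask, orig_addr & old_mask);
          printf("new: %#lx vs %#lx -> accept\n",
                 candidate & new_mask, orig_addr & new_mask);
          return 0;
  }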

Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
Reported-by: Nicolin Chen <nicolinc@nvidia.com>
Link: https://lore.kernel.org/r/cover.1707851466.git.nicolinc@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
---
 kernel/dma/swiotlb.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c20324fba814..c381a7ed718f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -981,8 +981,7 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, pool->start) & boundary_mask;
 	unsigned long max_slots = get_max_slots(boundary_mask);
-	unsigned int iotlb_align_mask =
-		dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1);
+	unsigned int iotlb_align_mask = dma_get_min_align_mask(dev);
 	unsigned int nslots = nr_slots(alloc_size), stride;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int index, slots_checked, count = 0, i;
@@ -993,6 +992,14 @@ static int swiotlb_search_pool_area(struct device *dev, struct io_tlb_pool *pool
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= pool->nareas);
 
+	/*
+	 * Ensure that the allocation is at least slot-aligned and update
+	 * 'iotlb_align_mask' to ignore bits that will be preserved when
+	 * offsetting into the allocation.
+	 */
+	alloc_align_mask |= (IO_TLB_SIZE - 1);
+	iotlb_align_mask &= ~alloc_align_mask;
+
 	/*
 	 * For mappings with an alignment requirement don't bother looping to
 	 * unaligned slots once we found an aligned one.
-- 
2.44.0.rc0.258.g7320e95886-goog



Thread overview: 27+ messages
2024-02-21 11:34 [PATCH v4 0/5] Fix double allocation in swiotlb_alloc() Will Deacon
2024-02-21 11:35 ` [PATCH v4 1/5] swiotlb: Fix double-allocation of slots due to broken alignment handling Will Deacon
2024-02-21 23:35   ` Michael Kelley
2024-02-23 12:47     ` Will Deacon
2024-02-23 13:36       ` Petr Tesařík
2024-02-23 17:04       ` Michael Kelley
2024-02-27 15:38       ` Christoph Hellwig
2024-02-21 11:35 ` [PATCH v4 2/5] swiotlb: Enforce page alignment in swiotlb_alloc() Will Deacon
2024-02-21 23:36   ` Michael Kelley
2024-02-21 11:35 ` [PATCH v4 3/5] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc() Will Deacon
2024-02-21 23:36   ` Michael Kelley
2024-02-21 11:35 ` Will Deacon [this message]
2024-02-21 23:37   ` [PATCH v4 4/5] swiotlb: Fix alignment checks when both allocation and DMA masks are present Michael Kelley
2024-02-21 11:35 ` [PATCH v4 5/5] iommu/dma: Force swiotlb_max_mapping_size on an untrusted device Will Deacon
2024-02-21 23:39   ` Michael Kelley
2024-02-23 19:58     ` Nicolin Chen
2024-02-23 21:10       ` Michael Kelley
2024-02-25 21:17         ` Michael Kelley
2024-02-26 19:35   ` Robin Murphy
2024-02-26 21:11     ` Michael Kelley
2024-02-27 13:22       ` Robin Murphy
2024-02-27 14:30         ` Michael Kelley
2024-02-27 15:40   ` Christoph Hellwig
2024-02-27 15:53     ` Robin Murphy
2024-02-28 12:05       ` Will Deacon
2024-02-23 11:34 ` [PATCH v4 0/5] Fix double allocation in swiotlb_alloc() Nicolin Chen
2024-02-23 12:25   ` Will Deacon
