From: Peter Gonda <pgonda@google.com>
To: stable@vger.kernel.org
Cc: Peter Gonda <pgonda@google.com>,
David Rientjes <rientjes@google.com>,
Christoph Hellwig <hch@lst.de>
Subject: [PATCH 18/30 for 5.4] dma-direct: always align allocation size in dma_direct_alloc_pages()
Date: Fri, 25 Sep 2020 09:19:04 -0700
Message-ID: <20200925161916.204667-19-pgonda@google.com>
In-Reply-To: <20200925161916.204667-1-pgonda@google.com>
From: David Rientjes <rientjes@google.com>
commit 633d5fce78a61e8727674467944939f55b0bcfab upstream.
dma_alloc_contiguous() does size >> PAGE_SHIFT and set_memory_decrypted()
works at page granularity. It's necessary to page align the allocation
size in dma_direct_alloc_pages() for consistent behavior.
This also fixes an issue when arch_dma_prep_coherent() is called on an
unaligned allocation size for dma_alloc_need_uncached() when
CONFIG_DMA_DIRECT_REMAP is disabled but CONFIG_ARCH_HAS_DMA_SET_UNCACHED
is enabled.
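As an illustration of the granularity mismatch (a minimal userspace sketch, not part of the patch or the kernel code; it assumes 4 KiB pages and uses simplified stand-ins for the kernel's PAGE_* macros):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	size_t size = 0x1800;	/* 6 KiB, not page aligned */

	/* Mirrors dma_alloc_contiguous(): size >> PAGE_SHIFT truncates,
	 * so an unaligned request yields only one page here. */
	size_t alloc_pages = size >> PAGE_SHIFT;

	/* Mirrors a page-granular operation such as set_memory_decrypted()
	 * applied to the page-aligned size: it covers two pages. */
	size_t decrypt_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	printf("allocated pages: %zu, decrypted pages: %zu\n",
	       alloc_pages, decrypt_pages);	/* prints 1 vs. 2 */
	return 0;
}

Aligning the size once at the top of dma_direct_alloc_pages() keeps both
paths operating on the same number of pages.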
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Change-Id: I6ede6ca2864a9fb3ace42df7a0da6725ae453f1c
Signed-off-by: Peter Gonda <pgonda@google.com>
---
kernel/dma/direct.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index bb5cb5af9f7d..e72bb0dc8150 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -126,11 +126,12 @@ static inline bool dma_should_free_from_pool(struct device *dev,
struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
gfp_t gfp, unsigned long attrs)
{
- size_t alloc_size = PAGE_ALIGN(size);
int node = dev_to_node(dev);
struct page *page = NULL;
u64 phys_mask;
+ WARN_ON_ONCE(!PAGE_ALIGNED(size));
+
if (attrs & DMA_ATTR_NO_WARN)
gfp |= __GFP_NOWARN;
@@ -138,14 +139,14 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
gfp &= ~__GFP_ZERO;
gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
&phys_mask);
- page = dma_alloc_contiguous(dev, alloc_size, gfp);
+ page = dma_alloc_contiguous(dev, size, gfp);
if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
- dma_free_contiguous(dev, page, alloc_size);
+ dma_free_contiguous(dev, page, size);
page = NULL;
}
again:
if (!page)
- page = alloc_pages_node(node, gfp, get_order(alloc_size));
+ page = alloc_pages_node(node, gfp, get_order(size));
if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
dma_free_contiguous(dev, page, size);
page = NULL;
@@ -172,8 +173,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
struct page *page;
void *ret;
+ size = PAGE_ALIGN(size);
+
if (dma_should_alloc_from_pool(dev, gfp, attrs)) {
- ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
+ ret = dma_alloc_from_pool(dev, size, &page, gfp);
if (!ret)
return NULL;
goto done;
@@ -197,10 +200,10 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
dma_alloc_need_uncached(dev, attrs)) ||
(IS_ENABLED(CONFIG_DMA_REMAP) && PageHighMem(page))) {
/* remove any dirty cache lines on the kernel alias */
- arch_dma_prep_coherent(page, PAGE_ALIGN(size));
+ arch_dma_prep_coherent(page, size);
/* create a coherent mapping */
- ret = dma_common_contiguous_remap(page, PAGE_ALIGN(size),
+ ret = dma_common_contiguous_remap(page, size,
dma_pgprot(dev, PAGE_KERNEL, attrs),
__builtin_return_address(0));
if (!ret)
--
2.28.0.618.gf4bc123cb7-goog
Thread overview: 39+ messages
2020-09-25 16:18 [PATCH 00/30 for 5.4] Backport unencrypted non-blocking DMA allocations Peter Gonda
2020-09-25 16:18 ` [PATCH 01/30 for 5.4] dma-direct: remove __dma_direct_free_pages Peter Gonda
2020-09-25 16:18 ` [PATCH 02/30 for 5.4] dma-direct: remove the dma_handle argument to __dma_direct_alloc_pages Peter Gonda
2020-09-25 16:18 ` [PATCH 03/30 for 5.4] dma-direct: provide mmap and get_sgtable method overrides Peter Gonda
2020-09-25 16:18 ` [PATCH 04/30 for 5.4] dma-mapping: merge the generic remapping helpers into dma-direct Peter Gonda
2020-09-25 16:18 ` [PATCH 05/30 for 5.4] lib/genalloc.c: rename addr_in_gen_pool to gen_pool_has_addr Peter Gonda
2020-09-25 16:18 ` [PATCH 06/30 for 5.4] dma-remap: separate DMA atomic pools from direct remap code Peter Gonda
2020-09-25 16:18 ` [PATCH 07/30 for 5.4] dma-pool: add additional coherent pools to map to gfp mask Peter Gonda
2020-09-25 16:18 ` [PATCH 08/30 for 5.4] dma-pool: dynamically expanding atomic pools Peter Gonda
2020-09-25 16:18 ` [PATCH 09/30 for 5.4] dma-direct: atomic allocations must come from atomic coherent pools Peter Gonda
2020-09-25 16:18 ` [PATCH 10/30 for 5.4] dma-pool: add pool sizes to debugfs Peter Gonda
2020-09-25 16:18 ` [PATCH 11/30 for 5.4] x86/mm: unencrypted non-blocking DMA allocations use coherent pools Peter Gonda
2020-09-25 16:18 ` [PATCH 12/30 for 5.4] dma-pool: scale the default DMA coherent pool size with memory capacity Peter Gonda
2020-09-25 16:18 ` [PATCH 13/30 for 5.4] dma-pool: fix too large DMA pools on medium memory size systems Peter Gonda
2020-09-25 16:19 ` [PATCH 14/30 for 5.4] dma-pool: decouple DMA_REMAP from DMA_COHERENT_POOL Peter Gonda
2020-09-25 16:19 ` [PATCH 15/30 for 5.4] dma-direct: consolidate the error handling in dma_direct_alloc_pages Peter Gonda
2020-09-25 16:19 ` [PATCH 16/30 for 5.4] xtensa: use the generic uncached segment support Peter Gonda
2020-09-25 16:19 ` [PATCH 17/30 for 5.4] dma-direct: make uncached_kernel_address more general Peter Gonda
2020-09-25 16:19 ` [PATCH 18/30 for 5.4] dma-direct: always align allocation size in dma_direct_alloc_pages() Peter Gonda [this message]
2020-09-25 16:19 ` [PATCH 19/30 for 5.4] dma-direct: re-encrypt memory if dma_direct_alloc_pages() fails Peter Gonda
2020-09-25 16:19 ` [PATCH 20/30 for 5.4] dma-direct: check return value when encrypting or decrypting memory Peter Gonda
2020-09-25 16:19 ` [PATCH 21/30 for 5.4] dma-direct: add missing set_memory_decrypted() for coherent mapping Peter Gonda
2020-09-25 16:19 ` [PATCH 22/30 for 5.4] dma-mapping: DMA_COHERENT_POOL should select GENERIC_ALLOCATOR Peter Gonda
2020-09-25 16:19 ` [PATCH 23/30 for 5.4] dma-mapping: warn when coherent pool is depleted Peter Gonda
2020-09-25 16:19 ` [PATCH 24/30 for 5.4] dma-direct: provide function to check physical memory area validity Peter Gonda
2020-09-25 16:19 ` [PATCH 25/30 for 5.4] dma-pool: get rid of dma_in_atomic_pool() Peter Gonda
2020-09-25 16:19 ` [PATCH 26/30 for 5.4] dma-pool: introduce dma_guess_pool() Peter Gonda
2020-09-25 16:19 ` [PATCH 27/30 for 5.4] dma-pool: make sure atomic pool suits device Peter Gonda
2020-09-25 16:19 ` [PATCH 28/30 for 5.4] dma-pool: do not allocate pool memory from CMA Peter Gonda
2020-09-25 16:19 ` [PATCH 29/30 for 5.4] dma-pool: fix coherent pool allocations for IOMMU mappings Peter Gonda
2020-09-25 16:19 ` [PATCH 30/30 for 5.4] dma/direct: turn ARCH_ZONE_DMA_BITS into a variable Peter Gonda
2020-10-05 13:07 ` [PATCH 00/30 for 5.4] Backport unencrypted non-blocking DMA allocations Greg KH
2020-10-06 15:26 ` Peter Gonda
2020-10-06 18:10 ` David Rientjes
2021-01-04 13:00 ` Greg KH
2021-01-04 22:37 ` David Rientjes
2021-01-05 5:58 ` Greg KH
2021-01-05 6:38 ` David Rientjes
2021-05-12 19:06 ` Peter Gonda