* [PATCH] dma-direct: clean up the logic in __dma_direct_alloc_pages()
@ 2025-07-10 8:38 ` Petr Tesarik
2025-08-11 11:29 ` Marek Szyprowski
From: Petr Tesarik @ 2025-07-10 8:38 UTC (permalink / raw)
To: Marek Szyprowski, Robin Murphy
Cc: open list:DMA MAPPING HELPERS, linux-kernel, Petr Tesarik
Convert a goto-based loop to a while() loop. To allow the simplification,
return early when allocation from CMA is successful. As a bonus, this early
return avoids a repeated dma_coherent_ok() check.
No functional change.
Signed-off-by: Petr Tesarik <ptesarik@suse.com>
---
kernel/dma/direct.c | 31 +++++++++++++------------------
1 file changed, 13 insertions(+), 18 deletions(-)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 24c359d9c879..302e89580972 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -120,7 +120,7 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp, bool allow_highmem)
 {
 	int node = dev_to_node(dev);
-	struct page *page = NULL;
+	struct page *page;
 	u64 phys_limit;
 
 	WARN_ON_ONCE(!PAGE_ALIGNED(size));
@@ -131,30 +131,25 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	gfp |= dma_direct_optimal_gfp_mask(dev, &phys_limit);
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page) {
-		if (!dma_coherent_ok(dev, page_to_phys(page), size) ||
-		    (!allow_highmem && PageHighMem(page))) {
-			dma_free_contiguous(dev, page, size);
-			page = NULL;
-		}
+		if (dma_coherent_ok(dev, page_to_phys(page), size) &&
+		    (allow_highmem || !PageHighMem(page)))
+			return page;
+
+		dma_free_contiguous(dev, page, size);
 	}
-again:
-	if (!page)
-		page = alloc_pages_node(node, gfp, get_order(size));
-	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+
+	while ((page = alloc_pages_node(node, gfp, get_order(size)))
+	       && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		__free_pages(page, get_order(size));
-		page = NULL;
 
 		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
 		    phys_limit < DMA_BIT_MASK(64) &&
-		    !(gfp & (GFP_DMA32 | GFP_DMA))) {
+		    !(gfp & (GFP_DMA32 | GFP_DMA)))
 			gfp |= GFP_DMA32;
-			goto again;
-		}
-
-		if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & GFP_DMA)) {
+		else if (IS_ENABLED(CONFIG_ZONE_DMA) && !(gfp & GFP_DMA))
 			gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
-			goto again;
-		}
+		else
+			return NULL;
 	}
 
 	return page;
--
2.49.0
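
For readers who want to try the control flow outside the kernel, below is a minimal userspace sketch of the retry loop introduced by the patch. The struct page, GFP flags and allocator here are simplified, hypothetical stand-ins, not the real kernel interfaces; only the shape of the new while() loop is mirrored.

/*
 * Standalone sketch of the post-patch retry logic. All names below are
 * stand-ins for the kernel APIs; only the loop structure is the same.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <stdint.h>

#define GFP_DMA32	(1u << 0)	/* stand-in flags, not the kernel's */
#define GFP_DMA		(1u << 1)

struct page { uint64_t phys; };

/* Pretend allocator: more restrictive zone flags yield lower addresses. */
static struct page *alloc_pages_stub(unsigned int gfp)
{
	struct page *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	if (gfp & GFP_DMA)
		p->phys = 0x00fff000ULL;	/* below 16 MiB */
	else if (gfp & GFP_DMA32)
		p->phys = 0xbffff000ULL;	/* below 4 GiB */
	else
		p->phys = 0x1bffff000ULL;	/* above 4 GiB */
	return p;
}

static bool coherent_ok(uint64_t phys, uint64_t phys_limit)
{
	return phys <= phys_limit;
}

/* Mirrors the new while() loop: retry with a narrower zone, else give up. */
static struct page *alloc_with_retry(unsigned int gfp, uint64_t phys_limit)
{
	struct page *page;

	while ((page = alloc_pages_stub(gfp))
	       && !coherent_ok(page->phys, phys_limit)) {
		free(page);
		if (phys_limit < UINT64_MAX && !(gfp & (GFP_DMA32 | GFP_DMA)))
			gfp |= GFP_DMA32;
		else if (!(gfp & GFP_DMA))
			gfp = (gfp & ~GFP_DMA32) | GFP_DMA;
		else
			return NULL;
	}
	return page;
}

int main(void)
{
	/* A 32-bit-limited device: the first try lands too high, the retry fits. */
	struct page *p = alloc_with_retry(0, 0xffffffffULL);

	if (p)
		printf("got page at phys 0x%llx\n", (unsigned long long)p->phys);
	free(p);
	return 0;
}

As in the patch, the loop body only runs when an allocation landed outside the device's addressable range, so the common case completes in a single pass.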
* Re: [PATCH] dma-direct: clean up the logic in __dma_direct_alloc_pages()
2025-07-10 8:38 ` [PATCH] dma-direct: clean up the logic in __dma_direct_alloc_pages() Petr Tesarik
@ 2025-08-11 11:29 ` Marek Szyprowski
From: Marek Szyprowski @ 2025-08-11 11:29 UTC (permalink / raw)
To: Petr Tesarik, Robin Murphy; +Cc: open list:DMA MAPPING HELPERS, linux-kernel
On 10.07.2025 10:38, Petr Tesarik wrote:
> Convert a goto-based loop to a while() loop. To allow the simplification,
> return early when allocation from CMA is successful. As a bonus, this early
> return avoids a repeated dma_coherent_ok() check.
>
> No functional change.
>
> Signed-off-by: Petr Tesarik <ptesarik@suse.com>
Thanks, applied to dma-mapping-for-next branch.
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland