public inbox for linux-media@vger.kernel.org
 help / color / mirror / Atom feed
From: "Christian König" <christian.koenig@amd.com>
To: Linus Walleij <linusw@kernel.org>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Benjamin Gaignard <benjamin.gaignard@collabora.com>,
	Brian Starkey <Brian.Starkey@arm.com>,
	John Stultz <jstultz@google.com>,
	"T.J. Mercier" <tjmercier@google.com>
Cc: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v2 2/2] dma-buf: heaps: Clear CMA highpages using helper
Date: Tue, 10 Mar 2026 09:55:02 +0100	[thread overview]
Message-ID: <c9271f37-e66e-45f6-8c81-1c9686ff53d4@amd.com> (raw)
In-Reply-To: <20260310-cma-heap-clear-pages-v2-2-ecbbed3d7e6d@kernel.org>

On 3/10/26 09:53, Linus Walleij wrote:
> Currently the CMA allocator clears highmem pages using
> kmap_local_page()->clear_page()->kunmap_local(), but there is a
> static inline helper in <linux/highmem.h> that does the same for
> us, so use clear_highpage() instead of open coding this.
> 
> Suggested-by: T.J. Mercier <tjmercier@google.com>
> Signed-off-by: Linus Walleij <linusw@kernel.org>

Ah yes, somebody pointed that out to me before but I never found time to write a patch to clean it up.

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>  drivers/dma-buf/heaps/cma_heap.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index f0bacf25ed9d..92865786cfc9 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -329,10 +329,7 @@ static struct dma_buf *cma_heap_allocate(struct dma_heap *heap,
>                 struct page *page = cma_pages;
> 
>                 while (nr_clear_pages > 0) {
> -                       void *vaddr = kmap_local_page(page);
> -
> -                       clear_page(vaddr);
> -                       kunmap_local(vaddr);
> +                       clear_highpage(page);
>                         /*
>                          * Avoid wasting time zeroing memory if the process
>                          * has been killed by SIGKILL.
> 
> --
> 2.53.0
> 


  reply	other threads:[~2026-03-10  8:55 UTC|newest]

Thread overview: 8+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-03-10  8:53 [PATCH v2 0/2] dma-buf: heaps: Use page clearing helpers Linus Walleij
2026-03-10  8:53 ` [PATCH v2 1/2] dma-buf: heaps: Clear CMA pages with clear_pages() Linus Walleij
2026-03-10 15:29   ` T.J. Mercier
2026-03-10  8:53 ` [PATCH v2 2/2] dma-buf: heaps: Clear CMA highpages using helper Linus Walleij
2026-03-10  8:55   ` Christian König [this message]
2026-03-10 15:29   ` T.J. Mercier
2026-03-10 14:30 ` [PATCH v2 0/2] dma-buf: heaps: Use page clearing helpers Maxime Ripard
2026-03-11  9:19 ` Linus Walleij

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=c9271f37-e66e-45f6-8c81-1c9686ff53d4@amd.com \
    --to=christian.koenig@amd.com \
    --cc=Brian.Starkey@arm.com \
    --cc=benjamin.gaignard@collabora.com \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=jstultz@google.com \
    --cc=linaro-mm-sig@lists.linaro.org \
    --cc=linusw@kernel.org \
    --cc=linux-media@vger.kernel.org \
    --cc=sumit.semwal@linaro.org \
    --cc=tjmercier@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox