From: Catalin Marinas <catalin.marinas@arm.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: Petr Tesarik <ptesarik@suse.com>,
	Feng Tang <feng.tang@linux.alibaba.com>,
	Harry Yoo <harry.yoo@oracle.com>, Peng Fan <peng.fan@nxp.com>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	David Rientjes <rientjes@google.com>,
	Christoph Lameter <cl@linux.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: slub - extended kmalloc redzone and dma alignment
Date: Wed, 9 Apr 2025 15:30:16 +0100
Message-ID: <Z_aEeL9vHFUDB0G2@arm.com>
In-Reply-To: <53cc9e92-8a57-4989-af4e-26ced40de91c@suse.cz>

On Wed, Apr 09, 2025 at 02:22:10PM +0200, Vlastimil Babka wrote:
> On 4/9/25 1:11 PM, Catalin Marinas wrote:
> > On Wed, Apr 09, 2025 at 10:51:43AM +0200, Vlastimil Babka wrote:
> >> On 4/8/25 5:07 PM, Catalin Marinas wrote:
> >>> Assuming I got kmalloc redzoning right, I think there's still a
> >>> potential issue. Let's say we have a system with 128-byte DMA alignment
> >>> required (the largest cache line size). We do a kmalloc(104) and
> >>> kmalloc_size_roundup() returns 128, so all seems good to the DMA code.
> >>> However, kmalloc() redzones from 104 to 128 as it tracks the original
> >>> size. The DMA bouncing doesn't spot it since the
> >>> kmalloc_size_roundup(104) is aligned to 128.
> >>
> >> Note that kmalloc_size_roundup() is supposed to be used *before*
> >> kmalloc(), such as dma_resv_list_alloc() does. Then there's no issue,
> >> as no redzoning would be done between 104 and 128; there would only be
> >> the additional redzone at 128+.
> > 
> > Yes, if people use it this way. devm_kmalloc() via alloc_dr() also seems
> > to be handling this. However, given the original report, I assume there
> 
> We can probably ignore my original private discussion as motivation, as
> it wasn't confirmed (and I'm not sure it will be) that it was really a
> case involving DMA alignment. It was just something I thought might be a
> possible explanation and wanted to double-check with people more
> knowledgeable.
> 
> Unless by the original report you mean 120ee599b5bf ("staging:
> octeon-usb: prevent memory corruption"), which Feng mentioned.

I was referring to your private discussion. IIUC the one Feng mentioned
was about the SLOB allocator, which I recall did not guarantee natural
alignment for power-of-two allocations.

> > are drivers that have a problem with redzoning at the end of the buffer.
> 
> So I'm not aware of any issues reported due to the extended redzoning.

Thanks for confirming. I guess we can ignore this potential issue, then,
as long as drivers take care of the alignment or use devm_kmalloc().
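
To illustrate, a minimal sketch of the kmalloc_size_roundup()-before-
kmalloc() pattern discussed above (example_alloc_dma_safe() is a
hypothetical helper, not an in-tree function):

	#include <linux/slab.h>

	/*
	 * Hypothetical helper: round the size up *before* allocating, so
	 * the extended redzone only starts past the rounded size and the
	 * whole DMA-mapped area stays safe for the device to write.
	 */
	static void *example_alloc_dma_safe(size_t len, gfp_t gfp)
	{
		size_t full = kmalloc_size_roundup(len);

		return kmalloc(full, gfp);
	}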

> > I did a quick test with kmem_cache_create() of 104 bytes with
> > SLAB_HWCACHE_ALIGN (64 bytes) and it has a similar problem with the
> > redzone from byte 104 onwards. Here we don't have the equivalent of
> > kmalloc_size_roundup() that a driver can use.
> 
> AFAIK SLAB_HWCACHE_ALIGN exists for performance reasons, not to provide
> DMA guarantees like kmalloc() does. So I'd say users of kmem_cache_create()
> would have to do their own rounding - you mentioned
> dma_get_cache_alignment()? And there's an align parameter too when
> creating caches.

I just checked, and the align parameter only aligns the start of the
buffer; the start of the redzone is not aligned.
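
So a driver that needs the redzone to begin on a cache-line boundary
would have to round the object size itself. A minimal sketch, assuming a
hypothetical cache (the name and the 104-byte size are illustrative):

	#include <linux/slab.h>
	#include <linux/dma-mapping.h>

	/*
	 * Round the object size up to the DMA cache alignment so that,
	 * with debugging enabled, the redzone begins on a cache-line
	 * boundary instead of at byte 104.
	 */
	static struct kmem_cache *example_create_dma_cache(void)
	{
		unsigned int align = dma_get_cache_alignment();

		return kmem_cache_create("example-dma", ALIGN(104, align),
					 align, 0, NULL);
	}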

Anyway, as in the other subthread with Petr, I think most architectures
would benefit from an update to the DMA cache maintenance to avoid
corrupting the redzone.

-- 
Catalin


Thread overview: 24+ messages
2025-04-04  9:30 slub - extended kmalloc redzone and dma alignment Vlastimil Babka
2025-04-04 10:30 ` Harry Yoo
2025-04-04 11:12   ` Petr Tesarik
2025-04-04 12:45     ` Vlastimil Babka
2025-04-04 13:53       ` Petr Tesarik
2025-04-06 14:02         ` Feng Tang
2025-04-07  7:21           ` Feng Tang
2025-04-07  7:54             ` Vlastimil Babka
2025-04-07  9:50               ` Petr Tesarik
2025-04-07 17:12               ` Catalin Marinas
2025-04-08  5:27                 ` Petr Tesarik
2025-04-08 15:07                   ` Catalin Marinas
2025-04-09  8:39                     ` Petr Tesarik
2025-04-09  9:05                       ` Petr Tesarik
2025-04-09  9:47                         ` Catalin Marinas
2025-04-09 12:18                           ` Petr Tesarik
2025-04-09 12:49                             ` Catalin Marinas
2025-04-09 13:41                               ` Petr Tesarik
2025-04-09  8:51                     ` Vlastimil Babka
2025-04-09 11:11                       ` Catalin Marinas
2025-04-09 12:22                         ` Vlastimil Babka
2025-04-09 14:30                           ` Catalin Marinas [this message]
2025-04-10  1:54                             ` Feng Tang
2025-04-07  7:45         ` Vlastimil Babka
