From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 2/3] xfs: add kmem_alloc_io()
Date: Thu, 22 Aug 2019 07:14:52 +1000 [thread overview]
Message-ID: <20190821211452.GN1119@dread.disaster.area> (raw)
In-Reply-To: <20190821133533.GB19646@bfoster>
On Wed, Aug 21, 2019 at 09:35:33AM -0400, Brian Foster wrote:
> On Wed, Aug 21, 2019 at 06:38:19PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> >
> > Memory we use to submit for IO needs strict alignment to the
> > underlying driver constraints. Worst case, this is 512 bytes. Given
> > that all allocations for IO are always a power of 2 multiple of 512
> > bytes, the kernel heap provides natural alignment for objects of
> > these sizes and that suffices.
> >
> > Until, of course, memory debugging of some kind is turned on (e.g.
> > red zones, poisoning, KASAN) and then the alignment of the heap
> > objects is thrown out the window. Then we get weird IO errors and
> > data corruption problems because drivers don't validate alignment
> > and do the wrong thing when passed unaligned memory buffers in bios.
> >
> > TO fix this, introduce kmem_alloc_io(), which will guarantee at least
>
> s/TO/To/
>
> > 512 byte alignment of buffers for IO, even if memory debugging
> > options are turned on. It is assumed that the minimum allocation
> > size will be 512 bytes, and that sizes will be power of 2 multiples
> > of 512 bytes.
> >
> > Use this everywhere we allocate buffers for IO.
> >
> > This no longer fails with log recovery errors when KASAN is enabled
> > due to the brd driver not handling unaligned memory buffers:
> >
> > # mkfs.xfs -f /dev/ram0 ; mount /dev/ram0 /mnt/test
> >
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > ---
> > fs/xfs/kmem.c | 61 +++++++++++++++++++++++++++++-----------
> > fs/xfs/kmem.h | 1 +
> > fs/xfs/xfs_buf.c | 4 +--
> > fs/xfs/xfs_log.c | 2 +-
> > fs/xfs/xfs_log_recover.c | 2 +-
> > fs/xfs/xfs_trace.h | 1 +
> > 6 files changed, 50 insertions(+), 21 deletions(-)
> >
> > diff --git a/fs/xfs/kmem.c b/fs/xfs/kmem.c
> > index edcf393c8fd9..ec693c0fdcff 100644
> > --- a/fs/xfs/kmem.c
> > +++ b/fs/xfs/kmem.c
> ...
> > @@ -62,6 +56,39 @@ kmem_alloc_large(size_t size, xfs_km_flags_t flags)
> > return ptr;
> > }
> >
> > +/*
> > + * Same as kmem_alloc_large, except we guarantee a 512 byte aligned buffer is
> > + * returned. vmalloc always returns an aligned region.
> > + */
> > +void *
> > +kmem_alloc_io(size_t size, xfs_km_flags_t flags)
> > +{
> > + void *ptr;
> > +
> > + trace_kmem_alloc_io(size, flags, _RET_IP_);
> > +
> > + ptr = kmem_alloc(size, flags | KM_MAYFAIL);
> > + if (ptr) {
> > + if (!((long)ptr & 511))
> > + return ptr;
> > + kfree(ptr);
> > + }
> > + return __kmem_vmalloc(size, flags);
> > +}
>
> Even though it is unfortunate, this seems like a quite reasonable and
> isolated temporary solution to the problem to me. The one concern I have
> is if/how much this could affect performance under certain
> circumstances.
Can't measure a difference on 4k block size filesystems. It's only
used for log recovery and then for allocation AGF/AGI buffers on
512 byte sector devices. Anything using 4k sectors only hits it
during mount. So for default configs with memory poisoning/KASAN
enabled, the massive overhead of poisoning/tracking makes this
disappear in the noise.
For 1k block size filesystems, it gets hit much harder, but
there's no noticeable increase in runtime of xfstests vs 4k block
size with KASAN enabled. The big increase in overhead comes from
enabling KASAN (takes 3x longer than without), not doing one extra
allocation/free pair.
> I realize that these callsites are isolated in the common
> scenario. Less common scenarios like sub-page block sizes (whether due
> to explicit mkfs time format or default configurations on larger page
> size systems) can fall into this path much more frequently, however.
*nod*
> Since this implies some kind of vm debug option is enabled, performance
> itself isn't critical when this solution is active. But how bad is it in
> those cases where we might depend on this more heavily? Have you
> confirmed that the end configuration is still "usable," at least?
No noticeable difference, most definitely still usable.
> I ask because the repeated alloc/free behavior can easily be avoided via
> something like an mp flag (which may require a tweak to the
What's an "mp flag"?
> kmem_alloc_io() interface) to skip further kmem_alloc() calls from this
> path once we see one unaligned allocation. That assumes this behavior is
> tied to functionality that isn't dynamically configured at runtime, of
> course.
vmalloc() has a _lot_ more overhead than kmalloc (especially when
vmalloc has to do multiple allocations itself to set up page table
entries) so there is still an overall gain to be had by trying
kmalloc even if 50% of allocations are unaligned.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com