From: "Darrick J. Wong" <djwong@kernel.org>
To: Brian Foster <bfoster@redhat.com>
Cc: Joanne Koong <joannelkoong@gmail.com>,
fstests@vger.kernel.org, linux-fsdevel@vger.kernel.org,
nirjhar@linux.ibm.com, zlang@redhat.com, kernel-team@meta.com
Subject: Re: [PATCH v3 1/2] fsx: support reads/writes from buffers backed by hugepages
Date: Fri, 17 Jan 2025 09:43:45 -0800 [thread overview]
Message-ID: <20250117174345.GI3557695@frogsfrogsfrogs> (raw)
In-Reply-To: <Z4pavNG_GKxPSRBy@bfoster>
On Fri, Jan 17, 2025 at 08:27:24AM -0500, Brian Foster wrote:
> On Thu, Jan 16, 2025 at 05:26:31PM -0800, Joanne Koong wrote:
> > On Thu, Jan 16, 2025 at 4:51 AM Brian Foster <bfoster@redhat.com> wrote:
> > >
> > > On Wed, Jan 15, 2025 at 04:59:19PM -0800, Darrick J. Wong wrote:
> > > > On Wed, Jan 15, 2025 at 04:47:30PM -0800, Joanne Koong wrote:
> > > > > On Wed, Jan 15, 2025 at 1:37 PM Darrick J. Wong <djwong@kernel.org> wrote:
> > > > > >
> > > > > > On Wed, Jan 15, 2025 at 10:31:06AM -0800, Joanne Koong wrote:
> > > > > > > Add support for reads/writes from buffers backed by hugepages.
> > > > > > > This can be enabled through the '-h' flag. This flag should only be used
> > > > > > > on systems where THP capabilities are enabled.
> > > > > > >
> > > > > > > This is motivated by a recent bug that was due to faulty handling of
> > > > > > > userspace buffers backed by hugepages. This patch is a mitigation
> > > > > > > against problems like this in the future.
> > > > > > >
> > > > > > > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > > > > > > Reviewed-by: Brian Foster <bfoster@redhat.com>
> > > > > > > ---
> > > > > > > ltp/fsx.c | 119 +++++++++++++++++++++++++++++++++++++++++++++++++-----
> > > > > > > 1 file changed, 108 insertions(+), 11 deletions(-)
> > > > > > >
> > > > > > > diff --git a/ltp/fsx.c b/ltp/fsx.c
> > > > > > > index 41933354..8d3a2e2c 100644
> > > > > > > --- a/ltp/fsx.c
> > > > > > > +++ b/ltp/fsx.c
> > > > > > > @@ -190,6 +190,7 @@ int o_direct; /* -Z */
> > > > > > > int aio = 0;
> > > > > > > int uring = 0;
> > > > > > > int mark_nr = 0;
> > > > > > > +int hugepages = 0; /* -h flag */
> > > > > > >
> > > > > > > int page_size;
> > > > > > > int page_mask;
> > > > > > > @@ -2471,7 +2472,7 @@ void
> > > > > > > usage(void)
> > > > > > > {
> > > > > > > fprintf(stdout, "usage: %s",
> > > > > > > - "fsx [-dfknqxyzBEFHIJKLORWXZ0]\n\
> > > > > > > + "fsx [-dfhknqxyzBEFHIJKLORWXZ0]\n\
> > > > > > > [-b opnum] [-c Prob] [-g filldata] [-i logdev] [-j logid]\n\
> > > > > > > [-l flen] [-m start:end] [-o oplen] [-p progressinterval]\n\
> > > > > > > [-r readbdy] [-s style] [-t truncbdy] [-w writebdy]\n\
> > > > > > > @@ -2484,6 +2485,7 @@ usage(void)
> > > > > > > -e: pollute post-eof on size changes (default 0)\n\
> > > > > > > -f: flush and invalidate cache after I/O\n\
> > > > > > > -g X: write character X instead of random generated data\n\
> > > > > > > + -h hugepages: use buffers backed by hugepages for reads/writes\n\
> > > > > >
> > > > > > If this requires MADV_COLLAPSE, then perhaps the help text shouldn't
> > > > > > describe the switch if the support wasn't compiled in?
> > > > > >
> > > > > > e.g.
> > > > > >
> > > > > > -g X: write character X instead of random generated data\n"
> > > > > > #ifdef MADV_COLLAPSE
> > > > > > " -h hugepages: use buffers backed by hugepages for reads/writes\n"
> > > > > > #endif
> > > > > > " -i logdev: do integrity testing, logdev is the dm log writes device\n\
> > > > > >
> > > > > > (assuming I got the preprocessor and string construction goo right; I
> > > > > > might be a few cards short of a deck due to zombie attack earlier)
> > > > >
> > > > > Sounds great, I'll #ifdef out the help text -h line. Hope you feel better.
> > > > > >
> > > > > > > -i logdev: do integrity testing, logdev is the dm log writes device\n\
> > > > > > > > >  -j logid: prefix debug log messages with this id\n\
> > > > > > > -k: do not truncate existing file and use its size as upper bound on file size\n\
> > > > > [...]
> > > > > > > +}
> > > > > > > +
> > > > > > > +#ifdef MADV_COLLAPSE
> > > > > > > +static void *
> > > > > > > +init_hugepages_buf(unsigned len, int hugepage_size, int alignment)
> > > > > > > +{
> > > > > > > + void *buf;
> > > > > > > + long buf_size = roundup(len, hugepage_size) + alignment;
> > > > > > > +
> > > > > > > + if (posix_memalign(&buf, hugepage_size, buf_size)) {
> > > > > > > + prterr("posix_memalign for buf");
> > > > > > > + return NULL;
> > > > > > > + }
> > > > > > > + memset(buf, '\0', buf_size);
> > > > > > > + if (madvise(buf, buf_size, MADV_COLLAPSE)) {
> > > > > >
> > > > > > If the fsx runs for a long period of time, will it be necessary to call
> > > > > > MADV_COLLAPSE periodically to ensure that reclaim doesn't break up the
> > > > > > hugepage?
> > > > > >
> > > > >
> > > > > imo, I don't think so. My understanding is that this would be a rare
> > > > > edge case that happens when the system is constrained on memory, in
> > > > > which case subsequent calls to MADV_COLLAPSE would most likely fail
> > > > > anyways.
> > > >
> > > > Hrmmm... well I /do/ like to run memory constrained VMs to prod reclaim
> > > > into stressing the filesystem more. But I guess there's no good way for
> > > > fsx to know that something happened to it. Unless there's some even
> > > > goofier way to force a hugepage, like shmem/hugetlbfs (ugh!) :)
> > > >
> > > > Will have to ponder hugepage renewals -- maybe we should madvise every
> > > > few thousand fsxops just to be careful?
> > > >
> > >
> > > I wonder.. is there test value in doing collapses to the target file as
> > > well, either as a standalone map/madvise command or a random thing
> > > hitched onto preexisting commands? If so, I could see how something like
> > > that could potentially lift the current init time only approach into
> > > something that occurs with frequency, which then could at the same time
> > > (again maybe randomly) reinvoke for internal buffers as well.
> >
> > My understanding is that if a filesystem has support enabled for large
> > folios, then doing large writes/reads (which I believe is currently
> > supported in fsx via the -o flag) will already automatically test the
> > functionality of how the filesystem handles hugepages. I don't think
> > this would be different from what doing a collapse on the target file
> > would do.
> >
>
> Ah, that is a good point. So maybe not that useful to have something
> that would hook into writes. OTOH, fsx does a lot of random ops in the
> general case. I wonder how likely it is to sustain large folios in a
> typical long running test and whether explicit madvise calls thrown into
> the mix would make any difference at all.
>
> I suppose there may also be an argument that doing collapses provides
> more test coverage than purely doing larger folio allocations at write
> time..? I don't know the code well enough to say whether there is any
> value there. FWIW, what I think is more interesting from the fsx side is
> the oddball sequences of operations that it can create to uncover
> similarly odd problems. IOW, in theory, if we had a randomish "collapse
> target range before next operation," would that effectively provide more
> coverage of how the various supported ops interact with large folios
> than current behavior does?
>
> But anyways, this is all nebulous and strikes me more as maybe something
> interesting to play with as a potential future enhancement more than
> anything. BTW, is there any good way to measure use of large folios in
> general and/or on a particular file? I.e., collapse/split stats or some
> such thing..? Thanks.
I only know of two -- hooking the mm_filemap_add_to_page_cache
tracepoint, and running MADV_COLLAPSE to see if it returns an errno.
--D
> Brian
>
> >
> > Thanks,
> > Joanne
> >
> > >
> > > All that said, this is new functionality and IIUC provides functional
> > > test coverage for a valid issue. IMO, it would be nice to get this
> > > merged as a baseline feature and explore these sort of enhancements as
> > > followon work. Just my .02.
> > >
> > > Brian
> > >
> > > > --D
> > > >
> > > > >
> > > > > Thanks,
> > > > > Joanne
> > > > >
> > > > > > > + prterr("madvise collapse for buf");
> > > > > > > + free(buf);
> > > > > > > + return NULL;
> > > > > > > + }
> > > > > > > +
> > > > > > > + return buf;
> > > > > > > +}
> > > > > > > +#else
> > > > > > > +static void *
> > > > > > > +init_hugepages_buf(unsigned len, int hugepage_size, int alignment)
> > > > > > > +{
> > > > > > > + return NULL;
> > > > > > > +}
> > > > > > > +#endif
> > > > > > > +
> > > > > > > +static void
> > > > > > > +init_buffers(void)
> > > > > > > +{
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + original_buf = (char *) malloc(maxfilelen);
> > > > > > > + for (i = 0; i < maxfilelen; i++)
> > > > > > > + original_buf[i] = random() % 256;
> > > > > > > + if (hugepages) {
> > > > > > > + long hugepage_size = get_hugepage_size();
> > > > > > > + if (hugepage_size == -1) {
> > > > > > > + prterr("get_hugepage_size()");
> > > > > > > + exit(102);
> > > > > > > + }
> > > > > > > + good_buf = init_hugepages_buf(maxfilelen, hugepage_size, writebdy);
> > > > > > > + if (!good_buf) {
> > > > > > > + prterr("init_hugepages_buf failed for good_buf");
> > > > > > > + exit(103);
> > > > > > > + }
> > > > > > > +
> > > > > > > + temp_buf = init_hugepages_buf(maxoplen, hugepage_size, readbdy);
> > > > > > > + if (!temp_buf) {
> > > > > > > + prterr("init_hugepages_buf failed for temp_buf");
> > > > > > > + exit(103);
> > > > > > > + }
> > > > > > > + } else {
> > > > > > > + unsigned long good_buf_len = maxfilelen + writebdy;
> > > > > > > + unsigned long temp_buf_len = maxoplen + readbdy;
> > > > > > > +
> > > > > > > + good_buf = calloc(1, good_buf_len);
> > > > > > > + temp_buf = calloc(1, temp_buf_len);
> > > > > > > + }
> > > > > > > + good_buf = round_ptr_up(good_buf, writebdy, 0);
> > > > > > > + temp_buf = round_ptr_up(temp_buf, readbdy, 0);
> > > > > > > +}
> > > > > > > +
> > > > > > > static struct option longopts[] = {
> > > > > > > {"replay-ops", required_argument, 0, 256},
> > > > > > > {"record-ops", optional_argument, 0, 255},
> > > > > > > @@ -2883,7 +2980,7 @@ main(int argc, char **argv)
> > > > > > > setvbuf(stdout, (char *)0, _IOLBF, 0); /* line buffered stdout */
> > > > > > >
> > > > > > > while ((ch = getopt_long(argc, argv,
> > > > > > > - "0b:c:de:fg:i:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> > > > > > > + "0b:c:de:fg:hi:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> > > > > > > longopts, NULL)) != EOF)
> > > > > > > switch (ch) {
> > > > > > > case 'b':
> > > > > > > @@ -2916,6 +3013,14 @@ main(int argc, char **argv)
> > > > > > > case 'g':
> > > > > > > filldata = *optarg;
> > > > > > > break;
> > > > > > > + case 'h':
> > > > > > > + #ifndef MADV_COLLAPSE
> > > > > >
> > > > > > Preprocessor directives should start at column 0, like most of the rest
> > > > > > of fstests.
> > > > > >
> > > > > > --D
> > > > > >
> > > >
> > >
> >
>
>
Thread overview: 15+ messages
2025-01-15 18:31 [PATCH v3 0/2] fstests: test reads/writes from hugepages-backed buffers Joanne Koong
2025-01-15 18:31 ` [PATCH v3 1/2] fsx: support reads/writes from buffers backed by hugepages Joanne Koong
2025-01-15 21:37 ` Darrick J. Wong
2025-01-16 0:47 ` Joanne Koong
2025-01-16 0:59 ` Darrick J. Wong
2025-01-16 12:53 ` Brian Foster
2025-01-17 1:26 ` Joanne Koong
2025-01-17 13:27 ` Brian Foster
2025-01-17 17:43 ` Darrick J. Wong [this message]
2025-01-17 21:18 ` Joanne Koong
2025-01-17 1:03 ` Joanne Koong
2025-01-17 1:57 ` Darrick J. Wong
2025-01-17 21:48 ` Joanne Koong
2025-01-15 18:31 ` [PATCH v3 2/2] generic: add tests for read/writes from hugepages-backed buffers Joanne Koong
2025-01-15 21:40 ` Darrick J. Wong