* [PATCH v4 0/2] fstests: test reads/writes from hugepages-backed buffers
@ 2025-01-18 0:47 Joanne Koong
2025-01-18 0:47 ` [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages Joanne Koong
2025-01-18 0:47 ` [PATCH v4 2/2] generic: add tests for read/writes from hugepages-backed buffers Joanne Koong
0 siblings, 2 replies; 5+ messages in thread
From: Joanne Koong @ 2025-01-18 0:47 UTC (permalink / raw)
To: fstests; +Cc: linux-fsdevel, bfoster, djwong, nirjhar, zlang, kernel-team
There was a recent bug in rc1 [1] that was due to faulty handling of
userspace buffers backed by hugepages.
This patchset adds generic tests for reads/writes from buffers backed by
hugepages.
[1] https://lore.kernel.org/linux-fsdevel/p3iss6hssbvtdutnwmuddvdadubrhfkdoosgmbewvo674f7f3y@cwnwffjqltzw/
Changelog:
v3: https://lore.kernel.org/linux-fsdevel/20250115183107.3124743-1-joannelkoong@gmail.com/
- Gate '-h' help text behind MADV_COLLAPSE (Darrick)
- Add periodic hugepages collapsing for memory constrained environments (Darrick)
- Use tabs instead of spaces (Darrick)
- Refactor #ifdef MADV_COLLAPSE placements
v2: https://lore.kernel.org/linux-fsdevel/20241227193311.1799626-1-joannelkoong@gmail.com/
- Gate -h and MADV_COLLAPSE usage to keep it compatible with old systems (Zorro)
- Use calloc instead of malloc/memset (Nirjhar)
- Update exit codes in 1st patch
- Add Brian's reviewed-bys
v1: https://lore.kernel.org/linux-fsdevel/20241218210122.3809198-1-joannelkoong@gmail.com/
- Refactor out buffer initialization (Brian)
- Update commit messages of 1st patch (Brian)
- Use << 10 instead of * 1024 (Nirjhar)
- Replace CONFIG_TRANSPARENT_HUGEPAGE check with checking
'/sys/kernel/mm/transparent_hugepage/' (Darrick)
- Integrate readbdy and writebdy options
- Update options of generic/759 to include psize/bsize
Joanne Koong (2):
fsx: support reads/writes from buffers backed by hugepages
generic: add tests for read/writes from hugepages-backed buffers
common/rc | 13 ++++
ltp/fsx.c | 165 ++++++++++++++++++++++++++++++++++++++----
tests/generic/758 | 22 ++++++
tests/generic/758.out | 4 +
tests/generic/759 | 26 +++++++
tests/generic/759.out | 4 +
6 files changed, 221 insertions(+), 13 deletions(-)
create mode 100755 tests/generic/758
create mode 100644 tests/generic/758.out
create mode 100755 tests/generic/759
create mode 100644 tests/generic/759.out
--
2.47.1
^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages
2025-01-18 0:47 [PATCH v4 0/2] fstests: test reads/writes from hugepages-backed buffers Joanne Koong
@ 2025-01-18 0:47 ` Joanne Koong
2025-01-18 20:17 ` Darrick J. Wong
2025-01-18 0:47 ` [PATCH v4 2/2] generic: add tests for read/writes from hugepages-backed buffers Joanne Koong
1 sibling, 1 reply; 5+ messages in thread
From: Joanne Koong @ 2025-01-18 0:47 UTC (permalink / raw)
To: fstests; +Cc: linux-fsdevel, bfoster, djwong, nirjhar, zlang, kernel-team
Add support for reads/writes from buffers backed by hugepages.
This can be enabled through the '-h' flag. This flag should only be used
on systems where THP capabilities are enabled.
This is motivated by a recent bug that was due to faulty handling of
userspace buffers backed by hugepages. This patch is a mitigation
against problems like this in the future.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
---
ltp/fsx.c | 165 +++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 152 insertions(+), 13 deletions(-)
diff --git a/ltp/fsx.c b/ltp/fsx.c
index 41933354..1513755f 100644
--- a/ltp/fsx.c
+++ b/ltp/fsx.c
@@ -190,6 +190,16 @@ int o_direct; /* -Z */
int aio = 0;
int uring = 0;
int mark_nr = 0;
+int hugepages = 0; /* -h flag */
+
+/* Stores info needed to periodically collapse hugepages */
+struct hugepages_collapse_info {
+ void *orig_good_buf;
+ long good_buf_size;
+ void *orig_temp_buf;
+ long temp_buf_size;
+};
+struct hugepages_collapse_info hugepages_info;
int page_size;
int page_mask;
@@ -2471,7 +2481,7 @@ void
usage(void)
{
fprintf(stdout, "usage: %s",
- "fsx [-dfknqxyzBEFHIJKLORWXZ0]\n\
+ "fsx [-dfhknqxyzBEFHIJKLORWXZ0]\n\
[-b opnum] [-c Prob] [-g filldata] [-i logdev] [-j logid]\n\
[-l flen] [-m start:end] [-o oplen] [-p progressinterval]\n\
[-r readbdy] [-s style] [-t truncbdy] [-w writebdy]\n\
@@ -2483,8 +2493,11 @@ usage(void)
-d: debug output for all operations\n\
-e: pollute post-eof on size changes (default 0)\n\
-f: flush and invalidate cache after I/O\n\
- -g X: write character X instead of random generated data\n\
- -i logdev: do integrity testing, logdev is the dm log writes device\n\
+ -g X: write character X instead of random generated data\n"
+#ifdef MADV_COLLAPSE
+" -h hugepages: use buffers backed by hugepages for reads/writes\n"
+#endif
+" -i logdev: do integrity testing, logdev is the dm log writes device\n\
-j logid: prefix debug log messsages with this id\n\
-k: do not truncate existing file and use its size as upper bound on file size\n\
-l flen: the upper bound on file size (default 262144)\n\
@@ -2833,11 +2846,40 @@ __test_fallocate(int mode, const char *mode_str)
#endif
}
+/*
+ * Reclaim may break up hugepages, so do a best-effort collapse every once in
+ * a while.
+ */
+static void
+collapse_hugepages(void)
+{
+#ifdef MADV_COLLAPSE
+ int interval = 1 << 14; /* 16k */
+ int ret;
+
+ if (numops && (numops & (interval - 1)) == 0) {
+ ret = madvise(hugepages_info.orig_good_buf,
+ hugepages_info.good_buf_size, MADV_COLLAPSE);
+ if (ret)
+ prt("collapsing hugepages for good_buf failed (numops=%llu): %s\n",
+ numops, strerror(errno));
+ ret = madvise(hugepages_info.orig_temp_buf,
+ hugepages_info.temp_buf_size, MADV_COLLAPSE);
+ if (ret)
+ prt("collapsing hugepages for temp_buf failed (numops=%llu): %s\n",
+ numops, strerror(errno));
+ }
+#endif
+}
+
bool
keep_running(void)
{
int ret;
+ if (hugepages)
+ collapse_hugepages();
+
if (deadline.tv_nsec) {
struct timespec now;
@@ -2856,6 +2898,103 @@ keep_running(void)
return numops-- != 0;
}
+static long
+get_hugepage_size(void)
+{
+ const char str[] = "Hugepagesize:";
+ size_t str_len = sizeof(str) - 1;
+ unsigned int hugepage_size = 0;
+ char buffer[64];
+ FILE *file;
+
+ file = fopen("/proc/meminfo", "r");
+ if (!file) {
+ prterr("get_hugepage_size: fopen /proc/meminfo");
+ return -1;
+ }
+ while (fgets(buffer, sizeof(buffer), file)) {
+ if (strncmp(buffer, str, str_len) == 0) {
+ sscanf(buffer + str_len, "%u", &hugepage_size);
+ break;
+ }
+ }
+ fclose(file);
+ if (!hugepage_size) {
+ prterr("get_hugepage_size: failed to find "
+ "hugepage size in /proc/meminfo\n");
+ return -1;
+ }
+
+ /* convert from KiB to bytes */
+ return hugepage_size << 10;
+}
+
+static void *
+init_hugepages_buf(unsigned len, int hugepage_size, int alignment, long *buf_size)
+{
+ void *buf = NULL;
+#ifdef MADV_COLLAPSE
+ int ret;
+ long size = roundup(len, hugepage_size) + alignment;
+
+ ret = posix_memalign(&buf, hugepage_size, size);
+ if (ret) {
+ prterr("posix_memalign for buf");
+ return NULL;
+ }
+ memset(buf, '\0', size);
+ ret = madvise(buf, size, MADV_COLLAPSE);
+ if (ret) {
+ prterr("madvise collapse for buf");
+ free(buf);
+ return NULL;
+ }
+
+ *buf_size = size;
+#endif
+ return buf;
+}
+
+static void
+init_buffers(void)
+{
+ int i;
+
+ original_buf = (char *) malloc(maxfilelen);
+ for (i = 0; i < maxfilelen; i++)
+ original_buf[i] = random() % 256;
+ if (hugepages) {
+ long hugepage_size = get_hugepage_size();
+ if (hugepage_size == -1) {
+ prterr("get_hugepage_size()");
+ exit(102);
+ }
+ good_buf = init_hugepages_buf(maxfilelen, hugepage_size, writebdy,
+ &hugepages_info.good_buf_size);
+ if (!good_buf) {
+ prterr("init_hugepages_buf failed for good_buf");
+ exit(103);
+ }
+ hugepages_info.orig_good_buf = good_buf;
+
+ temp_buf = init_hugepages_buf(maxoplen, hugepage_size, readbdy,
+ &hugepages_info.temp_buf_size);
+ if (!temp_buf) {
+ prterr("init_hugepages_buf failed for temp_buf");
+ exit(103);
+ }
+ hugepages_info.orig_temp_buf = temp_buf;
+ } else {
+ unsigned long good_buf_len = maxfilelen + writebdy;
+ unsigned long temp_buf_len = maxoplen + readbdy;
+
+ good_buf = calloc(1, good_buf_len);
+ temp_buf = calloc(1, temp_buf_len);
+ }
+ good_buf = round_ptr_up(good_buf, writebdy, 0);
+ temp_buf = round_ptr_up(temp_buf, readbdy, 0);
+}
+
static struct option longopts[] = {
{"replay-ops", required_argument, 0, 256},
{"record-ops", optional_argument, 0, 255},
@@ -2883,7 +3022,7 @@ main(int argc, char **argv)
setvbuf(stdout, (char *)0, _IOLBF, 0); /* line buffered stdout */
while ((ch = getopt_long(argc, argv,
- "0b:c:de:fg:i:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
+ "0b:c:de:fg:hi:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
longopts, NULL)) != EOF)
switch (ch) {
case 'b':
@@ -2916,6 +3055,14 @@ main(int argc, char **argv)
case 'g':
filldata = *optarg;
break;
+ case 'h':
+#ifndef MADV_COLLAPSE
+ fprintf(stderr, "MADV_COLLAPSE not supported. "
+ "Can't support -h\n");
+ exit(86);
+#endif
+ hugepages = 1;
+ break;
case 'i':
integrity = 1;
logdev = strdup(optarg);
@@ -3229,15 +3376,7 @@ main(int argc, char **argv)
exit(95);
}
}
- original_buf = (char *) malloc(maxfilelen);
- for (i = 0; i < maxfilelen; i++)
- original_buf[i] = random() % 256;
- good_buf = (char *) malloc(maxfilelen + writebdy);
- good_buf = round_ptr_up(good_buf, writebdy, 0);
- memset(good_buf, '\0', maxfilelen);
- temp_buf = (char *) malloc(maxoplen + readbdy);
- temp_buf = round_ptr_up(temp_buf, readbdy, 0);
- memset(temp_buf, '\0', maxoplen);
+ init_buffers();
if (lite) { /* zero entire existing file */
ssize_t written;
--
2.47.1
* [PATCH v4 2/2] generic: add tests for read/writes from hugepages-backed buffers
2025-01-18 0:47 [PATCH v4 0/2] fstests: test reads/writes from hugepages-backed buffers Joanne Koong
2025-01-18 0:47 ` [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages Joanne Koong
@ 2025-01-18 0:47 ` Joanne Koong
1 sibling, 0 replies; 5+ messages in thread
From: Joanne Koong @ 2025-01-18 0:47 UTC (permalink / raw)
To: fstests; +Cc: linux-fsdevel, bfoster, djwong, nirjhar, zlang, kernel-team
Add generic tests 758 and 759 for testing reads/writes from buffers
backed by hugepages.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
---
common/rc | 13 +++++++++++++
tests/generic/758 | 22 ++++++++++++++++++++++
tests/generic/758.out | 4 ++++
tests/generic/759 | 26 ++++++++++++++++++++++++++
tests/generic/759.out | 4 ++++
5 files changed, 69 insertions(+)
create mode 100755 tests/generic/758
create mode 100644 tests/generic/758.out
create mode 100755 tests/generic/759
create mode 100644 tests/generic/759.out
diff --git a/common/rc b/common/rc
index 1b2e4508..0c44d096 100644
--- a/common/rc
+++ b/common/rc
@@ -3016,6 +3016,19 @@ _require_xfs_io_command()
fi
}
+# check that the system supports transparent hugepages
+_require_thp()
+{
+ if [ ! -e /sys/kernel/mm/transparent_hugepage/enabled ]; then
+ _notrun "system doesn't support transparent hugepages"
+ fi
+
+ thp_status=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
+ if [[ $thp_status == *"[never]"* ]]; then
+ _notrun "system doesn't have transparent hugepages enabled"
+ fi
+}
+
# check that kernel and filesystem support direct I/O, and check if "$1" size
# aligned (optional) is supported
_require_odirect()
diff --git a/tests/generic/758 b/tests/generic/758
new file mode 100755
index 00000000..e7cd8cdc
--- /dev/null
+++ b/tests/generic/758
@@ -0,0 +1,22 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# FS QA Test No. 758
+#
+# fsx exercising reads/writes from userspace buffers
+# backed by hugepages
+#
+. ./common/preamble
+_begin_fstest rw auto quick
+
+. ./common/filter
+
+_require_test
+_require_thp
+
+run_fsx -N 10000 -l 500000 -h
+run_fsx -N 10000 -o 8192 -l 500000 -h
+run_fsx -N 10000 -o 128000 -l 500000 -h
+
+status=0
+exit
diff --git a/tests/generic/758.out b/tests/generic/758.out
new file mode 100644
index 00000000..af04bb14
--- /dev/null
+++ b/tests/generic/758.out
@@ -0,0 +1,4 @@
+QA output created by 758
+fsx -N 10000 -l 500000 -h
+fsx -N 10000 -o 8192 -l 500000 -h
+fsx -N 10000 -o 128000 -l 500000 -h
diff --git a/tests/generic/759 b/tests/generic/759
new file mode 100755
index 00000000..514e7603
--- /dev/null
+++ b/tests/generic/759
@@ -0,0 +1,26 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# FS QA Test No. 759
+#
+# fsx exercising direct IO reads/writes from userspace buffers
+# backed by hugepages
+#
+. ./common/preamble
+_begin_fstest rw auto quick
+
+. ./common/filter
+
+_require_test
+_require_odirect
+_require_thp
+
+psize=`$here/src/feature -s`
+bsize=`$here/src/min_dio_alignment $TEST_DIR $TEST_DEV`
+
+run_fsx -N 10000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
+run_fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
+run_fsx -N 10000 -o 128000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
+
+status=0
+exit
diff --git a/tests/generic/759.out b/tests/generic/759.out
new file mode 100644
index 00000000..86bb66ef
--- /dev/null
+++ b/tests/generic/759.out
@@ -0,0 +1,4 @@
+QA output created by 759
+fsx -N 10000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
+fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
+fsx -N 10000 -o 128000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -R -W -h
--
2.47.1
* Re: [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages
2025-01-18 0:47 ` [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages Joanne Koong
@ 2025-01-18 20:17 ` Darrick J. Wong
2025-01-21 21:50 ` Joanne Koong
0 siblings, 1 reply; 5+ messages in thread
From: Darrick J. Wong @ 2025-01-18 20:17 UTC (permalink / raw)
To: Joanne Koong; +Cc: fstests, linux-fsdevel, bfoster, nirjhar, zlang, kernel-team
On Fri, Jan 17, 2025 at 04:47:58PM -0800, Joanne Koong wrote:
> Add support for reads/writes from buffers backed by hugepages.
> This can be enabled through the '-h' flag. This flag should only be used
> on systems where THP capabilities are enabled.
>
> This is motivated by a recent bug that was due to faulty handling of
> userspace buffers backed by hugepages. This patch is a mitigation
> against problems like this in the future.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> Reviewed-by: Brian Foster <bfoster@redhat.com>
> ---
> ltp/fsx.c | 165 +++++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 152 insertions(+), 13 deletions(-)
>
> diff --git a/ltp/fsx.c b/ltp/fsx.c
> index 41933354..1513755f 100644
> --- a/ltp/fsx.c
> +++ b/ltp/fsx.c
> @@ -190,6 +190,16 @@ int o_direct; /* -Z */
> int aio = 0;
> int uring = 0;
> int mark_nr = 0;
> +int hugepages = 0; /* -h flag */
> +
> +/* Stores info needed to periodically collapse hugepages */
> +struct hugepages_collapse_info {
> + void *orig_good_buf;
> + long good_buf_size;
> + void *orig_temp_buf;
> + long temp_buf_size;
> +};
> +struct hugepages_collapse_info hugepages_info;
>
> int page_size;
> int page_mask;
> @@ -2471,7 +2481,7 @@ void
> usage(void)
> {
> fprintf(stdout, "usage: %s",
> - "fsx [-dfknqxyzBEFHIJKLORWXZ0]\n\
> + "fsx [-dfhknqxyzBEFHIJKLORWXZ0]\n\
> [-b opnum] [-c Prob] [-g filldata] [-i logdev] [-j logid]\n\
> [-l flen] [-m start:end] [-o oplen] [-p progressinterval]\n\
> [-r readbdy] [-s style] [-t truncbdy] [-w writebdy]\n\
> @@ -2483,8 +2493,11 @@ usage(void)
> -d: debug output for all operations\n\
> -e: pollute post-eof on size changes (default 0)\n\
> -f: flush and invalidate cache after I/O\n\
> - -g X: write character X instead of random generated data\n\
> - -i logdev: do integrity testing, logdev is the dm log writes device\n\
> + -g X: write character X instead of random generated data\n"
> +#ifdef MADV_COLLAPSE
> +" -h hugepages: use buffers backed by hugepages for reads/writes\n"
> +#endif
> +" -i logdev: do integrity testing, logdev is the dm log writes device\n\
> -j logid: prefix debug log messsages with this id\n\
> -k: do not truncate existing file and use its size as upper bound on file size\n\
> -l flen: the upper bound on file size (default 262144)\n\
> @@ -2833,11 +2846,40 @@ __test_fallocate(int mode, const char *mode_str)
> #endif
> }
>
> +/*
> + * Reclaim may break up hugepages, so do a best-effort collapse every once in
> + * a while.
> + */
> +static void
> +collapse_hugepages(void)
> +{
> +#ifdef MADV_COLLAPSE
> + int interval = 1 << 14; /* 16k */
> + int ret;
> +
> + if (numops && (numops & (interval - 1)) == 0) {
I wonder if this could be collapsed to:
/* re-collapse every 16k fsxops after we start */
if (!numops || (numops & ((1U << 14) - 1)))
return;
ret = madvise(...);
But my guess is that the compiler is smart enough to realize that
interval never changes and fold it into the test expression?
<shrug> Not that passionate either way. :)
> + ret = madvise(hugepages_info.orig_good_buf,
> + hugepages_info.good_buf_size, MADV_COLLAPSE);
> + if (ret)
> + prt("collapsing hugepages for good_buf failed (numops=%llu): %s\n",
> + numops, strerror(errno));
> + ret = madvise(hugepages_info.orig_temp_buf,
> + hugepages_info.temp_buf_size, MADV_COLLAPSE);
> + if (ret)
> + prt("collapsing hugepages for temp_buf failed (numops=%llu): %s\n",
> + numops, strerror(errno));
> + }
> +#endif
> +}
> +
> bool
> keep_running(void)
> {
> int ret;
>
> + if (hugepages)
> + collapse_hugepages();
> +
> if (deadline.tv_nsec) {
> struct timespec now;
>
> @@ -2856,6 +2898,103 @@ keep_running(void)
> return numops-- != 0;
> }
>
> +static long
> +get_hugepage_size(void)
> +{
> + const char str[] = "Hugepagesize:";
> + size_t str_len = sizeof(str) - 1;
> + unsigned int hugepage_size = 0;
> + char buffer[64];
> + FILE *file;
> +
> + file = fopen("/proc/meminfo", "r");
> + if (!file) {
> + prterr("get_hugepage_size: fopen /proc/meminfo");
> + return -1;
> + }
> + while (fgets(buffer, sizeof(buffer), file)) {
> + if (strncmp(buffer, str, str_len) == 0) {
> + sscanf(buffer + str_len, "%u", &hugepage_size);
> + break;
> + }
> + }
> + fclose(file);
> + if (!hugepage_size) {
> + prterr("get_hugepage_size: failed to find "
> + "hugepage size in /proc/meminfo\n");
> + return -1;
> + }
> +
> + /* convert from KiB to bytes */
> + return hugepage_size << 10;
> +}
> +
> +static void *
> +init_hugepages_buf(unsigned len, int hugepage_size, int alignment, long *buf_size)
> +{
> + void *buf = NULL;
> +#ifdef MADV_COLLAPSE
> + int ret;
> + long size = roundup(len, hugepage_size) + alignment;
> +
> + ret = posix_memalign(&buf, hugepage_size, size);
> + if (ret) {
> + prterr("posix_memalign for buf");
> + return NULL;
> + }
> + memset(buf, '\0', size);
> + ret = madvise(buf, size, MADV_COLLAPSE);
> + if (ret) {
> + prterr("madvise collapse for buf");
> + free(buf);
> + return NULL;
> + }
> +
> + *buf_size = size;
> +#endif
> + return buf;
> +}
> +
> +static void
> +init_buffers(void)
> +{
> + int i;
> +
> + original_buf = (char *) malloc(maxfilelen);
> + for (i = 0; i < maxfilelen; i++)
> + original_buf[i] = random() % 256;
> + if (hugepages) {
> + long hugepage_size = get_hugepage_size();
> + if (hugepage_size == -1) {
> + prterr("get_hugepage_size()");
> + exit(102);
> + }
> + good_buf = init_hugepages_buf(maxfilelen, hugepage_size, writebdy,
> + &hugepages_info.good_buf_size);
> + if (!good_buf) {
> + prterr("init_hugepages_buf failed for good_buf");
> + exit(103);
> + }
> + hugepages_info.orig_good_buf = good_buf;
> +
> + temp_buf = init_hugepages_buf(maxoplen, hugepage_size, readbdy,
> + &hugepages_info.temp_buf_size);
> + if (!temp_buf) {
> + prterr("init_hugepages_buf failed for temp_buf");
> + exit(103);
> + }
> + hugepages_info.orig_temp_buf = temp_buf;
> + } else {
> + unsigned long good_buf_len = maxfilelen + writebdy;
> + unsigned long temp_buf_len = maxoplen + readbdy;
> +
> + good_buf = calloc(1, good_buf_len);
> + temp_buf = calloc(1, temp_buf_len);
> + }
> + good_buf = round_ptr_up(good_buf, writebdy, 0);
> + temp_buf = round_ptr_up(temp_buf, readbdy, 0);
> +}
> +
> static struct option longopts[] = {
> {"replay-ops", required_argument, 0, 256},
> {"record-ops", optional_argument, 0, 255},
> @@ -2883,7 +3022,7 @@ main(int argc, char **argv)
> setvbuf(stdout, (char *)0, _IOLBF, 0); /* line buffered stdout */
>
> while ((ch = getopt_long(argc, argv,
> - "0b:c:de:fg:i:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> + "0b:c:de:fg:hi:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> longopts, NULL)) != EOF)
> switch (ch) {
> case 'b':
> @@ -2916,6 +3055,14 @@ main(int argc, char **argv)
> case 'g':
> filldata = *optarg;
> break;
> + case 'h':
> +#ifndef MADV_COLLAPSE
> + fprintf(stderr, "MADV_COLLAPSE not supported. "
> + "Can't support -h\n");
> + exit(86);
Excessive indenting here.
With those fixed up,
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
--D
> +#endif
> + hugepages = 1;
> + break;
> case 'i':
> integrity = 1;
> logdev = strdup(optarg);
> @@ -3229,15 +3376,7 @@ main(int argc, char **argv)
> exit(95);
> }
> }
> - original_buf = (char *) malloc(maxfilelen);
> - for (i = 0; i < maxfilelen; i++)
> - original_buf[i] = random() % 256;
> - good_buf = (char *) malloc(maxfilelen + writebdy);
> - good_buf = round_ptr_up(good_buf, writebdy, 0);
> - memset(good_buf, '\0', maxfilelen);
> - temp_buf = (char *) malloc(maxoplen + readbdy);
> - temp_buf = round_ptr_up(temp_buf, readbdy, 0);
> - memset(temp_buf, '\0', maxoplen);
> + init_buffers();
> if (lite) { /* zero entire existing file */
> ssize_t written;
>
> --
> 2.47.1
>
>
* Re: [PATCH v4 1/2] fsx: support reads/writes from buffers backed by hugepages
2025-01-18 20:17 ` Darrick J. Wong
@ 2025-01-21 21:50 ` Joanne Koong
0 siblings, 0 replies; 5+ messages in thread
From: Joanne Koong @ 2025-01-21 21:50 UTC (permalink / raw)
To: Darrick J. Wong
Cc: fstests, linux-fsdevel, bfoster, nirjhar, zlang, kernel-team
On Sat, Jan 18, 2025 at 12:17 PM Darrick J. Wong <djwong@kernel.org> wrote:
>
> On Fri, Jan 17, 2025 at 04:47:58PM -0800, Joanne Koong wrote:
> > Add support for reads/writes from buffers backed by hugepages.
> > This can be enabled through the '-h' flag. This flag should only be used
> > on systems where THP capabilities are enabled.
> >
> > This is motivated by a recent bug that was due to faulty handling of
> > userspace buffers backed by hugepages. This patch is a mitigation
> > against problems like this in the future.
> >
> > Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
> > Reviewed-by: Brian Foster <bfoster@redhat.com>
> > ---
> > ltp/fsx.c | 165 +++++++++++++++++++++++++++++++++++++++++++++++++-----
> > 1 file changed, 152 insertions(+), 13 deletions(-)
> >
> > diff --git a/ltp/fsx.c b/ltp/fsx.c
> > index 41933354..1513755f 100644
> > --- a/ltp/fsx.c
> > +++ b/ltp/fsx.c
> > @@ -2833,11 +2846,40 @@ __test_fallocate(int mode, const char *mode_str)
> > #endif
> > }
> >
> > +/*
> > + * Reclaim may break up hugepages, so do a best-effort collapse every once in
> > + * a while.
> > + */
> > +static void
> > +collapse_hugepages(void)
> > +{
> > +#ifdef MADV_COLLAPSE
> > + int interval = 1 << 14; /* 16k */
> > + int ret;
> > +
> > + if (numops && (numops & (interval - 1)) == 0) {
>
> I wonder if this could be collapsed to:
>
> /* re-collapse every 16k fsxops after we start */
> if (!numops || (numops & ((1U << 14) - 1)))
> return;
>
> ret = madvise(...);
>
> But my guess is that the compiler is smart enough to realize that
> interval never changes and fold it into the test expression?
>
> <shrug> Not that passionate either way. :)
Sounds good, I will change it to this and fix the indentation below for v5.
Thanks,
Joanne
>
> > + ret = madvise(hugepages_info.orig_good_buf,
> > + hugepages_info.good_buf_size, MADV_COLLAPSE);
> > + if (ret)
> > + prt("collapsing hugepages for good_buf failed (numops=%llu): %s\n",
> > + numops, strerror(errno));
> > + ret = madvise(hugepages_info.orig_temp_buf,
> > + hugepages_info.temp_buf_size, MADV_COLLAPSE);
> > + if (ret)
> > + prt("collapsing hugepages for temp_buf failed (numops=%llu): %s\n",
> > + numops, strerror(errno));
> > + }
> > +#endif
> > +}
> > +
> > bool
> > keep_running(void)
> > {
> > int ret;
> >
> > + if (hugepages)
> > + collapse_hugepages();
> > +
> > if (deadline.tv_nsec) {
> > struct timespec now;
> >
> > @@ -2856,6 +2898,103 @@ keep_running(void)
> > return numops-- != 0;
> > }
> >
> > +static long
> > +get_hugepage_size(void)
> > +{
> > + const char str[] = "Hugepagesize:";
> > + size_t str_len = sizeof(str) - 1;
> > + unsigned int hugepage_size = 0;
> > + char buffer[64];
> > + FILE *file;
> > +
> > + file = fopen("/proc/meminfo", "r");
> > + if (!file) {
> > + prterr("get_hugepage_size: fopen /proc/meminfo");
> > + return -1;
> > + }
> > + while (fgets(buffer, sizeof(buffer), file)) {
> > + if (strncmp(buffer, str, str_len) == 0) {
> > + sscanf(buffer + str_len, "%u", &hugepage_size);
> > + break;
> > + }
> > + }
> > + fclose(file);
> > + if (!hugepage_size) {
> > + prterr("get_hugepage_size: failed to find "
> > + "hugepage size in /proc/meminfo\n");
> > + return -1;
> > + }
> > +
> > + /* convert from KiB to bytes */
> > + return hugepage_size << 10;
> > +}
> > +
> > +static void *
> > +init_hugepages_buf(unsigned len, int hugepage_size, int alignment, long *buf_size)
> > +{
> > + void *buf = NULL;
> > +#ifdef MADV_COLLAPSE
> > + int ret;
> > + long size = roundup(len, hugepage_size) + alignment;
> > +
> > + ret = posix_memalign(&buf, hugepage_size, size);
> > + if (ret) {
> > + prterr("posix_memalign for buf");
> > + return NULL;
> > + }
> > + memset(buf, '\0', size);
> > + ret = madvise(buf, size, MADV_COLLAPSE);
> > + if (ret) {
> > + prterr("madvise collapse for buf");
> > + free(buf);
> > + return NULL;
> > + }
> > +
> > + *buf_size = size;
> > +#endif
> > + return buf;
> > +}
> > +
> > +static void
> > +init_buffers(void)
> > +{
> > + int i;
> > +
> > + original_buf = (char *) malloc(maxfilelen);
> > + for (i = 0; i < maxfilelen; i++)
> > + original_buf[i] = random() % 256;
> > + if (hugepages) {
> > + long hugepage_size = get_hugepage_size();
> > + if (hugepage_size == -1) {
> > + prterr("get_hugepage_size()");
> > + exit(102);
> > + }
> > + good_buf = init_hugepages_buf(maxfilelen, hugepage_size, writebdy,
> > + &hugepages_info.good_buf_size);
> > + if (!good_buf) {
> > + prterr("init_hugepages_buf failed for good_buf");
> > + exit(103);
> > + }
> > + hugepages_info.orig_good_buf = good_buf;
> > +
> > + temp_buf = init_hugepages_buf(maxoplen, hugepage_size, readbdy,
> > + &hugepages_info.temp_buf_size);
> > + if (!temp_buf) {
> > + prterr("init_hugepages_buf failed for temp_buf");
> > + exit(103);
> > + }
> > + hugepages_info.orig_temp_buf = temp_buf;
> > + } else {
> > + unsigned long good_buf_len = maxfilelen + writebdy;
> > + unsigned long temp_buf_len = maxoplen + readbdy;
> > +
> > + good_buf = calloc(1, good_buf_len);
> > + temp_buf = calloc(1, temp_buf_len);
> > + }
> > + good_buf = round_ptr_up(good_buf, writebdy, 0);
> > + temp_buf = round_ptr_up(temp_buf, readbdy, 0);
> > +}
> > +
> > static struct option longopts[] = {
> > {"replay-ops", required_argument, 0, 256},
> > {"record-ops", optional_argument, 0, 255},
> > @@ -2883,7 +3022,7 @@ main(int argc, char **argv)
> > setvbuf(stdout, (char *)0, _IOLBF, 0); /* line buffered stdout */
> >
> > while ((ch = getopt_long(argc, argv,
> > - "0b:c:de:fg:i:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> > + "0b:c:de:fg:hi:j:kl:m:no:p:qr:s:t:uw:xyABD:EFJKHzCILN:OP:RS:UWXZ",
> > longopts, NULL)) != EOF)
> > switch (ch) {
> > case 'b':
> > @@ -2916,6 +3055,14 @@ main(int argc, char **argv)
> > case 'g':
> > filldata = *optarg;
> > break;
> > + case 'h':
> > +#ifndef MADV_COLLAPSE
> > + fprintf(stderr, "MADV_COLLAPSE not supported. "
> > + "Can't support -h\n");
> > + exit(86);
>
> Excessive indenting here.
>
> With those fixed up,
> Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
>
> --D
>
> > +#endif
> > + hugepages = 1;
> > + break;
> > case 'i':
> > integrity = 1;
> > logdev = strdup(optarg);
> > @@ -3229,15 +3376,7 @@ main(int argc, char **argv)
> > exit(95);
> > }
> > }
> > - original_buf = (char *) malloc(maxfilelen);
> > - for (i = 0; i < maxfilelen; i++)
> > - original_buf[i] = random() % 256;
> > - good_buf = (char *) malloc(maxfilelen + writebdy);
> > - good_buf = round_ptr_up(good_buf, writebdy, 0);
> > - memset(good_buf, '\0', maxfilelen);
> > - temp_buf = (char *) malloc(maxoplen + readbdy);
> > - temp_buf = round_ptr_up(temp_buf, readbdy, 0);
> > - memset(temp_buf, '\0', maxoplen);
> > + init_buffers();
> > if (lite) { /* zero entire existing file */
> > ssize_t written;
> >
> > --
> > 2.47.1
> >
> >