* [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
@ 2025-12-21 12:26 ` Li Wang
2025-12-21 20:23 ` Waiman Long
2025-12-21 22:10 ` David Laight
2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
2 siblings, 2 replies; 19+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
To: akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long
write_to_hugetlbfs currently parses the -s size argument with atoi()
into an int. This silently accepts malformed input, cannot report overflow,
and can truncate large sizes.
--- Error log ---
# uname -r
6.12.0-xxx.el10.aarch64+64k
# ls /sys/kernel/mm/hugepages/hugepages-*
hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
#./charge_reserved_hugetlb.sh -cgroup-v2
# -----------------------------------------
...
# nr hugepages = 10
# writing cgroup limit: 5368709120
# writing reseravation limit: 5368709120
...
# Writing to this path: /mnt/huge/test
# Writing this size: -1610612736 <--------
Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
Also print the size using %zu.
This avoids incorrect behavior with large -s values and makes the utility
more robust.
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
---
tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
index 34c91f7e6128..ecb5f7619960 100644
--- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
+++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
@@ -68,7 +68,7 @@ int main(int argc, char **argv)
int key = 0;
int *ptr = NULL;
int c = 0;
- int size = 0;
+ size_t size = 0;
char path[256] = "";
enum method method = MAX_METHOD;
int want_sleep = 0, private = 0;
@@ -86,7 +86,10 @@ int main(int argc, char **argv)
while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
switch (c) {
case 's':
- size = atoi(optarg);
+ if (sscanf(optarg, "%zu", &size) != 1) {
+ perror("Invalid -s.");
+ exit_usage();
+ }
break;
case 'p':
strncpy(path, optarg, sizeof(path) - 1);
@@ -131,7 +134,7 @@ int main(int argc, char **argv)
}
if (size != 0) {
- printf("Writing this size: %d\n", size);
+ printf("Writing this size: %zu\n", size);
} else {
errno = EINVAL;
perror("size not found");
--
2.49.0
* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
@ 2025-12-21 20:23 ` Waiman Long
2025-12-21 22:10 ` David Laight
1 sibling, 0 replies; 19+ messages in thread
From: Waiman Long @ 2025-12-21 20:23 UTC (permalink / raw)
To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan
On 12/21/25 7:26 AM, Li Wang wrote:
> write_to_hugetlbfs currently parses the -s size argument with atoi()
> into an int. This silently accepts malformed input, cannot report overflow,
> and can truncate large sizes.
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> # ls /sys/kernel/mm/hugepages/hugepages-*
> hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # Writing to this path: /mnt/huge/test
> # Writing this size: -1610612736 <--------
>
> Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
> Also print the size using %zu.
>
> This avoids incorrect behavior with large -s values and makes the utility
> more robust.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> ---
> tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> index 34c91f7e6128..ecb5f7619960 100644
> --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> @@ -68,7 +68,7 @@ int main(int argc, char **argv)
> int key = 0;
> int *ptr = NULL;
> int c = 0;
> - int size = 0;
> + size_t size = 0;
> char path[256] = "";
> enum method method = MAX_METHOD;
> int want_sleep = 0, private = 0;
> @@ -86,7 +86,10 @@ int main(int argc, char **argv)
> while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
> switch (c) {
> case 's':
> - size = atoi(optarg);
> + if (sscanf(optarg, "%zu", &size) != 1) {
> + perror("Invalid -s.");
> + exit_usage();
> + }
> break;
> case 'p':
> strncpy(path, optarg, sizeof(path) - 1);
> @@ -131,7 +134,7 @@ int main(int argc, char **argv)
> }
>
> if (size != 0) {
> - printf("Writing this size: %d\n", size);
> + printf("Writing this size: %zu\n", size);
> } else {
> errno = EINVAL;
> perror("size not found");
LGTM
Acked-by: Waiman Long <longman@redhat.com>
* Re: [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t
2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
2025-12-21 20:23 ` Waiman Long
@ 2025-12-21 22:10 ` David Laight
[not found] ` <CAEemH2f40t+4SsjL3Y=8Gid-CBMtf3eL1egsPKT1J_7LDbdWPQ@mail.gmail.com>
1 sibling, 1 reply; 19+ messages in thread
From: David Laight @ 2025-12-21 22:10 UTC (permalink / raw)
To: Li Wang
Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
Mark Brown, Shuah Khan, Waiman Long
On Sun, 21 Dec 2025 20:26:37 +0800
Li Wang <liwang@redhat.com> wrote:
> write_to_hugetlbfs currently parses the -s size argument with atoi()
> into an int. This silently accepts malformed input, cannot report overflow,
> and can truncate large sizes.
And sscanf() will just ignore invalid trailing characters.
Probably much the same as atoi() apart from a leading '-'.
Maybe you could use "%zu%c" and check the count is 1 - but I bet
some static checker won't like that.
Using strtoul() and checking the terminating character is 'reasonable',
but won't detect overflow.
David
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> # ls /sys/kernel/mm/hugepages/hugepages-*
> hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # Writing to this path: /mnt/huge/test
> # Writing this size: -1610612736 <--------
>
> Switch the size variable to size_t and parse -s with sscanf("%zu", ...).
> Also print the size using %zu.
>
> This avoids incorrect behavior with large -s values and makes the utility
> more robust.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> ---
> tools/testing/selftests/mm/write_to_hugetlbfs.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/write_to_hugetlbfs.c b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> index 34c91f7e6128..ecb5f7619960 100644
> --- a/tools/testing/selftests/mm/write_to_hugetlbfs.c
> +++ b/tools/testing/selftests/mm/write_to_hugetlbfs.c
> @@ -68,7 +68,7 @@ int main(int argc, char **argv)
> int key = 0;
> int *ptr = NULL;
> int c = 0;
> - int size = 0;
> + size_t size = 0;
> char path[256] = "";
> enum method method = MAX_METHOD;
> int want_sleep = 0, private = 0;
> @@ -86,7 +86,10 @@ int main(int argc, char **argv)
> while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
> switch (c) {
> case 's':
> - size = atoi(optarg);
> + if (sscanf(optarg, "%zu", &size) != 1) {
> + perror("Invalid -s.");
> + exit_usage();
> + }
> break;
> case 'p':
> strncpy(path, optarg, sizeof(path) - 1);
> @@ -131,7 +134,7 @@ int main(int argc, char **argv)
> }
>
> if (size != 0) {
> - printf("Writing this size: %d\n", size);
> + printf("Writing this size: %zu\n", size);
> } else {
> errno = EINVAL;
> perror("size not found");
* [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
@ 2025-12-21 12:26 ` Li Wang
2025-12-21 20:24 ` Waiman Long
2025-12-22 10:01 ` David Hildenbrand (Red Hat)
2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
2 siblings, 2 replies; 19+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
To: akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long
charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
this is smaller than a single hugepage, so the hugetlbfs mount ends up
with zero capacity (often visible as size=0 in mount output).
As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
waiting for progress.
--- Error log ---
# uname -r
6.12.0-xxx.el10.aarch64+64k
#./charge_reserved_hugetlb.sh -cgroup-v2
# -----------------------------------------
...
# nr hugepages = 10
# writing cgroup limit: 5368709120
# writing reseravation limit: 5368709120
...
# write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
...
# mount |grep /mnt/huge
none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
# grep -i huge /proc/meminfo
...
HugePages_Total: 10
HugePages_Free: 10
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 524288 kB
Hugetlb: 5242880 kB
Drop the 'size=256M' mount argument, so the filesystem capacity is
sufficient regardless of the HugeTLB page size.
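For reference, hugetlbfs rounds the size= option down to a whole number of hugepages, which is why the old 256M limit yields zero capacity with 512M pages; a small arithmetic sketch (values taken from the log above):

```shell
hugepage_kb=524288            # 512M base hugepage, as in the log above
mount_size_kb=$((256 * 1024)) # the old fixed size=256M

# hugetlbfs truncates size= to whole hugepages: 256M / 512M -> 0 pages
pages=$((mount_size_kb / hugepage_kb))
echo "mount capacity: $((pages * hugepage_kb)) kB"
```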
Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index e1fe16bcbbe8..fa6713892d82 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -290,7 +290,7 @@ function run_test() {
setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
"$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
"$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
--
2.49.0
* Re: [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
@ 2025-12-21 20:24 ` Waiman Long
2025-12-22 10:01 ` David Hildenbrand (Red Hat)
1 sibling, 0 replies; 19+ messages in thread
From: Waiman Long @ 2025-12-21 20:24 UTC (permalink / raw)
To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan
On 12/21/25 7:26 AM, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> with zero capacity (often visible as size=0 in mount output).
>
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> waiting for progress.
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> ...
>
> # mount |grep /mnt/huge
> none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
>
> # grep -i huge /proc/meminfo
> ...
> HugePages_Total: 10
> HugePages_Free: 10
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 524288 kB
> Hugetlb: 5242880 kB
>
> Drop the mount args with 'size=256M', so the filesystem capacity is sufficient
> regardless of HugeTLB page size.
>
> Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
> tools/testing/selftests/mm/charge_reserved_hugetlb.sh | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index e1fe16bcbbe8..fa6713892d82 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -290,7 +290,7 @@ function run_test() {
> setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
>
> mkdir -p /mnt/huge
> - mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
> + mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
>
> write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
> "$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
> @@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
> setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
>
> mkdir -p /mnt/huge
> - mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
> + mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
>
> write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
> "$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
Acked-by: Waiman Long <longman@redhat.com>
* Re: [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs
2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
2025-12-21 20:24 ` Waiman Long
@ 2025-12-22 10:01 ` David Hildenbrand (Red Hat)
2025-12-22 19:08 ` Andrew Morton
1 sibling, 1 reply; 19+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-22 10:01 UTC (permalink / raw)
To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
Cc: Mark Brown, Shuah Khan, Waiman Long
On 12/21/25 13:26, Li Wang wrote:
> charge_reserved_hugetlb.sh mounts a hugetlbfs instance at /mnt/huge with
> a fixed size of 256M. On systems with large base hugepages (e.g. 512MB),
> this is smaller than a single hugepage, so the hugetlbfs mount ends up
> with zero capacity (often visible as size=0 in mount output).
>
> As a result, write_to_hugetlbfs fails with ENOMEM and the test can hang
> waiting for progress.
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> ...
>
> # mount |grep /mnt/huge
> none on /mnt/huge type hugetlbfs (rw,relatime,seclabel,pagesize=512M,size=0)
>
> # grep -i huge /proc/meminfo
> ...
> HugePages_Total: 10
> HugePages_Free: 10
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 524288 kB
> Hugetlb: 5242880 kB
>
> Drop the mount args with 'size=256M', so the filesystem capacity is sufficient
> regardless of HugeTLB page size.
>
> Fixes: 29750f71a9 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
Likely Andrew should add a Cc: stable.
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David
* [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
2025-12-21 12:26 [PATCH v3 0/3] selftests/mm: hugetlb cgroup charging: robustness fixes Li Wang
2025-12-21 12:26 ` [PATCH v3 1/3] selftests/mm/write_to_hugetlbfs: parse -s as size_t Li Wang
2025-12-21 12:26 ` [PATCH v3 2/3] selftests/mm/charge_reserved_hugetlb: drop mount size for hugetlbfs Li Wang
@ 2025-12-21 12:26 ` Li Wang
2025-12-21 20:30 ` Waiman Long
2025-12-22 10:06 ` David Hildenbrand (Red Hat)
2 siblings, 2 replies; 19+ messages in thread
From: Li Wang @ 2025-12-21 12:26 UTC (permalink / raw)
To: akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan, Waiman Long
The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
unbounded and could hang forever if the expected cgroup file value never
appears (e.g. due to write_to_hugetlbfs in Error mapping).
--- Error log ---
# uname -r
6.12.0-xxx.el10.aarch64+64k
# ls /sys/kernel/mm/hugepages/hugepages-*
hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
#./charge_reserved_hugetlb.sh -cgroup-v2
# -----------------------------------------
...
# nr hugepages = 10
# writing cgroup limit: 5368709120
# writing reseravation limit: 5368709120
...
# write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
# Waiting for hugetlb memory reservation to reach size 2684354560.
# 0
...
Introduce a small helper, wait_for_file_value(), and use it for:
- waiting for reservation usage to drop to 0,
- waiting for reservation usage to reach a given size,
- waiting for fault usage to reach a given size.
This makes the waits consistent and adds a hard timeout (60 tries with
1s sleep) so the test fails instead of stalling indefinitely.
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Waiman Long <longman@redhat.com>
---
.../selftests/mm/charge_reserved_hugetlb.sh | 51 +++++++++++--------
1 file changed, 30 insertions(+), 21 deletions(-)
diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index fa6713892d82..447769657634 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -100,7 +100,7 @@ function setup_cgroup() {
echo writing cgroup limit: "$cgroup_limit"
echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
- echo writing reseravation limit: "$reservation_limit"
+ echo writing reservation limit: "$reservation_limit"
echo "$reservation_limit" > \
$cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
@@ -112,41 +112,50 @@ function setup_cgroup() {
fi
}
+function wait_for_file_value() {
+ local path="$1"
+ local expect="$2"
+ local max_tries=60
+
+ if [[ ! -r "$path" ]]; then
+ echo "ERROR: cannot read '$path', missing or permission denied"
+ return 1
+ fi
+
+ for ((i=1; i<=max_tries; i++)); do
+ local cur="$(cat "$path")"
+ if [[ "$cur" == "$expect" ]]; then
+ return 0
+ fi
+ echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
+ sleep 1
+ done
+
+ echo "ERROR: timeout waiting for $path to become '$expect'"
+ return 1
+}
+
function wait_for_hugetlb_memory_to_get_depleted() {
local cgroup="$1"
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
- # Wait for hugetlbfs memory to get depleted.
- while [ $(cat $path) != 0 ]; do
- echo Waiting for hugetlb memory to get depleted.
- cat $path
- sleep 0.5
- done
+
+ wait_for_file_value "$path" "0"
}
function wait_for_hugetlb_memory_to_get_reserved() {
local cgroup="$1"
local size="$2"
-
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
- # Wait for hugetlbfs memory to get written.
- while [ $(cat $path) != $size ]; do
- echo Waiting for hugetlb memory reservation to reach size $size.
- cat $path
- sleep 0.5
- done
+
+ wait_for_file_value "$path" "$size"
}
function wait_for_hugetlb_memory_to_get_written() {
local cgroup="$1"
local size="$2"
-
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
- # Wait for hugetlbfs memory to get written.
- while [ $(cat $path) != $size ]; do
- echo Waiting for hugetlb memory to reach size $size.
- cat $path
- sleep 0.5
- done
+
+ wait_for_file_value "$path" "$size"
}
function write_hugetlbfs_and_get_usage() {
--
2.49.0
* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
@ 2025-12-21 20:30 ` Waiman Long
2025-12-22 0:56 ` Li Wang
2025-12-22 10:06 ` David Hildenbrand (Red Hat)
1 sibling, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-12-21 20:30 UTC (permalink / raw)
To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
Cc: David Hildenbrand, Mark Brown, Shuah Khan
On 12/21/25 7:26 AM, Li Wang wrote:
> The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> unbounded and could hang forever if the expected cgroup file value never
> appears (e.g. due to write_to_hugetlbfs in Error mapping).
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> # ls /sys/kernel/mm/hugepages/hugepages-*
> hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> ...
>
> Introduce a small helper, wait_for_file_value(), and use it for:
> - waiting for reservation usage to drop to 0,
> - waiting for reservation usage to reach a given size,
> - waiting for fault usage to reach a given size.
>
> This makes the waits consistent and adds a hard timeout (60 tries with
> 1s sleep) so the test fails instead of stalling indefinitely.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
> .../selftests/mm/charge_reserved_hugetlb.sh | 51 +++++++++++--------
> 1 file changed, 30 insertions(+), 21 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index fa6713892d82..447769657634 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -100,7 +100,7 @@ function setup_cgroup() {
> echo writing cgroup limit: "$cgroup_limit"
> echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
>
> - echo writing reseravation limit: "$reservation_limit"
> + echo writing reservation limit: "$reservation_limit"
> echo "$reservation_limit" > \
> $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
>
> @@ -112,41 +112,50 @@ function setup_cgroup() {
> fi
> }
>
> +function wait_for_file_value() {
> + local path="$1"
> + local expect="$2"
> + local max_tries=60
> +
> + if [[ ! -r "$path" ]]; then
> + echo "ERROR: cannot read '$path', missing or permission denied"
> + return 1
> + fi
> +
> + for ((i=1; i<=max_tries; i++)); do
> + local cur="$(cat "$path")"
> + if [[ "$cur" == "$expect" ]]; then
> + return 0
> + fi
> + echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> + sleep 1
> + done
> +
> + echo "ERROR: timeout waiting for $path to become '$expect'"
> + return 1
> +}
> +
> function wait_for_hugetlb_memory_to_get_depleted() {
> local cgroup="$1"
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> - # Wait for hugetlbfs memory to get depleted.
> - while [ $(cat $path) != 0 ]; do
> - echo Waiting for hugetlb memory to get depleted.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "0"
> }
>
> function wait_for_hugetlb_memory_to_get_reserved() {
> local cgroup="$1"
> local size="$2"
> -
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> - # Wait for hugetlbfs memory to get written.
> - while [ $(cat $path) != $size ]; do
> - echo Waiting for hugetlb memory reservation to reach size $size.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "$size"
> }
>
> function wait_for_hugetlb_memory_to_get_written() {
> local cgroup="$1"
> local size="$2"
> -
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> - # Wait for hugetlbfs memory to get written.
> - while [ $(cat $path) != $size ]; do
> - echo Waiting for hugetlb memory to reach size $size.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "$size"
> }
>
> function write_hugetlbfs_and_get_usage() {
wait_for_file_value() now returns 0 on success and 1 on timeout.
However, none of the callers of the wait_for_hugetlb_memory* helpers
check their return values and act accordingly. Are we expecting
that the test will show failure because the waiting isn't completed, or
should we explicitly exit with the ksft_fail (1) value?
Cheers,
Longman
* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
2025-12-21 20:30 ` Waiman Long
@ 2025-12-22 0:56 ` Li Wang
0 siblings, 0 replies; 19+ messages in thread
From: Li Wang @ 2025-12-22 0:56 UTC (permalink / raw)
To: Waiman Long
Cc: akpm, linux-kselftest, linux-kernel, linux-mm, David Hildenbrand,
Mark Brown, Shuah Khan
On Mon, Dec 22, 2025 at 4:30 AM Waiman Long <llong@redhat.com> wrote:
>
>
> On 12/21/25 7:26 AM, Li Wang wrote:
> > The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> > unbounded and could hang forever if the expected cgroup file value never
> > appears (e.g. due to write_to_hugetlbfs in Error mapping).
> >
> > --- Error log ---
> > # uname -r
> > 6.12.0-xxx.el10.aarch64+64k
> >
> > # ls /sys/kernel/mm/hugepages/hugepages-*
> > hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
> >
> > #./charge_reserved_hugetlb.sh -cgroup-v2
> > # -----------------------------------------
> > ...
> > # nr hugepages = 10
> > # writing cgroup limit: 5368709120
> > # writing reseravation limit: 5368709120
> > ...
> > # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > # Waiting for hugetlb memory reservation to reach size 2684354560.
> > # 0
> > ...
> >
> > Introduce a small helper, wait_for_file_value(), and use it for:
> > - waiting for reservation usage to drop to 0,
> > - waiting for reservation usage to reach a given size,
> > - waiting for fault usage to reach a given size.
> >
> > This makes the waits consistent and adds a hard timeout (60 tries with
> > 1s sleep) so the test fails instead of stalling indefinitely.
> >
> > Signed-off-by: Li Wang <liwang@redhat.com>
> > Cc: David Hildenbrand <david@kernel.org>
> > Cc: Mark Brown <broonie@kernel.org>
> > Cc: Shuah Khan <shuah@kernel.org>
> > Cc: Waiman Long <longman@redhat.com>
> > ---
> > .../selftests/mm/charge_reserved_hugetlb.sh | 51 +++++++++++--------
> > 1 file changed, 30 insertions(+), 21 deletions(-)
> >
> > diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > index fa6713892d82..447769657634 100755
> > --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> > @@ -100,7 +100,7 @@ function setup_cgroup() {
> > echo writing cgroup limit: "$cgroup_limit"
> > echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
> >
> > - echo writing reseravation limit: "$reservation_limit"
> > + echo writing reservation limit: "$reservation_limit"
> > echo "$reservation_limit" > \
> > $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
> >
> > @@ -112,41 +112,50 @@ function setup_cgroup() {
> > fi
> > }
> >
> > +function wait_for_file_value() {
> > + local path="$1"
> > + local expect="$2"
> > + local max_tries=60
> > +
> > + if [[ ! -r "$path" ]]; then
> > + echo "ERROR: cannot read '$path', missing or permission denied"
> > + return 1
> > + fi
> > +
> > + for ((i=1; i<=max_tries; i++)); do
> > + local cur="$(cat "$path")"
> > + if [[ "$cur" == "$expect" ]]; then
> > + return 0
> > + fi
> > + echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> > + sleep 1
> > + done
> > +
> > + echo "ERROR: timeout waiting for $path to become '$expect'"
> > + return 1
> > +}
> > +
> > function wait_for_hugetlb_memory_to_get_depleted() {
> > local cgroup="$1"
> > local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> > - # Wait for hugetlbfs memory to get depleted.
> > - while [ $(cat $path) != 0 ]; do
> > - echo Waiting for hugetlb memory to get depleted.
> > - cat $path
> > - sleep 0.5
> > - done
> > +
> > + wait_for_file_value "$path" "0"
> > }
> >
> > function wait_for_hugetlb_memory_to_get_reserved() {
> > local cgroup="$1"
> > local size="$2"
> > -
> > local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> > - # Wait for hugetlbfs memory to get written.
> > - while [ $(cat $path) != $size ]; do
> > - echo Waiting for hugetlb memory reservation to reach size $size.
> > - cat $path
> > - sleep 0.5
> > - done
> > +
> > + wait_for_file_value "$path" "$size"
> > }
> >
> > function wait_for_hugetlb_memory_to_get_written() {
> > local cgroup="$1"
> > local size="$2"
> > -
> > local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> > - # Wait for hugetlbfs memory to get written.
> > - while [ $(cat $path) != $size ]; do
> > - echo Waiting for hugetlb memory to reach size $size.
> > - cat $path
> > - sleep 0.5
> > - done
> > +
> > + wait_for_file_value "$path" "$size"
> > }
> >
> > function write_hugetlbfs_and_get_usage() {
>
> wait_for_file_value() now returns 0 on success and 1 on timeout.
> However, none of the callers of the wait_for_hugetlb_memory* helpers
> are checking their return values and acting accordingly. Are we expecting
> that the test will show failure because the waiting isn't completed, or
> should we explicitly exit with the ksft_fail (1) value?
Hmm, it seems the test shouldn't exit too early.
The wait_for_hugetlb_memory* helpers only poll the file value for up to
60s; if they time out, we still need to keep going because the test has
CLEANUP work to do and exits/reports from there.
The key point is that each subtest saves the '$write_result' value and
examines it, and that is what controls whether the whole test exits.
e.g.
This is an intentional error test:
# ./charge_reserved_hugetlb.sh -cgroup-v2
CLEANUP DONE
...
Writing to this path: /mnt/huge/test
Writing this size: 2684354560
Not populating.
Not writing to memory.
Using method=0
Shared mapping.
RESERVE mapping.
Allocating using HUGETLBFS.
write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 1/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 2/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 3/60)
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 4/60)
...
Waiting for /sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current
to become '2684354560' (current: '0') (try 60/60)
ERROR: timeout waiting for
/sys/fs/cgroup/hugetlb_cgroup_test/hugetlb.512MB.rsvd.current to
become '2684354560'
After write:
hugetlb_usage=0
reserved_usage=0
0
0
Memory charged to hugtlb=0
Memory charged to reservation=0
expected (2684354560) != actual (0): Reserved memory not charged to
reservation usage.
CLEANUP DONE
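The flow described above (a wait that may time out, cleanup that still runs, and a saved result that decides the outcome) can be sketched as a minimal, self-contained script. Names like wait_for_value and the hard-coded write_result are illustrative stand-ins, not the real selftest code:

```shell
#!/bin/sh
# Poll a file until it contains $2, for up to $3 tries; return 1 on timeout.
wait_for_value() {
	path=$1; expect=$2; tries=$3
	i=0
	while [ "$i" -lt "$tries" ]; do
		[ "$(cat "$path" 2>/dev/null)" = "$expect" ] && return 0
		i=$((i + 1))
		# the real helper sleeps 1s between polls
	done
	echo "timeout waiting for $path to become '$expect'"
	return 1
}

tmp=$(mktemp)
echo 0 > "$tmp"

wait_for_value "$tmp" 42 3	# times out: the file holds 0, not 42
write_result=1			# stand-in for the saved write outcome

rm -f "$tmp"			# stand-in for the CLEANUP step
if [ "$write_result" != 0 ]; then
	echo "FAIL: write_result=$write_result"	# real script would exit 1 here
fi
```

The wait failure is reported but deliberately not acted on; the saved write_result, checked only after cleanup, is what fails the run.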
--
Regards,
Li Wang
* Re: [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
2025-12-21 12:26 ` [PATCH v3 3/3] selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper Li Wang
2025-12-21 20:30 ` Waiman Long
@ 2025-12-22 10:06 ` David Hildenbrand (Red Hat)
1 sibling, 0 replies; 19+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-22 10:06 UTC (permalink / raw)
To: Li Wang, akpm, linux-kselftest, linux-kernel, linux-mm
Cc: Mark Brown, Shuah Khan, Waiman Long
On 12/21/25 13:26, Li Wang wrote:
> The hugetlb cgroup usage wait loops in charge_reserved_hugetlb.sh were
> unbounded and could hang forever if the expected cgroup file value never
> appears (e.g. due to write_to_hugetlbfs in Error mapping).
>
> --- Error log ---
> # uname -r
> 6.12.0-xxx.el10.aarch64+64k
>
> # ls /sys/kernel/mm/hugepages/hugepages-*
> hugepages-16777216kB/ hugepages-2048kB/ hugepages-524288kB/
>
> #./charge_reserved_hugetlb.sh -cgroup-v2
> # -----------------------------------------
> ...
> # nr hugepages = 10
> # writing cgroup limit: 5368709120
> # writing reseravation limit: 5368709120
> ...
> # write_to_hugetlbfs: Error mapping the file: Cannot allocate memory
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> # Waiting for hugetlb memory reservation to reach size 2684354560.
> # 0
> ...
>
> Introduce a small helper, wait_for_file_value(), and use it for:
> - waiting for reservation usage to drop to 0,
> - waiting for reservation usage to reach a given size,
> - waiting for fault usage to reach a given size.
>
> This makes the waits consistent and adds a hard timeout (60 tries with
> 1s sleep) so the test fails instead of stalling indefinitely.
>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> ---
> .../selftests/mm/charge_reserved_hugetlb.sh | 51 +++++++++++--------
> 1 file changed, 30 insertions(+), 21 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> index fa6713892d82..447769657634 100755
> --- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> +++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
> @@ -100,7 +100,7 @@ function setup_cgroup() {
> echo writing cgroup limit: "$cgroup_limit"
> echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
>
> - echo writing reseravation limit: "$reservation_limit"
> + echo writing reservation limit: "$reservation_limit"
> echo "$reservation_limit" > \
> $cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
>
> @@ -112,41 +112,50 @@ function setup_cgroup() {
> fi
> }
>
> +function wait_for_file_value() {
> + local path="$1"
> + local expect="$2"
> + local max_tries=60
> +
> + if [[ ! -r "$path" ]]; then
> + echo "ERROR: cannot read '$path', missing or permission denied"
> + return 1
> + fi
> +
> + for ((i=1; i<=max_tries; i++)); do
> + local cur="$(cat "$path")"
> + if [[ "$cur" == "$expect" ]]; then
> + return 0
> + fi
> + echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
> + sleep 1
> + done
> +
> + echo "ERROR: timeout waiting for $path to become '$expect'"
> + return 1
> +}
> +
> function wait_for_hugetlb_memory_to_get_depleted() {
> local cgroup="$1"
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> - # Wait for hugetlbfs memory to get depleted.
> - while [ $(cat $path) != 0 ]; do
> - echo Waiting for hugetlb memory to get depleted.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "0"
> }
>
> function wait_for_hugetlb_memory_to_get_reserved() {
> local cgroup="$1"
> local size="$2"
> -
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
> - # Wait for hugetlbfs memory to get written.
> - while [ $(cat $path) != $size ]; do
> - echo Waiting for hugetlb memory reservation to reach size $size.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "$size"
> }
>
> function wait_for_hugetlb_memory_to_get_written() {
> local cgroup="$1"
> local size="$2"
> -
> local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
> - # Wait for hugetlbfs memory to get written.
> - while [ $(cat $path) != $size ]; do
> - echo Waiting for hugetlb memory to reach size $size.
> - cat $path
> - sleep 0.5
> - done
> +
> + wait_for_file_value "$path" "$size"
> }
>
> function write_hugetlbfs_and_get_usage() {
It would indeed be cleaner to propagate the error and make it clearer
how we then cleanly abort the test.
But if it keeps working for the time being, fine with me.
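Propagating the error could look roughly like the following self-contained sketch; the function names (wait_for_file_value_demo, run_subtest, do_cleanup) are hypothetical stand-ins, and the helper fails unconditionally to simulate a timeout:

```shell
#!/bin/sh
# Simulates the wait helper hitting its timeout.
wait_for_file_value_demo() {
	echo "ERROR: timeout waiting for $1 to become '$2'"
	return 1
}

do_cleanup() { echo "CLEANUP DONE"; }

run_subtest() {
	if ! wait_for_file_value_demo /some/cgroup/file 100; then
		do_cleanup	# still run the CLEANUP work...
		return 1	# ...then hand the failure up to the caller
	fi
}

if ! run_subtest; then
	echo "aborting test after cleanup"
fi
```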
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David