* [LTP] [PATCH] unshare03: using soft limit of NOFILE
@ 2025-03-14 4:42 lufei
2025-03-27 10:33 ` Petr Vorel
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: lufei @ 2025-03-14 4:42 UTC (permalink / raw)
To: ltp; +Cc: lufei
I think it's safer to raise NOFILE starting from the soft limit than from the
hard limit.
The hard limit may lead to a dup2() ENOMEM error, which turns the result into
TBROK on low-memory machines (e.g. with 2GB of memory in my case, the hard
limit in /proc/sys/fs/nr_open comes out to be 1073741816).
Signed-off-by: lufei <lufei@uniontech.com>
---
testcases/kernel/syscalls/unshare/unshare03.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
index 7c5e71c4e..bb568264c 100644
--- a/testcases/kernel/syscalls/unshare/unshare03.c
+++ b/testcases/kernel/syscalls/unshare/unshare03.c
@@ -24,7 +24,7 @@
static void run(void)
{
- int nr_open;
+ int rlim_max;
int nr_limit;
struct rlimit rlimit;
struct tst_clone_args args = {
@@ -32,14 +32,12 @@ static void run(void)
.exit_signal = SIGCHLD,
};
- SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
- tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
+ SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
+ rlim_max = rlimit.rlim_max;
- nr_limit = nr_open + NR_OPEN_LIMIT;
+ nr_limit = rlim_max + NR_OPEN_LIMIT;
SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
- SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
-
rlimit.rlim_cur = nr_limit;
rlimit.rlim_max = nr_limit;
@@ -47,10 +45,10 @@ static void run(void)
tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
nr_limit);
- SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
+ SAFE_DUP2(2, rlim_max + NR_OPEN_DUP);
if (!SAFE_CLONE(&args)) {
- SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
+ SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", rlim_max);
TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
exit(0);
}
--
2.39.3
--
Mailing list info: https://lists.linux.it/listinfo/ltp
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
2025-03-14 4:42 [LTP] [PATCH] unshare03: using soft limit of NOFILE lufei
@ 2025-03-27 10:33 ` Petr Vorel
[not found] ` <0A99FFBB46DDB0B4+Z+YKSlAwn1vx3Dz4@rocky>
2025-04-01 10:11 ` Cyril Hrubis
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
2 siblings, 1 reply; 15+ messages in thread
From: Petr Vorel @ 2025-03-27 10:33 UTC (permalink / raw)
To: lufei; +Cc: Al Viro, ltp
Hi lufei, Al,
@Al, you're the author of the original test unshare_test.c [1] in kselftest.
This is a patch to LTP test unshare03.c, which is based on your test.
> I think it's safer to set NOFILE increasing from soft limit than from
> hard limit.
> Hard limit may lead to dup2 ENOMEM error which bring the result to
> TBROK on little memory machine. (e.g. 2GB memory in my situation, hard
> limit in /proc/sys/fs/nr_open come out to be 1073741816)
IMHO lowering the number to roughly half (in my case) by using rlimit.rlim_max
instead of /proc/sys/fs/nr_open should not affect the functionality of the
test, right? Or am I missing something obvious?
@lufei I guess kselftest tools/testing/selftests/core/unshare_test.c would fail
for you as well, right?
Kind regards,
Petr
[1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=611fbeb44a777e5ab54ab3127ec85f72147911d8
> Signed-off-by: lufei <lufei@uniontech.com>
> ---
> testcases/kernel/syscalls/unshare/unshare03.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
> index 7c5e71c4e..bb568264c 100644
> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> @@ -24,7 +24,7 @@
> static void run(void)
> {
> - int nr_open;
> + int rlim_max;
> int nr_limit;
> struct rlimit rlimit;
> struct tst_clone_args args = {
> @@ -32,14 +32,12 @@ static void run(void)
> .exit_signal = SIGCHLD,
> };
> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> - tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
> + SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> + rlim_max = rlimit.rlim_max;
> - nr_limit = nr_open + NR_OPEN_LIMIT;
> + nr_limit = rlim_max + NR_OPEN_LIMIT;
> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> -
> rlimit.rlim_cur = nr_limit;
> rlimit.rlim_max = nr_limit;
> @@ -47,10 +45,10 @@ static void run(void)
> tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
> nr_limit);
> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> + SAFE_DUP2(2, rlim_max + NR_OPEN_DUP);
> if (!SAFE_CLONE(&args)) {
> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> + SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", rlim_max);
> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> exit(0);
> }
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
[not found] ` <0A99FFBB46DDB0B4+Z+YKSlAwn1vx3Dz4@rocky>
@ 2025-03-28 8:50 ` Petr Vorel
0 siblings, 0 replies; 15+ messages in thread
From: Petr Vorel @ 2025-03-28 8:50 UTC (permalink / raw)
To: Lu Fei; +Cc: Al Viro, ltp
Hi lufei, Al,
> Hi Petr,
> Yes, kselftest tools/testing/selftests/core/unshare_test.c failed as
> well, dup2 failed:
> ```
> unshare_test.c:60:unshare_EMFILE:Expected dup2(2, nr_open + 64) (-1) >= 0 (0)
> ```
Thanks for the info. Maybe also send a patch to kselftest?
Kind regards,
Petr
PS: lufei, please keep the Cc list to keep others informed.
> Thanks for reply.
> Best regards.
> On Thu, Mar 27, 2025 at 11:33:36AM +0100, Petr Vorel wrote:
> > Hi lufei, Al,
> > @Al, you're the author of the original test unshare_test.c [1] in kselftest.
> > This is a patch to LTP test unshare03.c, which is based on your test.
> > > I think it's safer to set NOFILE increasing from soft limit than from
> > > hard limit.
> > > Hard limit may lead to dup2 ENOMEM error which bring the result to
> > > TBROK on little memory machine. (e.g. 2GB memory in my situation, hard
> > > limit in /proc/sys/fs/nr_open come out to be 1073741816)
> > IMHO lowering number to ~ half (in my case) by using rlimit.rlim_max instead of
> > /proc/sys/fs/nr_open should not affect the functionality of the test, right?
> > Or am I missing something obvious?
> > @lufei I guess kselftest tools/testing/selftests/core/unshare_test.c would fail
> > for you as well, right?
> > Kind regards,
> > Petr
> > [1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=611fbeb44a777e5ab54ab3127ec85f72147911d8
> > > Signed-off-by: lufei <lufei@uniontech.com>
> > > [patch diff snipped]
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
2025-03-14 4:42 [LTP] [PATCH] unshare03: using soft limit of NOFILE lufei
2025-03-27 10:33 ` Petr Vorel
@ 2025-04-01 10:11 ` Cyril Hrubis
2025-04-01 12:54 ` Petr Vorel
2025-04-09 6:41 ` Lu Fei
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
2 siblings, 2 replies; 15+ messages in thread
From: Cyril Hrubis @ 2025-04-01 10:11 UTC (permalink / raw)
To: lufei; +Cc: ltp
Hi!
> I think it's safer to set NOFILE increasing from soft limit than from
> hard limit.
>
> Hard limit may lead to dup2 ENOMEM error which bring the result to
> TBROK on little memory machine. (e.g. 2GB memory in my situation, hard
> limit in /proc/sys/fs/nr_open come out to be 1073741816)
>
> Signed-off-by: lufei <lufei@uniontech.com>
> ---
> testcases/kernel/syscalls/unshare/unshare03.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
> index 7c5e71c4e..bb568264c 100644
> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> @@ -24,7 +24,7 @@
>
> static void run(void)
> {
> - int nr_open;
> + int rlim_max;
> int nr_limit;
> struct rlimit rlimit;
> struct tst_clone_args args = {
> @@ -32,14 +32,12 @@ static void run(void)
> .exit_signal = SIGCHLD,
> };
>
> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> - tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
> + SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> + rlim_max = rlimit.rlim_max;
>
> - nr_limit = nr_open + NR_OPEN_LIMIT;
> + nr_limit = rlim_max + NR_OPEN_LIMIT;
> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
>
> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> -
> rlimit.rlim_cur = nr_limit;
> rlimit.rlim_max = nr_limit;
>
> @@ -47,10 +45,10 @@ static void run(void)
> tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
> nr_limit);
>
> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> + SAFE_DUP2(2, rlim_max + NR_OPEN_DUP);
>
> if (!SAFE_CLONE(&args)) {
> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> + SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", rlim_max);
> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> exit(0);
> }
Why do we bother with reading the /proc/sys/fs/nr_open file? All that we
need to do is to dup() a file descriptor and then set the nr_open
limit to fd - 2. And if we do so we can drop the rlimit that increases
the limit so that it's greater than nr_open as well.
--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
2025-04-01 10:11 ` Cyril Hrubis
@ 2025-04-01 12:54 ` Petr Vorel
2025-04-01 13:55 ` Cyril Hrubis
2025-04-09 6:41 ` Lu Fei
1 sibling, 1 reply; 15+ messages in thread
From: Petr Vorel @ 2025-04-01 12:54 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: lufei, ltp
> Hi!
> > I think it's safer to set NOFILE increasing from soft limit than from
> > hard limit.
> > Hard limit may lead to dup2 ENOMEM error which bring the result to
> > TBROK on little memory machine. (e.g. 2GB memory in my situation, hard
> > limit in /proc/sys/fs/nr_open come out to be 1073741816)
> > Signed-off-by: lufei <lufei@uniontech.com>
> > ---
> > testcases/kernel/syscalls/unshare/unshare03.c | 14 ++++++--------
> > 1 file changed, 6 insertions(+), 8 deletions(-)
> > diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
> > index 7c5e71c4e..bb568264c 100644
> > --- a/testcases/kernel/syscalls/unshare/unshare03.c
> > +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> > @@ -24,7 +24,7 @@
> > static void run(void)
> > {
> > - int nr_open;
> > + int rlim_max;
> > int nr_limit;
> > struct rlimit rlimit;
> > struct tst_clone_args args = {
> > @@ -32,14 +32,12 @@ static void run(void)
> > .exit_signal = SIGCHLD,
> > };
> > - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> > - tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
> > + SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> > + rlim_max = rlimit.rlim_max;
> > - nr_limit = nr_open + NR_OPEN_LIMIT;
> > + nr_limit = rlim_max + NR_OPEN_LIMIT;
> > SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> > - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> > -
> > rlimit.rlim_cur = nr_limit;
> > rlimit.rlim_max = nr_limit;
> > @@ -47,10 +45,10 @@ static void run(void)
> > tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
> > nr_limit);
> > - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> > + SAFE_DUP2(2, rlim_max + NR_OPEN_DUP);
> > if (!SAFE_CLONE(&args)) {
> > - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> > + SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", rlim_max);
> > TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> > exit(0);
> > }
> Why do we bother with reading the /proc/sys/fs/nr_open file? All that we
> need to do is to dup() a file descriptor and then set the nr_open
> limit to fd - 2. And if we do so we can drop the rlimit that increases
> the limit so that it's greater than nr_open as well.
IMHO the file descriptor will be 3, so fd - 2 == 1, and trying to set 1 in
/proc/sys/fs/nr_open leads to EINVAL. You probably mean something different.
The test is based on unshare_test.c [1] from Al Viro (611fbeb44a777 [2]). Both
tests IMHO use nr_open + 1024 and then call dup2() on nr_open + 64
to avoid a later EINVAL when writing /proc/sys/fs/nr_open.
Kind regards,
Petr
[1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/core/unshare_test.c
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=611fbeb44a777e5ab54ab3127ec85f72147911d8
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
2025-04-01 12:54 ` Petr Vorel
@ 2025-04-01 13:55 ` Cyril Hrubis
0 siblings, 0 replies; 15+ messages in thread
From: Cyril Hrubis @ 2025-04-01 13:55 UTC (permalink / raw)
To: Petr Vorel; +Cc: lufei, ltp
Hi!
> IMHO file descriptor will be 3, fd - 2 == 1. And trying to set 1 to
> /rpoc/sys/fs/nr_open leads to EINVAL. You probably mean something different.
Ah, we cannot set nr_open to anything that is smaller than BITS_PER_LONG:
...
unsigned int sysctl_nr_open __read_mostly = 1024*1024;
unsigned int sysctl_nr_open_min = BITS_PER_LONG;
/* our min() is unusable in constant expressions ;-/ */
#define __const_min(x, y) ((x) < (y) ? (x) : (y))
unsigned int sysctl_nr_open_max =
__const_min(INT_MAX, ~(size_t)0/sizeof(void *)) & -BITS_PER_LONG;
...
Then we need a file descriptor that is >= 64 and set nr_open to 64.
--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [PATCH] unshare03: using soft limit of NOFILE
2025-04-01 10:11 ` Cyril Hrubis
2025-04-01 12:54 ` Petr Vorel
@ 2025-04-09 6:41 ` Lu Fei
1 sibling, 0 replies; 15+ messages in thread
From: Lu Fei @ 2025-04-09 6:41 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: ltp
Hi, Cyril
Thanks very much for the smart suggestion. I'll submit a v2 patch.
On Tue, Apr 01, 2025 at 12:11:21PM +0200, Cyril Hrubis wrote:
> Hi!
> > I think it's safer to set NOFILE increasing from soft limit than from
> > hard limit.
> >
> > Hard limit may lead to dup2 ENOMEM error which bring the result to
> > TBROK on little memory machine. (e.g. 2GB memory in my situation, hard
> > limit in /proc/sys/fs/nr_open come out to be 1073741816)
> >
> > Signed-off-by: lufei <lufei@uniontech.com>
> > [patch diff snipped]
>
> Why do we bother with reading the /proc/sys/fs/nr_open file? All that we
> need to do is to dup() a file descriptor and then set the nr_open
> limit to fd - 2. And if we do so we can drop the rlimit that increases
> the limit so that it's greater than nr_open as well.
>
> --
> Cyril Hrubis
> chrubis@suse.cz
>
* [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-03-14 4:42 [LTP] [PATCH] unshare03: using soft limit of NOFILE lufei
2025-03-27 10:33 ` Petr Vorel
2025-04-01 10:11 ` Cyril Hrubis
@ 2025-04-09 7:49 ` lufei
2025-04-10 15:35 ` Jan Stancek via ltp
` (2 more replies)
2 siblings, 3 replies; 15+ messages in thread
From: lufei @ 2025-04-09 7:49 UTC (permalink / raw)
To: ltp; +Cc: lufei, viro
Set nr_open to sizeof(long)*8 to trigger EMFILE, instead of reading the
file descriptor limit from /proc/sys/fs/nr_open.
Signed-off-by: lufei <lufei@uniontech.com>
---
testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
1 file changed, 2 insertions(+), 21 deletions(-)
diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
index 7c5e71c4e..c3b98930d 100644
--- a/testcases/kernel/syscalls/unshare/unshare03.c
+++ b/testcases/kernel/syscalls/unshare/unshare03.c
@@ -17,44 +17,25 @@
#include "lapi/sched.h"
#define FS_NR_OPEN "/proc/sys/fs/nr_open"
-#define NR_OPEN_LIMIT 1024
-#define NR_OPEN_DUP 64
#ifdef HAVE_UNSHARE
static void run(void)
{
- int nr_open;
- int nr_limit;
- struct rlimit rlimit;
struct tst_clone_args args = {
.flags = CLONE_FILES,
.exit_signal = SIGCHLD,
};
- SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
- tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
+ int nr_open = sizeof(long) * 8;
- nr_limit = nr_open + NR_OPEN_LIMIT;
- SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
-
- SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
-
- rlimit.rlim_cur = nr_limit;
- rlimit.rlim_max = nr_limit;
-
- SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
- tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
- nr_limit);
-
- SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
+ SAFE_DUP2(2, nr_open + 1);
if (!SAFE_CLONE(&args)) {
SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
exit(0);
}
-
}
static void setup(void)
--
2.39.3
* Re: [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
@ 2025-04-10 15:35 ` Jan Stancek via ltp
2025-04-11 3:09 ` Li Wang via ltp
2025-04-11 9:30 ` [LTP] [PATCH v3] " lufei
2 siblings, 0 replies; 15+ messages in thread
From: Jan Stancek via ltp @ 2025-04-10 15:35 UTC (permalink / raw)
To: lufei; +Cc: viro, ltp
On Wed, Apr 9, 2025 at 9:49 AM lufei <lufei@uniontech.com> wrote:
>
> Set nr_open with sizeof(long)*8 to trigger EMFILE, instead of reading
> number of filedescriptors limit.
An explanation for this number would be nice.
>
> Signed-off-by: lufei <lufei@uniontech.com>
Acked-by: Jan Stancek <jstancek@redhat.com>
This fixes the test for me on 6.15.0-0.rc1.
> ---
> testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
> 1 file changed, 2 insertions(+), 21 deletions(-)
>
> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
> index 7c5e71c4e..c3b98930d 100644
> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> @@ -17,44 +17,25 @@
> #include "lapi/sched.h"
>
> #define FS_NR_OPEN "/proc/sys/fs/nr_open"
> -#define NR_OPEN_LIMIT 1024
> -#define NR_OPEN_DUP 64
>
> #ifdef HAVE_UNSHARE
>
> static void run(void)
> {
> - int nr_open;
> - int nr_limit;
> - struct rlimit rlimit;
> struct tst_clone_args args = {
> .flags = CLONE_FILES,
> .exit_signal = SIGCHLD,
> };
>
> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> - tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
> + int nr_open = sizeof(long) * 8;
>
> - nr_limit = nr_open + NR_OPEN_LIMIT;
> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> -
> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> -
> - rlimit.rlim_cur = nr_limit;
> - rlimit.rlim_max = nr_limit;
> -
> - SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
> - tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
> - nr_limit);
> -
> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> + SAFE_DUP2(2, nr_open + 1);
>
> if (!SAFE_CLONE(&args)) {
> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> exit(0);
> }
> -
> }
>
> static void setup(void)
> --
> 2.39.3
* Re: [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
2025-04-10 15:35 ` Jan Stancek via ltp
@ 2025-04-11 3:09 ` Li Wang via ltp
2025-04-11 3:21 ` Li Wang via ltp
2025-04-11 9:30 ` [LTP] [PATCH v3] " lufei
2 siblings, 1 reply; 15+ messages in thread
From: Li Wang via ltp @ 2025-04-11 3:09 UTC (permalink / raw)
To: lufei; +Cc: viro, ltp
On Wed, Apr 9, 2025 at 3:50 PM lufei <lufei@uniontech.com> wrote:
> Set nr_open with sizeof(long)*8 to trigger EMFILE, instead of reading
> number of filedescriptors limit.
>
Have any recent changes in Linux made the previous approach stop working?
>
> Signed-off-by: lufei <lufei@uniontech.com>
> ---
> testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
> 1 file changed, 2 insertions(+), 21 deletions(-)
>
> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c
> b/testcases/kernel/syscalls/unshare/unshare03.c
> index 7c5e71c4e..c3b98930d 100644
> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> @@ -17,44 +17,25 @@
> #include "lapi/sched.h"
>
> #define FS_NR_OPEN "/proc/sys/fs/nr_open"
> -#define NR_OPEN_LIMIT 1024
> -#define NR_OPEN_DUP 64
>
> #ifdef HAVE_UNSHARE
>
> static void run(void)
> {
> - int nr_open;
> - int nr_limit;
> - struct rlimit rlimit;
> struct tst_clone_args args = {
> .flags = CLONE_FILES,
> .exit_signal = SIGCHLD,
> };
>
> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> - tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
> + int nr_open = sizeof(long) * 8;
>
> - nr_limit = nr_open + NR_OPEN_LIMIT;
> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> -
> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> -
> - rlimit.rlim_cur = nr_limit;
> - rlimit.rlim_max = nr_limit;
> -
> - SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
> - tst_res(TDEBUG, "Set new maximum number of file descriptors to :
> %d",
> - nr_limit);
> -
> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> + SAFE_DUP2(2, nr_open + 1);
>
> if (!SAFE_CLONE(&args)) {
> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> exit(0);
> }
> -
> }
>
> static void setup(void)
> --
> 2.39.3
--
Regards,
Li Wang
* Re: [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-04-11 3:09 ` Li Wang via ltp
@ 2025-04-11 3:21 ` Li Wang via ltp
2025-04-11 6:01 ` Lu Fei
0 siblings, 1 reply; 15+ messages in thread
From: Li Wang via ltp @ 2025-04-11 3:21 UTC (permalink / raw)
To: lufei; +Cc: viro, ltp
On Fri, Apr 11, 2025 at 11:09 AM Li Wang <liwang@redhat.com> wrote:
>
>
> On Wed, Apr 9, 2025 at 3:50 PM lufei <lufei@uniontech.com> wrote:
>
>> Set nr_open with sizeof(long)*8 to trigger EMFILE, instead of reading
>> number of filedescriptors limit.
>>
>
> Any new changes in Linux that have made the previous way not work now?
>
Ah, I see. As you pointed out in v1, the hard limit may lead to a dup2
ENOMEM error, which brings the result to TBROK on a small-RAM system.
So I agree with Jan, we'd better add more description to the patch.
Reviewed-by: Li Wang <liwang@redhat.com>
>> [patch diff snipped]
> --
> Regards,
> Li Wang
>
--
Regards,
Li Wang
* Re: [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-04-11 3:21 ` Li Wang via ltp
@ 2025-04-11 6:01 ` Lu Fei
2025-04-11 9:03 ` Cyril Hrubis
0 siblings, 1 reply; 15+ messages in thread
From: Lu Fei @ 2025-04-11 6:01 UTC (permalink / raw)
To: Li Wang, Jan Stancek; +Cc: viro, ltp
Hi Li, Jan,
Sorry for giving so few details about the patch.
The original code uses nr_open + 1024 as the fd number limit, then dups an
fd to nr_open + 64, and then sets nr_open back to the original nr_open in the
child process to make unshare(CLONE_FILES) fail with EMFILE. I tested this on
my VM (1 CPU, 2GB memory) and dup2 failed with ENOMEM, which makes the test
case BROKEN. I was trying to fix this.
In patch v1 I tried to use rlimit.rlim_max instead of nr_open (read from
/proc/sys/fs/nr_open), and that worked well. But Cyril suggested a better
approach, so I submitted patch v2.
Quoted from Cyril's comment:
>
>Ah, we cannot set nr_open to anything that is smaller than BITS_PER_LONG:
>...
>unsigned int sysctl_nr_open __read_mostly = 1024*1024;
>unsigned int sysctl_nr_open_min = BITS_PER_LONG;
>/* our min() is unusable in constant expressions ;-/ */
>#define __const_min(x, y) ((x) < (y) ? (x) : (y))
>unsigned int sysctl_nr_open_max =
> __const_min(INT_MAX, ~(size_t)0/sizeof(void *)) & -BITS_PER_LONG;
>...
>Then we need a filedescriptor that is >= 64 and set the nr_open to 64.
I'm using sizeof(long)*8 because I was thinking that only setting a file
descriptor >= BITS_PER_LONG and then setting nr_open = BITS_PER_LONG could
make EMFILE happen. Anything less than BITS_PER_LONG would trigger an error
other than EMFILE.
Did I misunderstand Cyril?
Thanks for reply.
On Fri, Apr 11, 2025 at 11:21:23AM +0800, Li Wang wrote:
> On Fri, Apr 11, 2025 at 11:09 AM Li Wang <liwang@redhat.com> wrote:
>
> >
> >
> > On Wed, Apr 9, 2025 at 3:50 PM lufei <lufei@uniontech.com> wrote:
> >
> >> Set nr_open with sizeof(long)*8 to trigger EMFILE, instead of reading
> >> number of filedescriptors limit.
> >>
> >
> > Any new changes in Linux that have made the previous way not work now?
> >
>
> Ah, I see. As you pointed out in v1, that hard limit may lead to dup2
> ENOMEM error which brings the result to TBROK ona small RAM system.
>
> So, I agree Jan, we'd better add more description to the patch.
>
> Reviewed-by: Li Wang <liwang@redhat.com>
>
>
>
> >
> >
> >
> >>
> >> Signed-off-by: lufei <lufei@uniontech.com>
> >> ---
> >> testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
> >> 1 file changed, 2 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c
> >> b/testcases/kernel/syscalls/unshare/unshare03.c
> >> index 7c5e71c4e..c3b98930d 100644
> >> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> >> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> >> @@ -17,44 +17,25 @@
> >> #include "lapi/sched.h"
> >>
> >> #define FS_NR_OPEN "/proc/sys/fs/nr_open"
> >> -#define NR_OPEN_LIMIT 1024
> >> -#define NR_OPEN_DUP 64
> >>
> >> #ifdef HAVE_UNSHARE
> >>
> >> static void run(void)
> >> {
> >> - int nr_open;
> >> - int nr_limit;
> >> - struct rlimit rlimit;
> >> struct tst_clone_args args = {
> >> .flags = CLONE_FILES,
> >> .exit_signal = SIGCHLD,
> >> };
> >>
> >> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> >> - tst_res(TDEBUG, "Maximum number of file descriptors: %d",
> >> nr_open);
> >> + int nr_open = sizeof(long) * 8;
> >>
> >> - nr_limit = nr_open + NR_OPEN_LIMIT;
> >> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> >> -
> >> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> >> -
> >> - rlimit.rlim_cur = nr_limit;
> >> - rlimit.rlim_max = nr_limit;
> >> -
> >> - SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
> >> - tst_res(TDEBUG, "Set new maximum number of file descriptors to :
> >> %d",
> >> - nr_limit);
> >> -
> >> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> >> + SAFE_DUP2(2, nr_open + 1);
> >>
> >> if (!SAFE_CLONE(&args)) {
> >> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> >> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> >> exit(0);
> >> }
> >> -
> >> }
> >>
> >> static void setup(void)
> >> --
> >> 2.39.3
> >>
> >>
> >> --
> >> Mailing list info: https://lists.linux.it/listinfo/ltp
> >>
> >>
> >
> > --
> > Regards,
> > Li Wang
> >
>
>
> --
> Regards,
> Li Wang
--
Mailing list info: https://lists.linux.it/listinfo/ltp
* Re: [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
2025-04-11 6:01 ` Lu Fei
@ 2025-04-11 9:03 ` Cyril Hrubis
0 siblings, 0 replies; 15+ messages in thread
From: Cyril Hrubis @ 2025-04-11 9:03 UTC (permalink / raw)
To: Lu Fei; +Cc: ltp, viro
Hi!
> >unsigned int sysctl_nr_open __read_mostly = 1024*1024;
> >unsigned int sysctl_nr_open_min = BITS_PER_LONG;
> >/* our min() is unusable in constant expressions ;-/ */
> >#define __const_min(x, y) ((x) < (y) ? (x) : (y))
> >unsigned int sysctl_nr_open_max =
> > __const_min(INT_MAX, ~(size_t)0/sizeof(void *)) & -BITS_PER_LONG;
> >...
>
> >Then we need a filedescriptor that is >= 64 and set the nr_open to 64.
>
> I'm using sizeof(long)*8 because I was thinking that only setting a
> file descriptor >= BITS_PER_LONG and then setting nr_open =
> BITS_PER_LONG could make EMFILE happen. Anything less than
> BITS_PER_LONG would trigger an error other than EMFILE.
>
> Did I misunderstand you, Cyril?
The actual patch is fine. What they are asking for is a better
description. The patch description should tell _why_ the change is
needed.
--
Cyril Hrubis
chrubis@suse.cz
* [LTP] [PATCH v3] unshare03: set nr_open with sizeof(long)*8
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
2025-04-10 15:35 ` Jan Stancek via ltp
2025-04-11 3:09 ` Li Wang via ltp
@ 2025-04-11 9:30 ` lufei
2025-04-16 1:55 ` Li Wang via ltp
2 siblings, 1 reply; 15+ messages in thread
From: lufei @ 2025-04-11 9:30 UTC (permalink / raw)
To: ltp; +Cc: lufei, viro
dup2(2, nr_open + 64) may fail with ENOMEM on machines with little
memory, which makes the testcase TBROK. Change to dup2(2,
sizeof(long)*8 + 1) and set nr_open to sizeof(long)*8, which matches
the kernel's minimum (sysctl_nr_open_min = BITS_PER_LONG).
Signed-off-by: lufei <lufei@uniontech.com>
---
testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
1 file changed, 2 insertions(+), 21 deletions(-)
diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
index 7c5e71c4e..c3b98930d 100644
--- a/testcases/kernel/syscalls/unshare/unshare03.c
+++ b/testcases/kernel/syscalls/unshare/unshare03.c
@@ -17,44 +17,25 @@
#include "lapi/sched.h"
#define FS_NR_OPEN "/proc/sys/fs/nr_open"
-#define NR_OPEN_LIMIT 1024
-#define NR_OPEN_DUP 64
#ifdef HAVE_UNSHARE
static void run(void)
{
- int nr_open;
- int nr_limit;
- struct rlimit rlimit;
struct tst_clone_args args = {
.flags = CLONE_FILES,
.exit_signal = SIGCHLD,
};
- SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
- tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
+ int nr_open = sizeof(long) * 8;
- nr_limit = nr_open + NR_OPEN_LIMIT;
- SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
-
- SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
-
- rlimit.rlim_cur = nr_limit;
- rlimit.rlim_max = nr_limit;
-
- SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
- tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
- nr_limit);
-
- SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
+ SAFE_DUP2(2, nr_open + 1);
if (!SAFE_CLONE(&args)) {
SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
exit(0);
}
-
}
static void setup(void)
--
2.39.3
* Re: [LTP] [PATCH v3] unshare03: set nr_open with sizeof(long)*8
2025-04-11 9:30 ` [LTP] [PATCH v3] " lufei
@ 2025-04-16 1:55 ` Li Wang via ltp
0 siblings, 0 replies; 15+ messages in thread
From: Li Wang via ltp @ 2025-04-16 1:55 UTC (permalink / raw)
To: lufei; +Cc: viro, ltp
Hi Lufei,
I helped refine the patch description and merged.
See:
https://github.com/linux-test-project/ltp/commit/fc8be6ed405d3a949161c19b917e79ade969a505
--
Regards,
Li Wang
end of thread, other threads:[~2025-04-16  1:56 UTC | newest]
Thread overview: 15+ messages
2025-03-14 4:42 [LTP] [PATCH] unshare03: using soft limit of NOFILE lufei
2025-03-27 10:33 ` Petr Vorel
[not found] ` <0A99FFBB46DDB0B4+Z+YKSlAwn1vx3Dz4@rocky>
2025-03-28 8:50 ` Petr Vorel
2025-04-01 10:11 ` Cyril Hrubis
2025-04-01 12:54 ` Petr Vorel
2025-04-01 13:55 ` Cyril Hrubis
2025-04-09 6:41 ` Lu Fei
2025-04-09 7:49 ` [LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8 lufei
2025-04-10 15:35 ` Jan Stancek via ltp
2025-04-11 3:09 ` Li Wang via ltp
2025-04-11 3:21 ` Li Wang via ltp
2025-04-11 6:01 ` Lu Fei
2025-04-11 9:03 ` Cyril Hrubis
2025-04-11 9:30 ` [LTP] [PATCH v3] " lufei
2025-04-16 1:55 ` Li Wang via ltp