* process file descriptor limit handling
From: Ulrich Drepper @ 2003-11-20 19:53 UTC (permalink / raw)
To: linux-kernel, Linus Torvalds
[-- Attachment #1: Type: text/plain, Size: 1872 bytes --]
The current kernel (and all before it, as far as I can see) has a problem
with file descriptor limit handling.  The behavior does not conform to
the current POSIX spec.
Look at the attached example code.  The program opens descriptors up to
the limit, then lowers the limit, closes a descriptor below the number
given in the last setrlimit call, and tries to open a new descriptor.
This open currently succeeds.
The problem is that RLIMIT_NOFILE is defined as the number of open
descriptors.  In the example case, before the final open call, there are
N1-1 open descriptors, with RLIMIT_NOFILE set to N2 (which is << N1).
The open call should fail.  Another, apparently more common, solution is
to have the setrlimit call fail in case the new limit is lower than the
highest file descriptor in use.  Returning EINVAL in this case is just
fine and is apparently what other platforms do.
One could also take the position that the current behavior has its
advantages.  A program could open all the file descriptors it needs, and
then close a few which can be used to open some files for the normal
mode of operation.  Possible, maybe even quite secure, but it still
smells a bit like a hack.
It might also be that some wording is getting into the specification
which will allow the current kernel behavior to continue to exist.  More
through a loophole, but still.
Anyway, I think the existing behavior is probably an oversight.  Whether
it is worth keeping is a question somebody (= Linus) will have to
answer.  My recommendation would probably be to make setrlimit fail.
--
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖
[-- Attachment #2: t.c --]
[-- Type: text/x-c, Size: 1193 bytes --]
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int
main (void)
{
#define N1 100
  struct rlimit r;
  r.rlim_cur = N1;
  r.rlim_max = N1;
  setrlimit (RLIMIT_NOFILE, &r);

  int i;
  int fd[N1];
  /* Open descriptors until the limit is hit.  With stdin, stdout, and
     stderr already open, the open call should fail at i == N1 - 3.  */
  for (i = 0; i < N1; ++i)
    if ((fd[i] = open ("/dev/null", O_RDONLY)) < 0)
      {
	if (i == N1 - 3)
	  printf ("fine, %d file descriptors open\n", N1);
	else
	  {
	    puts ("*** make sure the parent doesn't open any descriptors other than 0, 1, 2");
	    return 1;
	  }
	break;
      }

#define N2 50
  /* Lower the limit below the number of descriptors currently open.
     setrlimit returns -1 and sets errno on failure.  */
  r.rlim_cur = N2;
  r.rlim_max = N2;
  if (setrlimit (RLIMIT_NOFILE, &r) < 0 && errno == EINVAL)
    {
      puts ("good, setrlimit sees the open descriptors");
      return 0;
    }

  getrlimit (RLIMIT_NOFILE, &r);
  if (r.rlim_cur != N2 || r.rlim_max != N2)
    {
      puts ("getrlimit returned different values");
      return 1;
    }

  /* Close one descriptor below the new limit and try to open another.
     With N1 - 1 descriptors still open and the limit now N2, a limit on
     the *number* of open descriptors would make this open fail.  */
  close (fd[N2 - 4]);
  fd[N2 - 4] = open ("/dev/null", O_RDONLY);
  if (fd[N2 - 4] != -1)
    {
      printf ("opening a new descriptor succeeded even though %d are open and the limit is %d\n",
	      N1 - 1, N2);
      return 1;
    }
  return 0;
}
* Re: process file descriptor limit handling
From: Chris Wright @ 2005-03-09 23:28 UTC (permalink / raw)
To: Ulrich Drepper; +Cc: linux-kernel, Linus Torvalds
* Ulrich Drepper (drepper@redhat.com) wrote:
> The current kernel (and all before it, as far as I can see) has a problem
> with file descriptor limit handling.  The behavior does not conform to
> the current POSIX spec.
<snip>
> It might also be that some wording is getting into the specification
> which will allow the current kernel behavior to continue to exist.  More
> through a loophole, but still.
This seems to be the case.  SUSv3 says:
setrlimit
    RLIMIT_NOFILE
        This is a number one greater than the maximum value that the
        system may assign to a newly-created descriptor.  If this limit
        is exceeded, functions that allocate a file descriptor shall
        fail with errno set to [EMFILE].  This limit constrains the
        number of file descriptors that a process may allocate.

open
    [EMFILE]
        {OPEN_MAX} file descriptors are currently open in the calling
        process.

limits.h
    {OPEN_MAX}
        Maximum number of files that one process can have open at any
        one time.
        Minimum Acceptable Value: {_POSIX_OPEN_MAX}
So, one view says your test program is within the spec, since the value
of the new fd is still less than the current rlimit.
Anyway, here's a simple patch that would fail the second setrlimit, as you
suggested.
thanks,
-chris
--
Linux Security Modules http://lsm.immunix.org http://lsm.bkbits.net