From: Rasmus Villemoes <linux@rasmusvillemoes.dk>
To: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC] vfs: don't bother clearing close_on_exec bit for unused fds
Date: Tue, 3 Nov 2015 10:41:19 +0100
Message-ID: <1446543679-28849-1-git-send-email-linux@rasmusvillemoes.dk>
Commit fc90888d07b8 ("vfs: conditionally clear close-on-exec flag") added
a conditional to __clear_close_on_exec() to avoid dirtying a cache line
in the common case where the bit is already clear. However, AFAICT we
don't rely on the close_on_exec bit being clear for unused fds at all,
except as an optimization in do_close_on_exec(): unless I've missed
something, __{set,clear}_close_on_exec() is always called when a new fd
is allocated, so any stale bit is overwritten before the fd can be
observed again. At the expense of also reading through ->open_fds in
do_close_on_exec(), we can thus avoid touching the close_on_exec bitmap
altogether in close(), which I think is a reasonable trade-off.
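
To spell out the invariant this relies on, here is a paraphrased sketch
of how every fd allocation ends, modelled on __alloc_fd() in fs/file.c.
The helper name is mine and the details are from memory, so treat it as
illustrative rather than the literal code:

  /* Hypothetical condensation of the tail of the fd allocation path. */
  static int alloc_fd_tail(int fd, unsigned flags, struct fdtable *fdt)
  {
          __set_open_fd(fd, fdt);                 /* mark the slot in use */
          if (flags & O_CLOEXEC)
                  __set_close_on_exec(fd, fdt);   /* the bit is always written... */
          else
                  __clear_close_on_exec(fd, fdt); /* ...one way or the other */
          return fd;
  }

Since the else branch writes the bit explicitly, a bit left set by
close() can never leak to a newly allocated fd.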
The conditional added by that commit still makes sense for avoiding the
dirtying on the allocation paths, but I think the same test is also
worthwhile in __set_close_on_exec(): I suppose any given app handling a
non-trivial number of fds uses O_CLOEXEC for either almost none or
almost all of them, so the test will usually find the bit already in
the desired state.
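
As a sanity check that none of this is visible to userspace, here is a
minimal test program; it assumes the usual fcntl(F_GETFD) semantics and
relies on the second open() typically reusing the slot freed by close():

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
          printf("fd=%d cloexec=%d\n", fd, fcntl(fd, F_GETFD) & FD_CLOEXEC);
          close(fd);                        /* bitmap bit now stays set */
          fd = open("/dev/null", O_RDONLY); /* usually reuses the slot */
          printf("fd=%d cloexec=%d\n", fd, fcntl(fd, F_GETFD) & FD_CLOEXEC);
          close(fd);
          return 0;
  }

The second line should print cloexec=0 both before and after this patch,
precisely because the allocation path clears the bit explicitly.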
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
---
I'm sure I've missed something, hence the RFC. But if not, there are
probably also a few memsets that become redundant. And the
__set_close_on_exec() part should probably be its own patch...
 fs/file.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index c6986dce0334..93cfbcd450c3 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -231,7 +231,8 @@ repeat:
 
 static inline void __set_close_on_exec(int fd, struct fdtable *fdt)
 {
-        __set_bit(fd, fdt->close_on_exec);
+        if (!test_bit(fd, fdt->close_on_exec))
+                __set_bit(fd, fdt->close_on_exec);
 }
 
 static inline void __clear_close_on_exec(int fd, struct fdtable *fdt)
@@ -644,7 +645,6 @@ int __close_fd(struct files_struct *files, unsigned fd)
         if (!file)
                 goto out_unlock;
         rcu_assign_pointer(fdt->fd[fd], NULL);
-        __clear_close_on_exec(fd, fdt);
         __put_unused_fd(files, fd);
         spin_unlock(&files->file_lock);
         return filp_close(file, files);
@@ -667,7 +667,7 @@ void do_close_on_exec(struct files_struct *files)
                 fdt = files_fdtable(files);
                 if (fd >= fdt->max_fds)
                         break;
-                set = fdt->close_on_exec[i];
+                set = fdt->close_on_exec[i] & fdt->open_fds[i];
                 if (!set)
                         continue;
                 fdt->close_on_exec[i] = 0;
--
2.6.1