From: Greg KH <gregkh@linuxfoundation.org>
To: Russ Dill <russ.dill@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>,
linux-kernel <linux-kernel@vger.kernel.org>,
Nick Kossifidis <mickflemm@gmail.com>,
"Theodore Ts'o" <tytso@mit.edu>
Subject: Re: fasync race in fs/fcntl.c
Date: Mon, 4 Mar 2013 15:39:08 +0800
Message-ID: <20130304073908.GA982@kroah.com>
In-Reply-To: <CA+Bv8XaEBZz3RWh5=y-kLoEMdjxwNFBUBBxUKdq-NuV6iZ=QRg@mail.gmail.com>
On Sun, Mar 03, 2013 at 10:16:10PM -0800, Russ Dill wrote:
> On Sat, Mar 2, 2013 at 4:09 PM, Russ Dill <russ.dill@gmail.com> wrote:
> > On Sat, Mar 2, 2013 at 11:49 AM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> >> On Sat, Mar 02, 2013 at 03:00:28AM -0800, Russ Dill wrote:
> >>> I'm seeing a race in fs/fcntl.c. I'm not sure exactly how the race is
> >>> occurring, but the following is my best guess. A kernel log is
> >>> attached.
> >>
> >> [snip the analysis - it's a different lock anyway]
> >>
> >> The traces below are essentially sys_execve() getting to get_random_bytes(),
> >> to kill_fasync(), to send_sigio(), which spins on tasklist_lock.
> >>
> >> Could you rebuild it with lockdep enabled and try to reproduce that?
> >> I very much doubt that this execve() is a part of deadlock - it's
> >> getting caught on one, but it shouldn't be holding any locks that
> >> nest inside tasklist_lock at that point, so even it hadn't been there,
> >> the process holding tasklist_lock probably wouldn't have progressed any
> >> further...
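
[Rebuilding with lockdep enabled, as requested above, usually means turning on the following mainline Kconfig options; a sketch of a typical debug configuration, exact option availability varies by kernel version:]

```
# Lock dependency validator (lockdep) and related debug options
CONFIG_PROVE_LOCKING=y        # selects CONFIG_LOCKDEP; reports lock-order cycles
CONFIG_DEBUG_LOCK_ALLOC=y     # catch locks freed or reinitialized while held
CONFIG_DEBUG_ATOMIC_SLEEP=y   # optional: catch sleeping in atomic context
```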
> >
> > OK, I did screw up the analysis quite badly; luckily, lockdep caught it right away.
> >
>
> Lockdep gives some clues but seems a bit confused, so here's what happened:
>
> mix_pool_bytes /* takes nonblocking_pool.lock */
> add_device_randomness
> posix_cpu_timers_exit
> __exit_signal
> release_task /* takes write lock on tasklist_lock */
> do_exit
> __module_put_and_exit
> cryptomgr_test
>
> send_sigio /* takes read lock on tasklist_lock */
> kill_fasync_rcu
> kill_fasync
> account /* takes nonblocking_pool.lock */
> extract_entropy
> get_random_bytes
> create_elf_tables
> load_elf_binary
> load_elf_library
> search_binary_handler
>
> This would mark the culprit as commit 613370549 ('random: Mix cputime
> from each thread that exits to the pool'). So long as I haven't botched
> this analysis the way I did the last one, may I suggest reverting that
> commit for 3.8.3?
I'll revert it, but shouldn't we fix this properly upstream in Linus's
tree as well? I'd rather take the fix than a revert so that we don't
have a problem that no one remembers to fix until 3.9-final is out.
thanks,
greg k-h
Thread overview: 11+ messages
2013-03-02 11:00 fasync race in fs/fcntl.c Russ Dill
2013-03-02 17:54 ` Al Viro
2013-03-02 18:42 ` Al Viro
2013-03-02 19:25 ` Al Viro
2013-03-02 19:49 ` Al Viro
2013-03-03 0:09 ` Russ Dill
2013-03-04 6:16 ` Russ Dill
2013-03-04 7:39 ` Greg KH [this message]
2013-03-04 8:03 ` [PATCH] Revert "random: Mix cputime from each thread that exits to the pool" Russ Dill
2013-03-04 17:05 ` Theodore Ts'o
2013-03-04 19:33 ` Russ Dill