From: Dave Chinner <david@fromorbit.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>, Jan Kara <jack@suse.cz>,
Dave Jones <davej@redhat.com>, Oleg Nesterov <oleg@redhat.com>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Linux Kernel <linux-kernel@vger.kernel.org>,
"Eric W. Biederman" <ebiederm@xmission.com>,
Andrey Vagin <avagin@openvz.org>,
Steven Rostedt <rostedt@goodmis.org>
Subject: Re: frequent softlockups with 3.10rc6.
Date: Fri, 28 Jun 2013 17:21:41 +1000
Message-ID: <20130628072141.GB9047@dastard>
In-Reply-To: <CA+55aFyZYsbMpP+6dkdkhdDn9gpTx0dkv25MUtcnswer_a2x9w@mail.gmail.com>
On Thu, Jun 27, 2013 at 07:59:50PM -1000, Linus Torvalds wrote:
> On Thu, Jun 27, 2013 at 5:54 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
> >>
> >> So what made it all start happening now? I don't recall us having had
> >> these kinds of issues before..
> >
> > Not sure - it's a sudden surprise for me, too. Then again, I haven't
> > been looking at sync from a performance or lock contention point of
> > view any time recently. The algorithm that wait_sb_inodes() uses is
> > effectively unchanged since at least 2009, so it's probably a case
> > of it having been protected from contention by some external factor
> > we've fixed/removed recently. Perhaps the bdi-flusher thread
> > replacement in -rc1 has changed the timing sufficiently that it no
> > longer serialises concurrent sync calls as much....
> >
> > However, the inode_sb_list_lock is known to be badly contended
> > in the create/unlink fastpath for XFS, so it's not like this sort
> > of thing is completely unexpected.
>
> That whole inode_sb_list_lock seems moronic. Why isn't it a per-sb
> one? No, that won't fix all problems, but it might at least help a
> *bit*.
Historic. That's how we initially split up the old global inode_lock
in 2.6.38 in preparation for the RCU dentry walk code. It was never
intended as a long term solution.....
Besides, making the inode_sb_list_lock per sb won't help solve this
problem, anyway. The case that I'm testing involves a filesystem
that contains 99.97% of all inodes cached by the system. This is a
pretty common situation....
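(For reference, the mechanical part of the split is trivial - the
per-sb inode list already exists, so it's just a matter of giving it
a per-sb lock. Untested sketch, with s_inode_list_lock being a name
I've just made up:

	/* include/linux/fs.h */
	struct super_block {
		...
		struct list_head	s_inodes;	/* all inodes */
		spinlock_t		s_inode_list_lock; /* protects s_inodes */
		...
	};

	/* fs/inode.c: take the per-sb lock instead of the global one */
	void inode_sb_list_add(struct inode *inode)
	{
		struct super_block *sb = inode->i_sb;

		spin_lock(&sb->s_inode_list_lock);
		list_add(&inode->i_sb_list, &sb->s_inodes);
		spin_unlock(&sb->s_inode_list_lock);
	}

It buys nothing here because sync() walks a single sb's list, and
that single sb owns almost all the cached inodes anyway.)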
> Also, looking some more now at that wait_sb_inodes logic, I have to
> say that if the problem is primarily the inode->i_lock, then that's
> just crazy. We normally shouldn't even *need* that lock, since we
> could do a totally unlocked iget() as long as the count is non-zero.
The problem is not the inode->i_lock. lockstat is pretty clear on
that...
> And no, I don't think we really need the i_lock for checking
> "mapping->nrpages == 0" or the magical "inode is being freed" bits
> either. Or at least we could easily do some of this optimistically for
> the common cases.
Right, we could check some of it optimistically, but we'd still be
walking millions of inodes under the inode_sb_list_lock on each
sync() call just to find the one inode that is dirty. It's like
polishing a turd - no matter how shiny you make it, it's still just
a pile of shit.
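To put some shape on that: the loop in question is basically this
(paraphrasing mainline fs/fs-writeback.c from memory, so the details
may be slightly off):

	spin_lock(&inode_sb_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		struct address_space *mapping = inode->i_mapping;

		spin_lock(&inode->i_lock);
		if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
		    (mapping->nrpages == 0)) {
			/* the overwhelmingly common case: nothing to wait on */
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(&inode_sb_list_lock);

		/* wait on the rare dirty inode, then drop the previous ref */
		iput(old_inode);
		old_inode = inode;
		filemap_fdatawait(mapping);
		cond_resched();

		spin_lock(&inode_sb_list_lock);
	}
	spin_unlock(&inode_sb_list_lock);
	iput(old_inode);

So every sync() traverses every cached inode in the system under a
global spinlock just to find the handful that have pages to wait on.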
> I'm attaching a pretty trivial patch, which may obviously be trivially
> totally flawed. I have not tested this in any way, but half the new
> lines are comments about why it's doing what it is doing. And I
> really think that it should make the "actually take the inode lock" be
> something quite rare.
It looks ok, but I still think it is solving the wrong problem.
FWIW, your optimisation has much wider application than just this
one place. I'll have a look to see how we can apply this approach
across all the inode lookup+validate code we currently have that
unconditionally takes the inode->i_lock....
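The shape of it is the usual double-checked pattern - do an unlocked
peek first, and only take inode->i_lock to revalidate and grab a
reference once the unlocked check says the inode is interesting.
Something like this (untested sketch):

	/* unlocked, optimistic check - skip obviously uninteresting
	 * inodes without ever touching inode->i_lock */
	if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
	    mapping->nrpages == 0)
		continue;

	spin_lock(&inode->i_lock);
	/* state may have changed while we weren't looking, so
	 * re-check under the lock before taking a reference */
	if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
	    mapping->nrpages == 0) {
		spin_unlock(&inode->i_lock);
		continue;
	}
	__iget(inode);
	spin_unlock(&inode->i_lock);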
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com