public inbox for linux-kernel@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Al Viro <viro@zeniv.linux.org.uk>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	tux3@tux3.org
Subject: Re: [PATCH] Optimize wait_sb_inodes()
Date: Thu, 27 Jun 2013 14:47:05 +1000	[thread overview]
Message-ID: <20130627044705.GB29790@dastard> (raw)
In-Reply-To: <87wqpg76ls.fsf@devron.myhome.or.jp>

On Thu, Jun 27, 2013 at 09:14:07AM +0900, OGAWA Hirofumi wrote:
> Dave Chinner <david@fromorbit.com> writes:
> 
> >> On another view, wait_sb_inodes() would (arguably) be necessary for
> >> legacy FSes. But, for example, with data=journal on ext*,
> >> wait_sb_inodes() would be worse than useless, because ext* can do
> >> the same thing with its own transaction list (and more efficiently).
> >> 
> >> Likewise, on tux3 the situation is the same as with data=journal.
> >> 
> >> Also, even with data=ordered, ext* might be able to check in-flight
> >> I/O via its ordered-data list (with some additional new check; I'm
> >> not sure).
> >
> > Why would you bother solving this problem differently in every
> > single filesystem? It's solvable at the VFS by tracking inodes that
> > are no longer dirty but still under writeback on the BDI. Then
> > converting wait_sb_inodes() to walk all the dirty and writeback
> > inodes would be sufficient for data integrity purposes, and it would
> > be done under the bdi writeback lock, not the inode_sb_list_lock....
> >
> > Alternatively, splitting up the inode sb list and lock (say via the
> > per-node list_lru structures in -mm and -next that are being added
> > for exactly this purpose) would also significantly reduce lock
> > contention on both the create/evict fast paths and the
> > wait_sb_inodes() walk that is currently done....
> >
> > So I think that you should address the problem properly at the VFS
> > level so everyone benefits, not push interfaces that allow
> > filesystem specific hacks to work around VFS level deficiencies...
> 
> Optimizing wait_sb_inodes() might reduce lock contention, but it
> doesn't eliminate the unnecessary waiting and checking.

You have your own wait code; that doesn't make what the VFS does
unnecessary. Quite frankly, I don't trust individual filesystems to
get it right - there's a long history of filesystem-specific data
sync problems (including in XFS), and the best way to avoid that is
to ensure the VFS gets it right for you.

Indeed, we've gone from having sooper special secret sauce data sync
code in XFS to using the VFS over the past few years, and the result
is that it is now more reliable and faster than when we were trying
to be smart and do it all ourselves. We got to where we are by
fixing the problems in the VFS rather than continuing to try to work
around them.

> Since some FSes already track current in-flight I/O internally, I
> think those FSes can handle this themselves here, or are already
> doing so in ->sync_fs().

Sure, do your internal checks in ->sync_fs(), but if
wait_sb_inodes() does not have any lock contention and very little
overhead, then why do you need to avoid it? This wait has to be done
somewhere between sync_inodes_sb() dispatching all the IO and
->sync_fs completing, so what's the problem with having the VFS do
that *for everyone* efficiently?

Fix the root cause of the problem - the sub-optimal VFS code.
Hacking around it specifically for out-of-tree code is not the way
things get done around here...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 20+ messages
2013-06-26  8:45 [PATCH] Optimize wait_sb_inodes() OGAWA Hirofumi
2013-06-26 14:32 ` Jörn Engel
2013-06-27  0:01   ` OGAWA Hirofumi
2013-06-26 23:11 ` Dave Chinner
2013-06-27  0:14   ` OGAWA Hirofumi
2013-06-27  4:47     ` Dave Chinner [this message]
2013-06-27  5:18       ` OGAWA Hirofumi
2013-06-27  6:38         ` Dave Chinner
2013-06-27  7:22           ` OGAWA Hirofumi
2013-06-27  9:40             ` Dave Chinner
2013-06-27 10:06               ` OGAWA Hirofumi
2013-06-27 18:36                 ` Theodore Ts'o
2013-06-27 23:37                   ` OGAWA Hirofumi
2013-06-27 23:45                     ` Theodore Ts'o
2013-06-28  0:30                       ` OGAWA Hirofumi
2013-06-28  5:23                         ` Dave Chinner
2013-06-28  7:39                           ` OGAWA Hirofumi
2013-06-28  3:00                   ` Daniel Phillips
2013-06-27  7:22         ` Daniel Phillips
2013-06-27  5:50       ` Daniel Phillips
