From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@lst.de>
Cc: kernel test robot <ying.huang@linux.intel.com>,
xfs@oss.sgi.com, lkp@01.org, LKML <linux-kernel@vger.kernel.org>,
Dave Chinner <dchinner@redhat.com>
Subject: Re: [lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
Date: Mon, 22 Feb 2016 22:22:14 +1100 [thread overview]
Message-ID: <20160222112214.GF25832@dastard> (raw)
In-Reply-To: <20160222085409.GA19493@lst.de>
On Mon, Feb 22, 2016 at 09:54:09AM +0100, Christoph Hellwig wrote:
> On Fri, Feb 19, 2016 at 05:49:32PM +1100, Dave Chinner wrote:
> > That doesn't really seem right. The writeback should be done as a
> > single ioend, with a single completion, with a single setsize
> > transaction, and then all the pages are marked clean sequentially.
> > The above behaviour implies we are ending up doing something like:
> >
> > fsync proc io completion
> > wait on page 0
> > end page 0 writeback
> > wake up page 0
> > wait on page 1
> > end page 1 writeback
> > wake up page 1
> > wait on page 2
> > end page 2 writeback
> > wake up page 2
> >
> > Though in slightly larger batches than a single page (10 wakeups a
> > file, so batches of around 100 pages per wakeup?). i.e. the fsync
> > IO wait appears to be racing with IO completion marking pages as
> > done. I simply cannot see how the above change would cause that, as
> > it was simply a change in the IO submission code that doesn't affect
> > overall size or shape of the IOs being submitted.
>
> Could this be the lack of blk plugs, which will cause us to complete
> too early?
No, because block plugging is still in place in the patch the
regression is reported against. The difference the patch makes is
that we don't do any IO submission while building the ioend chain,
and instead submit it all in one hit at the end of the ->writepages
call.
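Roughly, the shape of that intermediate patch is the following. This
is a hedged pseudocode sketch, not the actual XFS code:
build_ioend_chain() and submit_ioend_chain() are illustrative names
for the chain-building and deferred-submission steps, and the real
->writepages path carries more state than shown here.

```c
/*
 * Pseudocode sketch of the intermediate patch: submission is
 * deferred until the whole ioend chain has been built.
 */
static int example_writepages(struct address_space *mapping,
			      struct writeback_control *wbc)
{
	struct ioend *chain = NULL;	/* illustrative chain head */
	struct blk_plug plug;
	int error;

	blk_start_plug(&plug);		/* plugging is still in place */

	/* build the entire ioend chain first; no IO is issued here */
	error = build_ioend_chain(mapping, wbc, &chain);

	/* ...then submit everything in one hit at the end */
	if (chain)
		submit_ioend_chain(chain);

	blk_finish_plug(&plug);
	return error;
}
```

The point is that the plug still brackets all submission, so the
block layer sees the same batching; only the point at which bios are
handed to the block layer moves.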
However, this is an intermediate patch in the series; later patches
correct this, and four commits later we end up with bios being built
directly and submitted the moment they are full. With the entire
series in place, I can't reproduce any sort of bad behaviour, nor do
I see any repeatable performance differential.
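The end state of the series looks roughly like this. Again a hedged
pseudocode sketch under the 4.5-era block APIs: alloc_new_bio() is an
illustrative helper, not a real function, and the real code tracks
the ioend alongside the bio.

```c
/*
 * Pseudocode sketch of the end of the series: pages are added to a
 * bio directly, and a full bio is submitted immediately rather than
 * being held on a chain until ->writepages completes.
 */
static void example_add_page(struct bio **biop, struct page *page,
			     sector_t sector)
{
	/* bio_add_page() returns 0 when the bio can take no more */
	if (*biop && !bio_add_page(*biop, page, PAGE_SIZE, 0)) {
		submit_bio(WRITE, *biop);  /* bio full: submit it now */
		*biop = NULL;
	}
	if (!*biop) {
		*biop = alloc_new_bio(sector);	/* illustrative helper */
		bio_add_page(*biop, page, PAGE_SIZE, 0);
	}
}
```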
So I really want to know whether this regression is seen with the
entire patchset applied. If I can't reproduce it on a local ramdisk
or real storage, then we need to decide how much we care about fsync
performance on a volatile ramdisk...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs