From: Dave Chinner <david@fromorbit.com>
To: Xupeng Yun <xupeng@xupeng.me>
Cc: XFS group <xfs@oss.sgi.com>
Subject: Re: Bad performance with XFS + 2.6.38 / 2.6.39
Date: Mon, 12 Dec 2011 12:00:53 +1100
Message-ID: <20111212010053.GM14273@dastard>
In-Reply-To: <CACaf2aYTsxOBXEJEbQu7gwAminBc3R2usDHvypJW0AqOfnz0Pg@mail.gmail.com>
On Mon, Dec 12, 2011 at 08:40:15AM +0800, Xupeng Yun wrote:
> On Mon, Dec 12, 2011 at 07:39, Dave Chinner <david@fromorbit.com> wrote:
> >
> > > ====== XFS + 2.6.29 ======
> >
> > Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
> > Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
> > Total 1.5m IOs, 95% @ <= 2ms
> >
> > > ====== XFS + 2.6.39 ======
> >
> > Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> > Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> > Total 460k IOs, 95% <= 10ms (over 50% between 4ms and 10ms)
> >
> > Looking at the IO stats there, this doesn't look to me like an XFS
> > problem. The IO times are much, much longer on 2.6.39, so that's the
> > first thing to understand. If the two tests are doing identical IO
> > patterns, then I'd be looking at validating raw device performance
> > first.
> >
>
> Thank you Dave.
>
> I also did raw device and ext4 performance tests with 2.6.39; all of
> these tests use identical IO patterns (non-buffered IO, 16 IO threads,
> 16KB block size, mixed random reads and writes, r:w = 9:1):
> ====== raw device + 2.6.39 ======
> Read 21.7GB @ 11.6k IOPS , 185MB/s, av latency of 1.37 ms/IO
> Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.095 ms/IO
> Total 1.5M IOs, @ 96% <= 2ms
>
> ====== ext4 + 2.6.39 ======
> Read 21.7GB @ 11.6k IOPS , 185MB/s, av latency of 1.37 ms/IO
> Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.1 ms/IO
> Total 1.5M IOs, @ 96% <= 2ms
>
> ====== XFS + 2.6.39 ======
> Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> Total 460k IOs, 95% <= 10ms (over 50% between 4ms and 10ms)
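For reference, the report above doesn't name the benchmark tool; if it was fio, that workload (direct IO, 16 threads, 16KB blocks, 90/10 random read/write mix) would map onto a job file roughly like the following. The engine, filename, size, and runtime are assumptions, not taken from the report:

```ini
[randrw-16k]
ioengine=libaio        ; assumption - the report doesn't say which engine
direct=1               ; non-buffered (direct) IO
rw=randrw
rwmixread=90           ; r:w = 9:1
bs=16k
thread
numjobs=16             ; 16 IO threads
filename=/path/to/testfile   ; placeholder
size=4g                ; placeholder
runtime=300
time_based
```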
Oh, of course, now I remember what the problem is - it's a locking
issue that was fixed in 3.0.11, 3.1.5 and 3.2-rc1.
commit 0c38a2512df272b14ef4238b476a2e4f70da1479
Author: Dave Chinner <dchinner@redhat.com>
Date: Thu Aug 25 07:17:01 2011 +0000
xfs: don't serialise direct IO reads on page cache checks
There is no need to grab the i_mutex or the IO lock in exclusive
mode if we don't need to invalidate the page cache. Taking these
locks on every direct IO effectively serialises them, as taking the
IO lock in exclusive mode has to wait for all shared holders to drop
the lock. That only happens when IO is complete, so effectively it
prevents dispatch of concurrent direct IO reads to the same inode.
Fix this by taking the IO lock shared to check the page cache state,
and only then drop it and take the IO lock exclusively if there is
work to be done. Hence for the normal direct IO case, no exclusive
locking will occur.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Joern Engel <joern@logfs.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Alex Elder <aelder@sgi.com>
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs