From: Dave Chinner <david@fromorbit.com>
To: Shrinand Javadekar <shrinand@maginatics.com>
Cc: xfs@oss.sgi.com
Subject: Re: Inode and dentry cache behavior
Date: Fri, 24 Apr 2015 16:15:54 +1000
Message-ID: <20150424061554.GN15810@dastard>
In-Reply-To: <CABppvi7+Mu78FAM75YvJvekX2CHtKk4yeMrU7j35fvvWRb923Q@mail.gmail.com>
On Thu, Apr 23, 2015 at 04:48:51PM -0700, Shrinand Javadekar wrote:
> > from the iostat log:
> >
> > Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
> > .....
> > dm-6 0.00 0.00 0.20 22.40 0.00 0.09 8.00 22.28 839.01 1224.00 835.57 44.25 100.00
> > dm-7 0.00 0.00 0.00 1.20 0.00 0.00 8.00 2.82 1517.33 0.00 1517.33 833.33 100.00
> > dm-8 0.00 0.00 0.00 195.20 0.00 0.76 8.00 1727.51 4178.89 0.00 4178.89 5.12 100.00
> > ...
> > dm-7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 100.00
> > dm-8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1178.85 0.00 0.00 0.00 0.00 100.00
> >
> > dm-7 is showing almost a second for single IO wait times, when it is
> > actually completing IO. dm-8 has a massive queue depth - I can only
> > assume you've tuned sys/block/*/queue/nr_requests to something
> > really large? But like dm-7, it's showing very long IO times, and
> > that's likely the source of your latency problems.
>
> I see that /sys/block/*/queue/nr_requests is set to 128 which is way
> less than the queue depth shown in the iostat numbers. What gives?
No idea, but it's indicative of a problem below XFS. Work out what
is happening with your storage hardware first, then work your way up
the stack...
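One rough cross-check while working up the stack: by Little's law, the average number of requests in flight is roughly IOPS × await. A minimal sketch against the dm-8 sample quoted above (field positions assume the extended iostat layout shown there: r/s is field 4, w/s field 5, await field 10; iostat derives avgqu-sz from time-weighted sampling, so the two numbers won't match exactly):

```shell
# Little's law estimate of in-flight requests: (r/s + w/s) * await / 1000.
# Input line is the dm-8 sample from the iostat log above.
echo "dm-8 0.00 0.00 0.00 195.20 0.00 0.76 8.00 1727.51 4178.89 0.00 4178.89 5.12 100.00" |
awk '{ printf "%s est_queue=%.1f\n", $1, ($4 + $5) * $10 / 1000 }'
# prints: dm-8 est_queue=815.7
```

Either way the estimate is far above nr_requests=128, which points at requests being held somewhere below the block layer's per-device queue limit.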
> One other observation we had was that xfs shows a large amount of
> directory fragmentation. Directory fragmentation was shown at ~40%
> whereas file fragmentation was very low at 0.1%.
Pretty common. Directories are only accessed a single block at a
time, and sequential offset reads are pretty rare, so fragmentation
makes little difference to performance. You're seeing almost zero
read IO load, so the directory layout is not a concern for this
workload.
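For reference, separate directory and file fragmentation figures like the ~40% / 0.1% above can be obtained with xfs_db's frag command; a sketch, with the device path as a placeholder for the actual XFS block device (run read-only, and treat the output as approximate on a mounted filesystem):

```shell
# Read-only inspection; /dev/dm-7 is a placeholder device path.
xfs_db -r -c "frag -d" /dev/dm-7   # fragmentation of directory data only
xfs_db -r -c "frag -f" /dev/dm-7   # fragmentation of regular file data only
```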
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs