From: Eric Sandeen <sandeen@sandeen.net>
To: xfs mailing list <xfs@oss.sgi.com>
Subject: Re: [PATCH] fix readahead calculations in xfs_dir2_leaf_getdents()
Date: Wed, 23 Sep 2009 13:42:54 -0500
Message-ID: <4ABA6C2E.2080104@sandeen.net>
In-Reply-To: <4ABA5192.80509@sandeen.net>
Eric Sandeen wrote:
> This is for bug #850,
> http://oss.sgi.com/bugzilla/show_bug.cgi?id=850
> "XFS file system segfaults, repeatedly and 100% reproducible, in 2.6.30, 2.6.31"
Grr, well, this slowed things down a little, with about 200,000 entries in a
~10MB directory on a single SATA spindle.
Stracing /bin/ls (no color/stats, output to /dev/null) 4x in a row, with
cache drops in between, shows:
stock:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.241378         414       583           getdents
100.00    0.231012         396       583           getdents
100.00    0.244977         420       583           getdents
100.00    0.258624         444       583           getdents
patched:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.285928         769       372           getdents
100.00    0.273747         736       372           getdents
100.00    0.271060         729       372           getdents
100.00    0.251360         676       372           getdents
So that's slowed down a bit. Odd that the original, with more getdents
calls, was faster overall...?
But one thing I noticed is that we choose readahead based on a guess at
the readdir buffer size, and at least glibc's readdir has this:
    const size_t default_allocation =
        (4 * BUFSIZ < sizeof (struct dirent64) ?
         sizeof (struct dirent64) : 4 * BUFSIZ);
where BUFSIZ is a magical 8192.
But we cap it at PAGE_SIZE, which gives us almost no readahead ...
So, bumping our "bufsize" up to 32k, things speed up nicely. I wonder if
the stock, broken bufsize method led to more inadvertent readahead....
32k:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.176826         475       372           getdents
100.00    0.177491         477       372           getdents
100.00    0.176548         475       372           getdents
100.00    0.139812         376       372           getdents
Think it's worth it?
-Eric
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 8+ messages
2009-09-23 16:49 [PATCH] fix readahead calculations in xfs_dir2_leaf_getdents() Eric Sandeen
2009-09-23 18:42 ` Eric Sandeen [this message]
2009-09-23 20:29 ` Michael Monnerie
2009-09-25 19:42 ` [PATCH V2] " Eric Sandeen
2009-09-26 17:04 ` Christoph Hellwig
2009-09-26 18:03 ` Eric Sandeen
2009-10-07 22:22 ` Alex Elder
2009-10-07 22:24 ` Eric Sandeen