From: Ric Wheeler <rwheeler@redhat.com>
To: linux-ext4@vger.kernel.org
Cc: Theodore Tso <tytso@mit.edu>, Eric Sandeen <sandeen@redhat.com>
Subject: suspiciously good fsck times?
Date: Thu, 10 Jul 2008 13:29:22 -0400
Message-ID: <487646F2.9030706@redhat.com>
(Repost to the list - this was mistakenly sent to linux-ext4-owner)
Just to be mean, I have been trying to test the fsck speed of ext4 with
lots of small files. The test I ran uses fs_mark to fill a 1TB Seagate
drive with 45.6 million 20k files (distributed between 256 subdirectories).
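For anyone curious what that workload looks like, here is a scaled-down sketch in Python of the same layout - small fixed-size files spread round-robin across numbered subdirectories. The directory names, counts, and sizes below are illustrative stand-ins, not the actual fs_mark parameters from the run:

```python
import os

# Scaled-down model of the fs_mark workload: fixed-size files
# distributed round-robin across a set of subdirectories.
NUM_DIRS = 16          # the real run used 256 subdirectories
NUM_FILES = 256        # the real run created 45.6 million files
FILE_SIZE = 20 * 1024  # 20k per file, as in the real run
ROOT = "fsmark_demo"   # illustrative path, not from the original test

payload = b"\0" * FILE_SIZE
for d in range(NUM_DIRS):
    os.makedirs(os.path.join(ROOT, f"dir{d:03d}"), exist_ok=True)
for i in range(NUM_FILES):
    subdir = os.path.join(ROOT, f"dir{i % NUM_DIRS:03d}")
    with open(os.path.join(subdir, f"file{i:06d}"), "wb") as f:
        f.write(payload)
```

At the real scale this is exactly the case that stresses fsck pass 1 and 2: tens of millions of inodes and directory entries, little data per inode.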
Running on ext3, "fsck -f" takes about one hour.
Running on ext4, with uninit_bg, the same fsck is finished in a bit over
5 minutes - more than 10x faster. (Without uninit_bg, the fsck takes
about 10 minutes).
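To try the uninit_bg comparison on a small scale without dedicating a disk, the feature can be enabled at mkfs time on a loopback image file - a minimal sketch, with an illustrative path and size rather than the 1TB setup used here:

```shell
# Build a small ext4 image with uninit_bg enabled, then check it.
# /tmp/ext4.img is an illustrative path, not from the original test.
dd if=/dev/zero of=/tmp/ext4.img bs=1M count=64 status=none
mkfs.ext4 -q -F -O uninit_bg /tmp/ext4.img
fsck.ext4 -f -n /tmp/ext4.img
```

With uninit_bg, block groups whose inodes were never used are marked as such, so fsck can skip scanning them entirely - which is where most of the pass 1 savings come from.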
Is this too good to be true? Below is the fsck run itself; the kernel is
Ted's latest git tree, checked with his 1.41-WIP e2fsprogs tools.
ric
[root@localhost Perf]# time /sbin/fsck.ext4 -t -t -f /dev/sdb1
e4fsck 1.41-WIP (07-Jul-2008)
Pass 1: Checking inodes, blocks, and sizes
Pass 1: Memory used: 40632k/69424k (36424k/4209k), time: 204.95/78.22/25.58
Pass 1: I/O read: 11140MB, write: 0MB, rate: 54.35MB/s
Pass 2: Checking directory structure
Pass 2: Memory used: 70184k/61968k (51803k/18382k), time: 76.47/50.27/ 8.77
Pass 2: I/O read: 3023MB, write: 0MB, rate: 39.53MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 70184k/61968k (59256k/10929k), time: 281.72/128.59/34.35
Pass 3A: Memory used: 70184k/61968k (59256k/10929k), time: 0.00/ 0.00/ 0.00
Pass 3A: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 3: Memory used: 70184k/61968k (51803k/18382k), time: 0.03/ 0.00/ 0.00
Pass 3: I/O read: 1MB, write: 0MB, rate: 37.86MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 70184k/44968k (27354k/42831k), time: 2.37/ 2.36/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 70184k/240k (64619k/5566k), time: 19.40/ 5.52/ 0.29
Pass 5: I/O read: 34MB, write: 0MB, rate: 1.75MB/s
/dev/sdb1: 45600268/61054976 files (0.0% non-contiguous), 232657574/244190000 blocks
Memory used: 70184k/240k (64889k/5296k), time: 303.54/136.48/34.65
I/O read: 14198MB, write: 1MB, rate: 46.77MB/s
real 5m3.993s
user 2m16.477s
sys 0m35.041s