From: Martin Steigerwald <Martin@lichtvoll.de>
To: linux-xfs@oss.sgi.com
Subject: Re: XFS for lots of small files
Date: Tue, 6 May 2008 20:55:36 +0200 [thread overview]
Message-ID: <200805062055.36755.Martin@lichtvoll.de> (raw)
In-Reply-To: <4820832B.3070903@dubielvitrum.pl>
On Tuesday, 6 May 2008, Leszek Dubiel wrote:
> Hello!
Hi Leszek,
> I consider moving server from reiserfs to xfs. In all benchmarks I have
> read both file systems have had comparable results.
>
> But I've made a test:
>
> 1. formatted /dev/hda2 with reiserfs with default options and made
> 10.000 files
> 2. formatted /dev/hda2 with xfs with default options and made 10.000
> files
>
> Reiserfs created those files in 2 (two) seconds, and xfs created them
> in 35 (thirty five) seconds.
>
> Is that normal? What I am doing wrong?
>
> My system is Debian, current stable version. Below is a log of
> operation.
>
>
> Thanks in advance.
[...]
> debian:/mnt/hdc2# time for f in `seq 9999`; do echo $f > $f; done
>
> real 0m35.558s
> user 0m0.256s
> sys 0m1.080s
>
> debian:/mnt/hdc2# time cat * | wc -l
> 9999
>
> real 0m0.239s
> user 0m0.020s
> sys 0m0.172s
I get
martin@shambala:~/Zeit/filetest -> rm *; sync ; time for ((I=1; I<=10000;
I=I+1)); do echo $I > $I; done
real 0m10.642s
user 0m0.907s
sys 0m1.713s
martin@shambala:~/Zeit/filetest -> sync ; time cat * >/dev/null
real 0m0.238s
user 0m0.087s
sys 0m0.153s
martin@shambala:~/Zeit/filetest -> sync ; time cat * | wc -l
10000
real 0m0.375s
user 0m0.120s
sys 0m0.247s
martin@shambala:~/Zeit/filetest -> sync ; time rm *
real 0m7.600s
user 0m0.113s
sys 0m1.377s
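By the way, in both our loops the timed region mostly measures writes into
the page cache; the sync happens before, not inside, the measurement. A
sketch of the same benchmark with sync counted as well (the directory here
is a temp dir, just an example - point it at the filesystem you actually
want to test):

```shell
# Small-file creation benchmark, with the final sync inside the timed
# region so flush-to-disk cost is included (bash syntax assumed).
dir=$(mktemp -d)
cd "$dir" || exit 1
sync
time ( for ((i=1; i<=10000; i++)); do echo "$i" > "$i"; done; sync )
count=$(ls | wc -l)
echo "created $count files"
cd / && rm -rf "$dir"
```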
That is for XFS with optimized mount settings...
shambala> xfs_info /home
meta-data=/dev/sda5              isize=256    agcount=6, agsize=4883256 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=29299536, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The agcount is two more than would be optimal because I grew the
partition once.
shambala> mount | grep home
/dev/sda5 on /home type xfs (rw,relatime,logbsize=256k,logbufs=8)
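To make those log options stick across reboots, they can go into
/etc/fstab; a sketch using the device and mountpoint from above (the
dump/pass fields are just the usual values for a non-root filesystem):

```
/dev/sda5  /home  xfs  rw,relatime,logbsize=256k,logbufs=8  0  2
```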
This is on a ThinkPad T42 with an internal 160 GB laptop hard disk,
5400 rpm I think.
The partition I tested on was not empty at the time and is heavily used.
shambala> LANG=EN df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 112G 84G 29G 75% /home
And there is quite some fragmentation on it:
xfs_db> frag
actual 653519, ideal 587066, fragmentation factor 10.17%
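If I read the xfs_db output right, the factor is simply the excess of
actual over ideal extents, relative to the actual count; checking with the
numbers above:

```shell
# Fragmentation factor from xfs_db "frag" output:
# (actual - ideal) / actual * 100
awk 'BEGIN { actual = 653519; ideal = 587066
             printf "%.2f%%\n", (actual - ideal) / actual * 100 }'
# prints 10.17%
```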
I do not have free space in my playground LVM to test against ext3 and
reiserfs at the moment.
Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
Thread overview: 8+ messages
2008-05-06 16:11 XFS for lots of small files Leszek Dubiel
2008-05-06 16:23 ` Nicolas KOWALSKI
2008-05-06 18:55 ` Martin Steigerwald [this message]
2008-05-23 0:44 ` Linda Walsh
2008-05-24 16:16 ` Martin Steigerwald
2008-05-25 3:25 ` Eric Sandeen
2008-05-25 11:38 ` Martin Steigerwald
2008-05-25 15:39 ` Eric Sandeen