public inbox for linux-xfs@vger.kernel.org
From: Jens Beyer <jens.beyer@1und1.de>
To: xfs@oss.sgi.com
Subject: XFS performance degradation on growing filesystem size
Date: Fri, 4 Jul 2008 08:41:26 +0200	[thread overview]
Message-ID: <20080704064126.GA14847@webde.de> (raw)


Hi,

I have encountered a strange performance problem during some
hardware evaluation tests:

I am running a benchmark to measure random read/write I/O in
particular on a RAID device, and found that (under some
circumstances) the performance of random read I/O is inversely
proportional to the size of the tested XFS filesystem.

In numbers this means that on a 100GB partition I get a throughput
of ~25 MB/s, while on the same hardware at 1TB filesystem size I get
only 18 MB/s (and at 2+ TB about 14 MB/s). (Absolute values depend
on options and kernel version, and are for random read I/O at 8k
test block size.)

Surprisingly, this degradation does not affect random write or
sequential read/write (at least not by this factor).
Even more surprisingly, with an ext3 filesystem I always get ~25 MB/s.

My test setups included: 
- kernel vanilla 2.6.24, 2.6.25.8, 2.6.24-ubuntu_8.04, 2.6.20, 32/64bit
- xfsprogs v2.9.8/7
- benchmarks:
  - iozone:   iozone -i 0 -i 2 -r 8k -s 1g -t 32 -+p 100
  - tiobench: tiobench.pl --size 32000 --random 100000 --block 8192 \
                          --dir /mnt --threads 32 --numruns 1
    (Both benchmarks use an 8k block size and 32 threads, with
    enough data to be well beyond the page cache.)
- The hardware was recent HP dual/quad-core servers with 4GB RAM
  and external SAS RAID enclosures (MSA60, MSA70) with 15k SAS
  disks (different types).
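For reference, one test iteration looked roughly like the sketch
below. This is only an illustration of the procedure described
above; the device name, mount point, and partition sizes are
placeholders, not the exact layout used on the test hosts.

```shell
#!/bin/sh
# Sketch of one benchmark iteration (device and mount point are
# hypothetical; the partition size was varied: 100G, 1T, 2+T).
DEV=/dev/sdb1
MNT=/mnt

mkfs.xfs -f "$DEV"          # baseline: default mkfs.xfs options
mount "$DEV" "$MNT"

# Random read/write at 8k block size, 32 threads, as in the
# iozone invocation listed above; total data exceeds the 4GB RAM.
cd "$MNT" && iozone -i 0 -i 2 -r 8k -s 1g -t 32 -+p 100

cd / && umount "$MNT"
```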

I tried many options, including but not limited to agcount,
logbufs, nobarrier, and blockdev --setra, but none had a
significant impact. All benchmarks were run using the deadline
I/O scheduler.
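The tuning attempts mentioned above can be sketched as follows.
This is a non-authoritative example: the device name and the
specific values (agcount=32, logbufs=8, readahead of 4096 sectors)
are illustrative, not the exact settings from the tests.

```shell
#!/bin/sh
# Illustrative variations of the options named above
# (device name and values are placeholders).
mkfs.xfs -f -d agcount=32 /dev/sdb1           # vary allocation group count
mount -o logbufs=8,nobarrier /dev/sdb1 /mnt   # more log buffers, no write barriers
blockdev --setra 4096 /dev/sdb1               # larger block-device readahead

# all benchmarks used the deadline elevator:
echo deadline > /sys/block/sdb/queue/scheduler
```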

Does anyone have a clue what is going on, or can anyone reproduce
this? Or is this the expected behavior? Could this be a hardware
problem?

Thanks for any comment,
Jens


Thread overview: 4+ messages
2008-07-04  6:41 Jens Beyer [this message]
2008-07-04  7:59 ` XFS performance degradation on growing filesystem size Dave Chinner
2008-07-07  8:04   ` Jens Beyer
2008-07-07 22:06     ` Dave Chinner
