public inbox for linux-xfs@vger.kernel.org
From: Emmanuel Florac <eflorac@intellique.com>
To: stan@hardwarefreak.com
Cc: xfs@oss.sgi.com
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
Date: Mon, 9 Apr 2012 14:45:58 +0200	[thread overview]
Message-ID: <20120409144558.6072c1eb@galadriel.home> (raw)
In-Reply-To: <4F827341.2000607@hardwarefreak.com>

On Mon, 09 Apr 2012 00:27:29 -0500, you wrote:

> In your RAID10 random write testing, was this with a filesystem or
> doing direct block IO? 

Doing random IO in a file lying on an XFS filesystem.
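
(For reference, a random-write test of this sort can be set up with an
fio job file along the lines below; the tool, path and sizes are
illustrative, not necessarily the exact parameters used here:)

```ini
; illustrative fio job: 4k random writes into one file on the filesystem
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[randwrite-test]
filename=/mnt/raid/fio.testfile
size=4g
rw=randwrite
bs=4k
iodepth=32
```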

> If the latter, I wonder if its write pattern
> is anything like the access pattern we'd see hitting dozens of AGs
> while creating 10s of thousands of files.

I'd expect the file creation process to hit a few well-defined hot
spots rather than behaving like pure random access.

I only have a machine for testing purposes with 15 4TB drives in
RAID-6, not exactly an IOPS demon :)

So I've built a tar file to make the test somewhat similar to the OP's
problem:

root@3[raid]# ls -lh test.tar 
-rw-r--r-- 1 root root 2,6G  9 avril 13:52 test.tar
root@3[raid]# tar tf test.tar | wc -l
234318
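
(For anyone wanting to reproduce this, a comparable many-small-files
archive can be synthesized; the counts and sizes below are made up,
and far smaller than my 2.6G / 234318-entry tar:)

```shell
# Synthesize a small many-files tarball: 20 directories of 50 files
# of 4 KiB each (illustrative numbers only)
base=$(mktemp -d)
for d in $(seq 1 20); do
    mkdir -p "$base/treetest/dir$d"
    for f in $(seq 1 50); do
        head -c 4096 /dev/urandom > "$base/treetest/dir$d/file$f"
    done
done
tar cf "$base/test.tar" -C "$base" treetest
tar tf "$base/test.tar" | wc -l   # 1021 entries: 1 top dir + 20 dirs + 1000 files
```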

# echo 3 > /proc/sys/vm/drop_caches
# time tar xf test.tar

real    1m2.584s
user    0m1.376s
sys     0m13.643s

Let's rerun it with the files cached (the machine has 16 GB RAM, so
every single file must still be in the page cache):

# time tar xf test.tar

real    0m50.842s
user    0m0.809s
sys     0m13.767s

Typical IOs during unarchiving: no read, write IO bound.

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  1573,50    0,00  480,50     0,00    36,96   157,52    60,65  124,45   2,08 100,10
dm-0              0,00     0,00    0,00 2067,00     0,00    39,56    39,20   322,55  151,62   0,48 100,10

The OP's setup, with 6 15k drives, should provide roughly the same
number of true IOPS (~1200) as my slow-as-hell bunch of 7200 RPM
4TB drives (~1500). I suppose the write cache accounts for most of
the difference; or else 15k drives are overrated :)
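
(The back-of-the-envelope figures above come from the usual assumed
per-drive numbers, roughly 200 seek-limited IOPS for a 15k drive and
100 for a 7200 RPM drive, ignoring RAID write penalties:)

```shell
# Rough seek-limited IOPS estimates (assumed per-drive figures:
# ~200 IOPS per 15k drive, ~100 IOPS per 7200 RPM drive)
awk 'BEGIN {
    printf "OP, 6 x 15k:        %d IOPS\n", 6 * 200
    printf "mine, 15 x 7200rpm: %d IOPS\n", 15 * 100
}'
```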

Alas, I can't run the test on this machine with ext4: I can't
get mkfs.ext4 to swallow my big device.

mkfs -t ext4 -v -b 4096 -n /dev/dm-0 2147483647

should work (though it drastically limits the filesystem size),
but it dies miserably when I remove the -n (dry-run) flag. Mmmph,
I suppose it's production-ready as long as you don't have much data
to store.

JFS doesn't work either. And here I was wondering why I use XFS :)

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

