* More testing: 4x parallel 2G writes, sequential reads
From: Eric Sandeen @ 2007-11-07 22:42 UTC (permalink / raw)
To: ext4 development
I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
different subdirectories of the root of the filesystem:
http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads.png
http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_threads.png
and then read them back sequentially:
http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads_read.png
http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads_read.png
http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_read_threads.png
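(For reference, a minimal sketch of an equivalent driver in Python, not the actual test harness; the subdirectory names, plain buffered I/O, and the fsync at the end of each write are assumptions:)

#!/usr/bin/env python
# Sketch: 4 processes each write a 2 GiB file in 1 MiB units to its
# own subdirectory, then the files are read back one at a time.
import os
from multiprocessing import Process

MB = 1024 * 1024
FILE_SIZE = 2 * 1024 * MB        # 2 GiB per file
CHUNK = b"\0" * MB               # 1 MiB write unit

def write_file(path):
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())

def read_file(path):
    with open(path, "rb") as f:
        while f.read(MB):
            pass

if __name__ == "__main__":
    paths = []
    for i in range(4):
        d = "dir%d" % i          # hypothetical subdirectory names
        os.makedirs(d, exist_ok=True)
        paths.append(os.path.join(d, "testfile"))

    writers = [Process(target=write_file, args=(p,)) for p in paths]
    for w in writers:
        w.start()
    for w in writers:
        w.join()

    for p in paths:              # sequential read-back, one file at a time
        read_file(p)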
At the end of the write, ext4 had on the order of 400 extents/file, xfs
had on the order of 30 extents/file. It's clear especially from the
read graph that ext4 is interleaving the 4 files, in about 5M chunks on
average. Throughput seems comparable between ext4 & xfs nonetheless.
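(A quick way to reproduce the per-file extent counts, sketched with filefrag(8) called from Python; the file paths are the hypothetical ones from the driver above:)

import re, subprocess

for path in ["dir0/testfile", "dir1/testfile",
             "dir2/testfile", "dir3/testfile"]:
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True).stdout
    m = re.search(r"(\d+) extents? found", out)
    print(path, m.group(1) if m else out.strip())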
Again this was on a decent HW raid so seek penalties are probably not
too bad.
-Eric
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Andreas Dilger @ 2007-11-07 23:09 UTC (permalink / raw)
To: Eric Sandeen; +Cc: ext4 development
On Nov 07, 2007 16:42 -0600, Eric Sandeen wrote:
> I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
> different subdirectories of the root of the filesystem:
>
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_threads.png
>
> and then read them back sequentially:
>
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_read_threads.png
>
> At the end of the write, ext4 had on the order of 400 extents/file, xfs
> had on the order of 30 extents/file. It's clear especially from the
> read graph that ext4 is interleaving the 4 files, in about 5M chunks on
> average. Throughput seems comparable between ext4 & xfs nonetheless.
The question is: what is the "best" result for this kind of workload?
In HPC applications the common case is that you will also have the data
files read back in parallel instead of serially.
The test shows ext4 finishing marginally faster in the write case, and
marginally slower in the read case. What happens if you have 4 parallel
readers?
Cheers, Andreas
--
Andreas Dilger
Sr. Software Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Alex Tomas @ 2007-11-07 23:17 UTC (permalink / raw)
To: Eric Sandeen; +Cc: ext4 development
Hi,
could you try a larger preallocation, like 512/1024/2048 blocks, please?
thanks, Alex
Eric Sandeen wrote:
> I tried ext4 vs. xfs doing 4 parallel 2G IO writes in 1M units to 4
> different subdirectories of the root of the filesystem:
>
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_threads.png
>
> and then read them back sequentially:
>
> http://people.redhat.com/esandeen/seekwatcher/ext4_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/xfs_4_threads_read.png
> http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_read_threads.png
>
> At the end of the write, ext4 had on the order of 400 extents/file, xfs
> had on the order of 30 extents/file. It's clear especially from the
> read graph that ext4 is interleaving the 4 files, in about 5M chunks on
> average. Throughput seems comparable between ext4 & xfs nonetheless.
>
> Again this was on a decent HW raid so seek penalties are probably not
> too bad.
>
> -Eric
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Eric Sandeen @ 2007-11-07 23:18 UTC (permalink / raw)
To: Eric Sandeen, ext4 development
Andreas Dilger wrote:
> The question is what the "best" result is for this kind of workload?
> In HPC applications the common case is that you will also have the data
> files read back in parallel instead of serially.
Agreed; I'm not trying to argue what's better or worse, I'm just seeing
what it's doing.
The main reason I did the sequential read-back is that it more clearly
shows the layout of each file on the graph. :) I'm just getting a handle
on how the allocations are going for various types of writes.
> The test shows ext4 finishing marginally faster in the write case, and
> marginally slower in the read case. What happens if you have 4 parallel
> readers?
I'll test that a bit later (have to run now); I expect parallel readers
may go faster, since the blocks are interleaved, and it might be able to
suck them up pretty much in order across all 4 files.
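(The parallel read-back case would just be the same reader run from 4 processes at once; a sketch, reusing the hypothetical paths from the write test:)

import os
from multiprocessing import Process

MB = 1024 * 1024

def read_file(path):
    with open(path, "rb") as f:
        while f.read(MB):
            pass

if __name__ == "__main__":
    paths = [os.path.join("dir%d" % i, "testfile") for i in range(4)]
    readers = [Process(target=read_file, args=(p,)) for p in paths]
    for r in readers:
        r.start()
    for r in readers:
        r.join()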
I'd also like to test some of this under a single head, rather than on
HW raid...
-Eric
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Shapor Naghibzadeh @ 2007-11-08 0:14 UTC (permalink / raw)
To: Eric Sandeen; +Cc: ext4 development
On Wed, Nov 07, 2007 at 04:42:59PM -0600, Eric Sandeen wrote:
> Again this was on a decent HW raid so seek penalties are probably not
> too bad.
You may want to verify that by doing a benchmark on the raw device. I
recently did some benchmarks doing random I/O on a Dell 2850 w/ a PERC
(megaraid) RAID5 w/ 128MB onboard writeback cache and 6x 15krpm drives
and noticed approximately one order of magnitude throughput drop on
small (stripe-sized) random reads versus linear. It maxed out at ~100
random read IOPS or "seeks/sec" (surprisingly low).
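(A rough sketch of that kind of raw-device check in Python: O_DIRECT to keep the page cache out of the way, stripe-sized random reads, and an IOPS figure at the end. The device path and the 64k read size are assumptions; point it at the real array and stripe size.)

import mmap, os, random, time

DEV = "/dev/sdX"                 # hypothetical raw device (read-only test)
READ_SIZE = 64 * 1024            # assumed stripe size
NUM_READS = 1000

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
dev_size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, READ_SIZE)   # page-aligned buffer, needed for O_DIRECT

start = time.time()
for _ in range(NUM_READS):
    # random offset, aligned to the read size
    offset = random.randrange(dev_size // READ_SIZE) * READ_SIZE
    os.preadv(fd, [buf], offset)
elapsed = time.time() - start
os.close(fd)

print("%.0f random read IOPS, %.1f MB/s" %
      (NUM_READS / elapsed, NUM_READS * READ_SIZE / elapsed / 1e6))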
Out of curiosity, how are you counting the seeks?
Shapor
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Eric Sandeen @ 2007-11-08 3:06 UTC (permalink / raw)
To: Shapor Naghibzadeh; +Cc: ext4 development
Shapor Naghibzadeh wrote:
> On Wed, Nov 07, 2007 at 04:42:59PM -0600, Eric Sandeen wrote:
>> Again this was on a decent HW raid so seek penalties are probably not
>> too bad.
>
> You may want to verify that by doing a benchmark on the raw device. I
> recently did some benchmarks doing random I/O on a Dell 2850 w/ a PERC
> (megaraid) RAID5 w/ 128MB onboard writeback cache and 6x 15krpm drives
> and noticed approximately one order of magnitude throughput drop on
> small (stripe-sized) random reads versus linear. It maxed out at ~100
> random read IOPS or "seeks/sec" (surprisingly low).
>
> Out of curiosity, how are you counting the seeks?
Chris Mason's seekwatcher (Google can find it for you) is doing the
graphing; it uses blktrace for the raw data.
-Eric
> Shapor
* Re: More testing: 4x parallel 2G writes, sequential reads
From: Eric Sandeen @ 2007-11-08 3:39 UTC (permalink / raw)
To: Eric Sandeen, ext4 development
Andreas Dilger wrote:
> The test shows ext4 finishing marginally faster in the write case, and
> marginally slower in the read case. What happens if you have 4 parallel
> readers?
http://people.redhat.com/esandeen/seekwatcher/ext4_4_thread_par_read.png
http://people.redhat.com/esandeen/seekwatcher/xfs_4_thread_par_read.png
http://people.redhat.com/esandeen/seekwatcher/ext4_xfs_4_thread_par_read.png
-Eric