* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-20 0:54 ` Andrew Morton
@ 2002-06-20 4:09 ` Dave Hansen
2002-06-20 6:03 ` Andreas Dilger
0 siblings, 1 reply; 20+ messages in thread
From: Dave Hansen @ 2002-06-20 4:09 UTC (permalink / raw)
To: Andrew Morton
Cc: mgross, Linux Kernel Mailing List, lse-tech, richard.a.griffiths
Andrew Morton wrote:
> mgross wrote:
>>Has anyone done any work looking into the I/O scaling of Linux / ext3 per
>>spindle or per adapter? We would like to compare notes.
>
> No. ext3 scalability is very poor, I'm afraid. The fs really wasn't
> up and running until kernel 2.4.5 and we just didn't have time to
> address that issue.
Ick. That takes the prize for the highest BKL contention I've ever
seen, except for some horribly contrived torture tests of mine. I've
had data like this sent to me a few times to analyze and the only
thing I've been able to suggest up to this point is not to use ext3.
>>I've only just started to look at the ext3 code, but it seems to me that replacing the
>>BKL with a per-ext3-filesystem lock could remove some of the contention that's
>>getting measured. What data is the BKL protecting in these ext3 functions? Could a
>>lock-per-FS approach work?
>
> The vague plan there is to replace lock_kernel with lock_journal
> where appropriate. But ext3 scalability work of this nature
> will be targetted at the 2.5 kernel, most probably.
I really doubt that dropping in lock_journal will help this case very
much. Every single kernel_flag entry in the lockmeter output where
Util > 0.00% is caused by ext3. The schedule entry is probably caused
by something in ext3 grabbing the BKL, getting scheduled out for some
reason, then having it implicitly released in schedule(). The
schedule() contention comes from the reacquire_kernel_lock().
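To make that concrete, the pattern inside schedule() is roughly the
following (a simplified sketch from memory, not the literal 2.4/2.5
source; the real code hides this behind release_kernel_lock() and
reacquire_kernel_lock()):

	/* Simplified sketch of the implicit BKL handoff across a context
	 * switch.  Names follow 2.4 conventions, but this is illustrative,
	 * not the actual schedule() implementation. */
	void schedule_sketch(void)
	{
		int held_bkl = (current->lock_depth >= 0);

		if (held_bkl)
			spin_unlock(&kernel_flag);	/* implicit release */

		/* ... pick the next task and context-switch away ... */

		if (held_bkl)
			spin_lock(&kernel_flag);	/* this respin is the
							 * schedule() line in
							 * the lockmeter data */
	}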
We used to see plenty of ext2 BKL contention, but Al Viro did a good
job fixing that early in 2.5 using a per-inode rwlock. I think that
this is the required level of lock granularity; another global lock
just won't cut it.
http://lse.sourceforge.net/lockhier/bkl_rollup.html#getblock
--
Dave Hansen
haveblue@us.ibm.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-20 4:09 ` [Lse-tech] " Dave Hansen
@ 2002-06-20 6:03 ` Andreas Dilger
2002-06-20 6:53 ` Andrew Morton
0 siblings, 1 reply; 20+ messages in thread
From: Andreas Dilger @ 2002-06-20 6:03 UTC (permalink / raw)
To: Dave Hansen
Cc: Andrew Morton, mgross, Linux Kernel Mailing List, lse-tech,
richard.a.griffiths
On Jun 19, 2002 21:09 -0700, Dave Hansen wrote:
> Andrew Morton wrote:
> >The vague plan there is to replace lock_kernel with lock_journal
> >where appropriate. But ext3 scalability work of this nature
> >will be targetted at the 2.5 kernel, most probably.
>
> I really doubt that dropping in lock_journal will help this case very
> much. Every single kernel_flag entry in the lockmeter output where
> Util > 0.00% is caused by ext3. The schedule entry is probably caused
> by something in ext3 grabbing BKL, getting scheduled out for some
> reason, then having it implicitly released in schedule(). The
> schedule() contention comes from the reacquire_kernel_lock().
>
> We used to see plenty of ext2 BKL contention, but Al Viro did a good
> job fixing that early in 2.5 using a per-inode rwlock. I think that
> this is the required level of lock granularity, another global lock
> just won't cut it.
> http://lse.sourceforge.net/lockhier/bkl_rollup.html#getblock
There are a variety of different efforts that could be made towards
removing the BKL from ext2 and ext3. The first, of course, would be
to have a per-filesystem lock instead of taking the BKL (I don't know
if Al has changed lock_super() in 2.5 to be a real semaphore or not).
As Andrew mentioned, there would also need to be a per-journal lock to
ensure coherency of the journal data. Currently the per-filesystem and
per-journal lock would be equivalent, but when a single journal device
can be shared among multiple filesystems they would be different locks.
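Purely as an illustration of the split (the names here are invented for
the sketch, not taken from any existing patch):

	/* Hypothetical layout: today s_fs_lock and j_lock would always be
	 * taken together, but a shared journal device makes them distinct. */
	struct journal_sketch {
		struct semaphore j_lock;	/* serializes journal state */
	};

	struct fs_sketch {
		struct semaphore      s_fs_lock;  /* per-filesystem metadata
						   * lock, replacing the BKL */
		struct journal_sketch *s_journal; /* could one day be shared
						   * by several filesystems  */
	};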
I will leave it up to Andrew and Stephen to discuss locking scalability
within the journal layer.
Within the filesystem there can be a large number of increasingly fine
locks added - a superblock-only lock with per-group locks, or even
per-bitmap and per-inode-table(-block) locks if needed. This would
allow multi-threaded inode and block allocations, but a sane lock
ranking strategy would have to be developed. The bitmap locks would
only need to be 2-state locks, because you only look at the bitmaps
when you want to modify them. The inode table locks would be read/write
locks.
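Something along these lines, purely as a sketch (all names invented):

	/* Hypothetical per-group locking, one instance per block group. */
	struct group_locks_sketch {
		spinlock_t          bitmap_lock;  /* 2-state is enough: bitmaps
						   * are only read when they are
						   * about to be modified      */
		struct rw_semaphore itable_lock;  /* inode table blocks are read
						   * far more often than written */
	};

	struct sb_locks_sketch {
		spinlock_t                 sb_lock;  /* superblock-only lock      */
		struct group_locks_sketch *groups;   /* array, one per block group */
	};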
If there is a try-writelock mechanism for the individual inode table
blocks you can avoid write lock contention for creations by simply
finding the first un-write-locked block in the target group's inode table
(usually in the hundreds of blocks per group for default parameters).
For inode allocation you don't really care which inode you get, as long
as you get one in the preferred group (even that isn't critical for
directory creation). For inode deletions you will get essentially random
block locking, which is actually improved by the find-first-unlocked
allocation policy (at the expense of dirtying more inode table blocks).
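With a down_write_trylock()-style primitive that might look like the
following (helper and parameter names invented):

	/* Walk the target group's inode table block locks and take the first
	 * one that is free; we only block when every block in the group is
	 * being written at the same time. */
	static int grab_unlocked_itable_block(struct rw_semaphore *itable_locks,
					      int nr_blocks)
	{
		int i;

		for (i = 0; i < nr_blocks; i++)
			if (down_write_trylock(&itable_locks[i]))
				return i;		/* allocate an inode here */

		down_write(&itable_locks[0]);		/* all busy: just wait */
		return 0;
	}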
Contention for the superblock lock for updates to the superblock free
block and free inode counts could be mitigated by keeping "per-group
delta buckets" in memory that are written into the superblock only
once every few seconds or at statfs time instead of needing multiple
locks for each block/inode alloc/free. The groups already keep their
own summary counts for free blocks and inodes. The coherency of these
fields with the superblock on recovery would be handled at journal
recovery time (either in the kernel or e2fsck*). Other than these two
fields there are few write updates to the superblock (on ext3 there
is also the orphan list, modified at truncate and when an open file is
unlinked and when such a file is closed).
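In sketch form (invented names) the delta buckets could be as simple as:

	/* Per-group in-memory deltas, folded into the on-disk superblock
	 * counts every few seconds or at statfs time. */
	struct count_deltas_sketch {
		spinlock_t lock;		/* per group, so allocations in
						 * different groups never contend */
		long       free_blocks;
		long       free_inodes;
	};

	static void note_blocks_freed(struct count_deltas_sketch *d, long n)
	{
		spin_lock(&d->lock);
		d->free_blocks += n;
		spin_unlock(&d->lock);
	}

	/* A periodic flusher (or statfs) would sum all the groups' deltas
	 * under the superblock lock, apply them, and zero the buckets. */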
I have even been thinking about multi-threaded directory-entry creation
in a single directory. One nice thing about ext2/ext3 directory blocks
is that each one is self-contained and can be modified independently.
For regular ext2/ext3 directories you would only be able to do
multi-threaded deletes by having a lock for each directory block.
For creations you would need to lock the entire directory to ensure
exclusive access for a create, which is the same single-threaded behaviour
for a single directory we have today with the directory i_sem.
However, if you are using the htree indexed directory layout (which you
will be, if you care about scalable filesystem performance) then there
is only a single[**] block into which a given filename can be added, so
you can have per-block locks even for file creation. As the number of
directory entries grows (and hence more directory blocks) the locking
becomes increasingly fine-grained, so you get better scalability
with larger directories, which is what you want.
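Very roughly (hypothetical helpers; the real htree code is obviously
more involved):

	/* Hypothetical helpers, illustrative only.  The hash of the name
	 * selects exactly one leaf block, so a create only needs that
	 * block's lock rather than the whole directory. */
	static int indexed_create_sketch(struct inode *dir, const char *name,
					 int namelen)
	{
		unsigned hash = name_hash_sketch(name, namelen);
		struct buffer_head *leaf = find_leaf_block_sketch(dir, hash);

		lock_dir_block_sketch(leaf);	/* per-block, not the directory i_sem */
		/* ... insert the entry into this leaf ... */
		unlock_dir_block_sketch(leaf);
		return 0;
	}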
Cheers, Andreas
[*] If we think that we will go to any kind of per-group locking in the
near future, the support for this could be added into e2fsck and
existing kernels today with read support for a COMPAT flag to
ensure maximal forwards compatibility. On e2fsck runs we already
validate the superblock on each boot, and the group descriptor table
is contiguous with the superblock, so the amount of extra checking
at boot time would be very minimal.
The kernel already has ext[23]_count_free_{blocks,inodes} functions
that just need a bit of tweaking to check only the descriptor
summaries unless mounted with debug and check options, and to update
the superblock counts at mount time if the COMPAT flag is set.
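At mount time the fixup would be little more than the following (the
feature flag name is invented; the count functions are the existing ones):

	/* Only when the hypothetical COMPAT flag is set: trust the group
	 * descriptor summaries and rewrite the superblock totals. */
	if (EXT2_HAS_COMPAT_FEATURE(sb, EXT2_FEATURE_COMPAT_LAZY_SB_COUNTS)) {
		es->s_free_blocks_count =
			cpu_to_le32(ext2_count_free_blocks(sb));
		es->s_free_inodes_count =
			cpu_to_le32(ext2_count_free_inodes(sb));
		mark_buffer_dirty(EXT2_SB(sb)->s_sbh);
	}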
[**] In rare circumstances you may have a large number of hash collisions
for a single hash value which fill more than one block, so an entry
with that hash value could live in more than a single block. This
would need to be handled somehow (e.g. always getting the locks on
all such blocks in order at create time; you only need a single
block lock at delete time).
--
Andreas Dilger
http://www-mddsp.enel.ucalgary.ca/People/adilger/
http://sourceforge.net/projects/ext2resize/
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-20 6:03 ` Andreas Dilger
@ 2002-06-20 6:53 ` Andrew Morton
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Morton @ 2002-06-20 6:53 UTC (permalink / raw)
To: Andreas Dilger
Cc: Dave Hansen, mgross, Linux Kernel Mailing List, lse-tech,
richard.a.griffiths
Andreas Dilger wrote:
>
> On Jun 19, 2002 21:09 -0700, Dave Hansen wrote:
> > Andrew Morton wrote:
> > >The vague plan there is to replace lock_kernel with lock_journal
> > >where appropriate. But ext3 scalability work of this nature
> > >will be targetted at the 2.5 kernel, most probably.
> >
> > I really doubt that dropping in lock_journal will help this case very
> > much. Every single kernel_flag entry in the lockmeter output where
> > Util > 0.00% is caused by ext3. The schedule entry is probably caused
> > by something in ext3 grabbing BKL, getting scheduled out for some
> > reason, then having it implicitly released in schedule(). The
> > schedule() contention comes from the reacquire_kernel_lock().
> >
> > We used to see plenty of ext2 BKL contention, but Al Viro did a good
> > job fixing that early in 2.5 using a per-inode rwlock. I think that
> > this is the required level of lock granularity, another global lock
> > just won't cut it.
> > http://lse.sourceforge.net/lockhier/bkl_rollup.html#getblock
>
> There are a variety of different efforts that could be made towards
> removing the BKL from ext2 and ext3. The first, of course, would be
> to have a per-filesystem lock instead of taking the BKL (I don't know
> if Al has changed lock_super() in 2.5 to be a real semaphore or not).
lock_super() has been `down()' for a long time. In 2.4, too.
> As Andrew mentioned, there would also need to be be a per-journal lock to
> ensure coherency of the journal data. Currently the per-filesystem and
> per-journal lock would be equivalent, but when a single journal device
> can be shared among multiple filesystems they would be different locks.
Well. First I want to know if block-highmem is in there. If not,
then yep, we'll spend ages spinning on the BKL. Because ext3 _is_
BKL-happy, and if a CPU takes a disk interrupt while holding the BKL
and then sits there in interrupt context copying tons of cache-cold
memory around, guess what the other CPUs will be doing?
> I will leave it up to Andrew and Stephen to discuss locking scalability
> within the journal layer.
ext3 is about 700x as complex as ext2. It will need to be done with
some care.
> Within the filesystem there can be a large number of increasingly fine
> locks added - a superblock-only lock with per-group locks, or even
> per-bitmap and per-inode-table(-block) locks if needed. This would
> allow multi- threaded inode and block allocations, but a sane lock
> ranking strategy would have to be developed. The bitmap locks would
> only need to be 2-state locks, because you only look at the bitmaps
> when you want to modify them. The inode table locks would be read/write
> locks.
The next steps for ext2 are: stare at Anton's next set of graphs and
then, I expect, removal of the fs-private bitmap LRUs, per-cpu buffer
LRUs to avoid blockdev mapping lock contention, per-blockgroup locks
and removal of lock_super from the block allocator.
But there's no point in doing that while zone->lock and pagemap_lru_lock
are top of the list. Fixes for both of those are in progress.
ext2 is bog-simple. It will scale up the wazoo in 2.6.
> If there is a try-writelock mechanism for the individual inode table
> blocks you can avoid write lock contention for creations by simply
> finding the first un-write-locked block in the target group's inode table
> (usually in the hundreds of blocks per group for default parameters).
Depends on what the profiles say, Andreas. And I mean profiles - lockmeter
tends to tell you "what", not "why". Start at the top of the list. Fix
them by design if possible. If not, tweak it!
-
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
[not found] <59885C5E3098D511AD690002A5072D3C057B499E@orsmsx111.jf.intel.com>
@ 2002-06-20 16:10 ` Dave Hansen
2002-06-20 20:47 ` John Hawkes
0 siblings, 1 reply; 20+ messages in thread
From: Dave Hansen @ 2002-06-20 16:10 UTC (permalink / raw)
To: Gross, Mark
Cc: 'Russell Leighton', Andrew Morton, mgross,
Linux Kernel Mailing List, lse-tech, Griffiths, Richard A
Gross, Mark wrote:
> We will get around to reformatting our spindles to some other FS after
> we get as much data and analysis out of our current configuration as we
> can get.
>
> We'll report out our findings on the lock contention, and throughput
> data for some other FS then. I'd like recommendations on what file
> systems to try, besides ext2.
Do you really need a journaling FS? If not, I think ext2 is a sure
bet to be the fastest. If you do need journaling, try reiserfs and jfs.
BTW, what kind of workload are you running under?
--
Dave Hansen
haveblue@us.ibm.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [Lse-tech] Re: ext3 performance bottleneck as the number of s pindles gets large
@ 2002-06-20 16:24 Gross, Mark
2002-06-20 21:11 ` [Lse-tech] Re: ext3 performance bottleneck as the number of spindles " Andrew Morton
0 siblings, 1 reply; 20+ messages in thread
From: Gross, Mark @ 2002-06-20 16:24 UTC (permalink / raw)
To: 'Dave Hansen', Gross, Mark
Cc: 'Russell Leighton', Andrew Morton, mgross,
Linux Kernel Mailing List, lse-tech, Griffiths, Richard A
I don't have much visibility into this platform's journaling requirements.
I suspect it's to enable fast reboot / recovery from some klutz bumping the
power cord or a crash of some sort.
I will raise the issue with the platform folks. However, for now I'm
looking for ways to make it scale competitively WRT adapters and spindles
for writes without changing the file system. If this turns out to be a dead
end then, hopefully, we'll move to a more spindle-friendly file system.
The workload is http://www.coker.com.au/bonnie++/ (one of the newer versions
;)
--mgross
(W) 503-712-8218
MS: JF1-05
2111 N.E. 25th Ave.
Hillsboro, OR 97124
> -----Original Message-----
> From: Dave Hansen [mailto:haveblue@us.ibm.com]
> Sent: Thursday, June 20, 2002 9:10 AM
> To: Gross, Mark
> Cc: 'Russell Leighton'; Andrew Morton; mgross@unix-os.sc.intel.com;
> Linux Kernel Mailing List; lse-tech@lists.sourceforge.net; Griffiths,
> Richard A
> Subject: Re: [Lse-tech] Re: ext3 performance bottleneck as
> the number of
> spindles gets large
>
>
> Gross, Mark wrote:
> > We will get around to reformatting our spindles to some
> other FS after
> > we get as much data and analysis out of our current
> configuration as we
> > can get.
> >
> > We'll report out our findings on the lock contention, and
> throughput
> > data for some other FS then. I'd like recommendations on what file
> > systems to try, besides ext2.
>
> Do you really need a journaling FS? If not, I think ext2 is a sure
> bet to be the fastest. If you do need journaling, try
> reiserfs and jfs.
>
> BTW, what kind of workload are you running under?
>
> --
> Dave Hansen
> haveblue@us.ibm.com
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-20 16:10 ` Dave Hansen
@ 2002-06-20 20:47 ` John Hawkes
0 siblings, 0 replies; 20+ messages in thread
From: John Hawkes @ 2002-06-20 20:47 UTC (permalink / raw)
To: Dave Hansen, Gross, Mark
Cc: 'Russell Leighton', Andrew Morton, mgross,
Linux Kernel Mailing List, lse-tech, Griffiths, Richard A
From: "Dave Hansen" <haveblue@us.ibm.com>
> > We'll report out our findings on the lock contention, and throughput
> > data for some other FS then. I'd like recommendations on what file
> > systems to try, besides ext2.
>
> Do you really need a journaling FS? If not, I think ext2 is a sure
> bet to be the fastest. If you do need journaling, try reiserfs and
jfs.
XFS in 2.4.x scales much better on larger CPU counts than do ext3 or
ReiserFS. That's because XFS is a much lighter user of the BKL in 2.4.x
than ext3, ReiserFS, or ext2.
John Hawkes
hawkes@sgi.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-20 16:24 [Lse-tech] Re: ext3 performance bottleneck as the number of s pindles gets large Gross, Mark
@ 2002-06-20 21:11 ` Andrew Morton
0 siblings, 0 replies; 20+ messages in thread
From: Andrew Morton @ 2002-06-20 21:11 UTC (permalink / raw)
To: Gross, Mark
Cc: 'Dave Hansen', 'Russell Leighton', mgross,
Linux Kernel Mailing List, lse-tech, Griffiths, Richard A
"Gross, Mark" wrote:
>
> ...
> The workload is http://www.coker.com.au/bonnie++/ (one of the newer versions
> ;)
>
Please tell me exactly how you're using it: how many filesystems, how
many controllers, disk topology, physical memory, size of filesystems,
etc. Sufficient for me to be able to reproduce it and find out what
is happening.
Also: what is your best-case aggregate bandwidth? Platter-speed of disks
multiplied by number of disks, please?
Thanks to the BKL you've effectively got 1.3 to 1.5 CPUs, but we should be
able to saturate six or eight disks on a uniprocessor kernel. It's
possible that we're looking at the wrong thing.
-
^ permalink raw reply [flat|nested] 20+ messages in thread
* [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
@ 2002-06-21 22:03 Duc Vianney
2002-06-21 23:11 ` Andrew Morton
2002-06-22 0:19 ` kwijibo
0 siblings, 2 replies; 20+ messages in thread
From: Duc Vianney @ 2002-06-21 22:03 UTC (permalink / raw)
To: Andrew Morton, mgross, Griffiths, Richard A, Jens Axboe,
Linux Kernel Mailing List, lse-tech
Andrew Morton wrote:
>If you have time, please test ext2 and/or reiserfs and/or ext3
>in writeback mode.
I ran IOzone on ext2fs, ext3fs, JFS, and Reiserfs on an SMP 4-way
500MHz, 2.5GB RAM, two 9.1GB SCSI drives. The test partition is 1GB,
test file size is 128MB, test block size is 4KB, and the number of IO
threads varies from 1 to 6. Compared with the other file systems in this
test environment, the results on a 2.5.19 SMP kernel show ext3fs has a
performance problem with writes, and in particular with random writes.
I think the BKL contention patch would help ext3fs, but I need to verify
it first.
The following data are throughput in KB/sec, obtained from the IOzone
benchmark running on all file systems installed with default options.
Kernels            2519smp4   2519smp4   2519smp4   2519smp4
No of threads=1     ext2-1t     jfs-1t    ext3-1t  reiserfs-1t
Initial write        138010     111023      29808      48170
Rewrite              205736     204538     119543     142765
Read                 236500     237235     231860     236959
Re-read              242927     243577     240284     242776
Random read          204292     206010     201664     207219
Random write         180144     180461       1090     121676

No of threads=2     ext2-2t     jfs-2t    ext3-2t  reiserfs-2t
Initial write        196477     143395      62248      55260
Rewrite              261641     261441     126604     205076
Read                 292566     292796     313562     291434
Re-read              302239     306423     341416     303424
Random read          296152     295430     316966     288584
Random write         253026     251013        958     203358

No of threads=4     ext2-4t     jfs-4t    ext3-4t  reiserfs-4t
Initial write         79513     172302      42051      48782
Rewrite              256568     269840     124912     231395
Read                 290599     303669     327066     283793
Re-read              289578     303644     327362     287531
Random read          354011     353455     353806     351671
Random write         279704     279922       2482     250498

No of threads=6     ext2-6t     jfs-6t    ext3-6t  reiserfs-6t
Initial write         98559      69825      59728      15576
Rewrite              274993     286987     126048     232193
Read                 330522     326143     332147     326163
Re-read              339672     328890     333094     326725
Random read          348059     346154     347901     344927
Random write         281613     280213       3659     227579
Cheers,
Duc J Vianney, dvianney@us.ibm.com
home page: http://www-124.ibm.com/developerworks/opensource/linuxperf/
project page: http://www-124.ibm.com/developerworks/projects/linuxperf
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-21 22:03 Duc Vianney
@ 2002-06-21 23:11 ` Andrew Morton
2002-06-22 0:19 ` kwijibo
1 sibling, 0 replies; 20+ messages in thread
From: Andrew Morton @ 2002-06-21 23:11 UTC (permalink / raw)
To: Duc Vianney
Cc: mgross, Griffiths, Richard A, Jens Axboe,
Linux Kernel Mailing List, lse-tech
Duc Vianney wrote:
>
> Andrew Morton wrote:
> >If you have time, please test ext2 and/or reiserfs and/or ext3
> >in writeback mode.
> I ran IOzone on ext2fs, ext3fs, JFS, and Reiserfs on an SMP 4-way
> 500MHz, 2.5GB RAM, two 9.1GB SCSI drives. The test partition is 1GB,
> test file size is 128MB, test block size is 4KB, and IO threads varies
> from 1 to 6. When comparing with other file system for this test
> environment, the results on a 2.5.19 SMP kernel show ext3fs is having
> performance problem with Writes and in particularly, with Random Write.
> I think the BKL contention patch would help ext3fs, but I need to verify
> it first.
>
> The following data are throughput in MB/sec obtained from IOzone
> benchmark running on all file systems installed with default options.
>
> Kernels 2519smp4 2519smp4 2519smp4 2519smp4
> No of threads=1 ext2-1t jfs-1t ext3-1t reiserfs-1t
>
> Initial write 138010 111023 29808 48170
> Rewrite 205736 204538 119543 142765
> Read 236500 237235 231860 236959
> Re-read 242927 243577 240284 242776
> Random read 204292 206010 201664 207219
> Random write 180144 180461 1090 121676
ext3 only allows dirty data to remain in memory for five seconds,
whereas the other filesystems allow it for thirty. This is
a reasonable thing to do, but it hurts badly in benchmarks.
If you run a benchmark which takes ext2 ten seconds to
complete, ext2 will do it all in-RAM. But after five
seconds, ext3 will go to disk and the test takes vastly longer.
I suspect that is what is happening here - we're seeing the
difference between disk bandwidth and memory bandwidth.
If you choose a larger file, a shorter file or a longer-running
test then the difference will not be so gross.
You can confirm this by trying a one-gigabyte file instead.
The "Initial write" is fishy. I wonder if the same thing
is happening here - there may have been lots of dirty memory
left in-core (and unaccounted for) after the test completed.
iozone has a `-e' option which causes it to include the fsync()
time in the timing calculations. Using that would give a
better comparison, unless you are specifically trying to test
in-memory performance. And we're not doing that here.
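The difference -e makes boils down to whether the clock stops before or
after something like this (illustrative userspace fragment, not iozone's
actual code):

	#include <unistd.h>

	/* With the fsync() inside the timed region, data still sitting in
	 * the page cache is not counted as "written". */
	static void timed_write(int fd, const void *buf, size_t len)
	{
		write(fd, buf, len);
		fsync(fd);	/* drop this and an in-RAM run looks absurdly
				 * fast for any filesystem */
	}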
-
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-21 22:03 Duc Vianney
2002-06-21 23:11 ` Andrew Morton
@ 2002-06-22 0:19 ` kwijibo
2002-06-22 8:10 ` kwijibo
1 sibling, 1 reply; 20+ messages in thread
From: kwijibo @ 2002-06-22 0:19 UTC (permalink / raw)
To: Duc Vianney
Cc: Andrew Morton, mgross, Griffiths, Richard A, Jens Axboe,
Linux Kernel Mailing List, lse-tech
This web site may be of interest for this discussion:
http://labs.zianet.com. I have benchmarks using NFS
with ext3 there. It also compares ext3 with ReiserFS.
The page is not quite complete but it has the
benchmarks up.
Steven
Duc Vianney wrote:
>Andrew Morton wrote:
>
>
>>If you have time, please test ext2 and/or reiserfs and/or ext3
>>in writeback mode.
>>
>>
>I ran IOzone on ext2fs, ext3fs, JFS, and Reiserfs on an SMP 4-way
>500MHz, 2.5GB RAM, two 9.1GB SCSI drives. The test partition is 1GB,
>test file size is 128MB, test block size is 4KB, and IO threads varies
>from 1 to 6. When comparing with other file system for this test
>environment, the results on a 2.5.19 SMP kernel show ext3fs is having
>performance problem with Writes and in particularly, with Random Write.
>I think the BKL contention patch would help ext3fs, but I need to verify
>it first.
>
>The following data are throughput in MB/sec obtained from IOzone
>benchmark running on all file systems installed with default options.
>
>
>Kernels 2519smp4 2519smp4 2519smp4 2519smp4
>No of threads=1 ext2-1t jfs-1t ext3-1t reiserfs-1t
>
>Initial write 138010 111023 29808 48170
>Rewrite 205736 204538 119543 142765
>Read 236500 237235 231860 236959
>Re-read 242927 243577 240284 242776
>Random read 204292 206010 201664 207219
>Random write 180144 180461 1090 121676
>
>No of threads=2 ext2-2t jfs-2t ext3-2t reiserfs-2t
>
>Initial write 196477 143395 62248 55260
>Rewrite 261641 261441 126604 205076
>Read 292566 292796 313562 291434
>Re-read 302239 306423 341416 303424
>Random read 296152 295430 316966 288584
>Random write 253026 251013 958 203358
>
>No of threads=4 ext2-4t jfs-4t ext3-4t reiserfs-4t
>
>Initial write 79513 172302 42051 48782
>Rewrite 256568 269840 124912 231395
>Read 290599 303669 327066 283793
>Re-read 289578 303644 327362 287531
>Random read 354011 353455 353806 351671
>Random write 279704 279922 2482 250498
>
>No of threads=6 ext2-6t jfs-6t ext3-6t reiserfs-6t
>
>Initial write 98559 69825 59728 15576
>Rewrite 274993 286987 126048 232193
>Read 330522 326143 332147 326163
>Re-read 339672 328890 333094 326725
>Random read 348059 346154 347901 344927
>Random write 281613 280213 3659 227579
>
>Cheers,
>Duc J Vianney, dvianney@us.ibm.com
>home page: http://www-124.ibm.com/developerworks/opensource/linuxperf/
>project page: http://www-124.ibm.com/developerworks/projects/linuxperf
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-22 0:19 ` kwijibo
@ 2002-06-22 8:10 ` kwijibo
0 siblings, 0 replies; 20+ messages in thread
From: kwijibo @ 2002-06-22 8:10 UTC (permalink / raw)
To: kwijibo
Cc: Duc Vianney, Andrew Morton, mgross, Griffiths, Richard A,
Jens Axboe, Linux Kernel Mailing List, lse-tech
If you tried the link earlier and it didn't work, I'm sorry.
Had a mental brain fart with the web server. It should
work now.
Steven
kwijibo@zianet.com wrote:
> This web site may be of interest for this discussion:
> http://labs.zianet.com. I have benchmarks using NFS
> with ext3 there. It also compares ext3 with ReiserFS.
> The page is not quite complete but it has the
> benchmarks up.
>
> Steven
>
> Duc Vianney wrote:
>
>> Andrew Morton wrote:
>>
>>
>>> If you have time, please test ext2 and/or reiserfs and/or ext3
>>> in writeback mode.
>>>
>>
>> I ran IOzone on ext2fs, ext3fs, JFS, and Reiserfs on an SMP 4-way
>> 500MHz, 2.5GB RAM, two 9.1GB SCSI drives. The test partition is 1GB,
>> test file size is 128MB, test block size is 4KB, and IO threads varies
>> from 1 to 6. When comparing with other file system for this test
>> environment, the results on a 2.5.19 SMP kernel show ext3fs is having
>> performance problem with Writes and in particularly, with Random Write.
>> I think the BKL contention patch would help ext3fs, but I need to verify
>> it first.
>>
>> The following data are throughput in MB/sec obtained from IOzone
>> benchmark running on all file systems installed with default options.
>>
>>
>> Kernels 2519smp4 2519smp4 2519smp4 2519smp4
>> No of threads=1 ext2-1t jfs-1t ext3-1t reiserfs-1t
>>
>> Initial write 138010 111023 29808 48170
>> Rewrite 205736 204538 119543 142765
>> Read 236500 237235 231860 236959
>> Re-read 242927 243577 240284 242776
>> Random read 204292 206010 201664 207219
>> Random write 180144 180461 1090 121676
>>
>> No of threads=2 ext2-2t jfs-2t ext3-2t reiserfs-2t
>>
>> Initial write 196477 143395 62248 55260
>> Rewrite 261641 261441 126604 205076
>> Read 292566 292796 313562 291434
>> Re-read 302239 306423 341416 303424
>> Random read 296152 295430 316966 288584
>> Random write 253026 251013 958 203358
>>
>> No of threads=4 ext2-4t jfs-4t ext3-4t reiserfs-4t
>>
>> Initial write 79513 172302 42051 48782
>> Rewrite 256568 269840 124912 231395
>> Read 290599 303669 327066 283793
>> Re-read 289578 303644 327362 287531
>> Random read 354011 353455 353806 351671
>> Random write 279704 279922 2482 250498
>>
>> No of threads=6 ext2-6t jfs-6t ext3-6t reiserfs-6t
>>
>> Initial write 98559 69825 59728 15576
>> Rewrite 274993 286987 126048 232193
>> Read 330522 326143 332147 326163
>> Re-read 339672 328890 333094 326725
>> Random read 348059 346154 347901 344927
>> Random write 281613 280213 3659 227579
>>
>> Cheers,
>> Duc J Vianney, dvianney@us.ibm.com
>> home page: http://www-124.ibm.com/developerworks/opensource/linuxperf/
>> project page: http://www-124.ibm.com/developerworks/projects/linuxperf
>>
>>
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 6:00 ` Christopher E. Brown
@ 2002-06-23 6:35 ` William Lee Irwin III
2002-06-23 7:29 ` Dave Hansen
2002-06-23 17:06 ` Eric W. Biederman
0 siblings, 2 replies; 20+ messages in thread
From: William Lee Irwin III @ 2002-06-23 6:35 UTC (permalink / raw)
To: Christopher E. Brown
Cc: Andreas Dilger, Griffiths, Richard A, 'Andrew Morton',
mgross, 'Jens Axboe', Linux Kernel Mailing List, lse-tech
On Sun, Jun 23, 2002 at 12:00:01AM -0600, Christopher E. Brown wrote:
> However, multiple busses are *rare* on x86. There are a lot of chained
> busses via PCI-to-PCI bridges, but few systems with 2 or more PCI
> busses of any type with parallel access to the CPU.
NUMA-Q has them.
Cheers,
Bill
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 6:35 ` [Lse-tech] " William Lee Irwin III
@ 2002-06-23 7:29 ` Dave Hansen
2002-06-23 7:36 ` William Lee Irwin III
2002-06-23 17:06 ` Eric W. Biederman
1 sibling, 1 reply; 20+ messages in thread
From: Dave Hansen @ 2002-06-23 7:29 UTC (permalink / raw)
To: William Lee Irwin III
Cc: Christopher E. Brown, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
William Lee Irwin III wrote:
> On Sun, Jun 23, 2002 at 12:00:01AM -0600, Christopher E. Brown wrote:
>
>>However, multiple busses are *rare* on x86. There are alot of chained
>>busses via PCI to PCI bridge, but few systems with 2 or more PCI
>>busses of any type with parallel access to the CPU.
>
> NUMA-Q has them.
>
Yep, 2 independent busses per quad. That's a _lot_ of busses when you
have an 8 or 16 quad system. (I wonder who has one of those... ;)
Almost all of the server-type boxes that we play with have multiple
PCI busses. Even my old dual-PPro has 2.
--
Dave Hansen
haveblue@us.ibm.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:29 ` Dave Hansen
@ 2002-06-23 7:36 ` William Lee Irwin III
2002-06-23 7:45 ` Dave Hansen
0 siblings, 1 reply; 20+ messages in thread
From: William Lee Irwin III @ 2002-06-23 7:36 UTC (permalink / raw)
To: Dave Hansen
Cc: Christopher E. Brown, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
>> On Sun, Jun 23, 2002 at 12:00:01AM -0600, Christopher E. Brown wrote:
>>> However, multiple busses are *rare* on x86. There are alot of chained
>>> busses via PCI to PCI bridge, but few systems with 2 or more PCI
>>> busses of any type with parallel access to the CPU.
William Lee Irwin III wrote:
>> NUMA-Q has them.
On Sun, Jun 23, 2002 at 12:29:23AM -0700, Dave Hansen wrote:
> Yep, 2 independent busses per quad. That's a _lot_ of busses when you
> have an 8 or 16 quad system. (I wonder who has one of those... ;)
> Almost all of the server-type boxes that we play with have multiple
> PCI busses. Even my old dual-PPro has 2.
I thought I saw 3 PCI and 1 ISA per quad, but maybe that's the
"independent" bit coming into play.
Cheers,
Bill
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:36 ` William Lee Irwin III
@ 2002-06-23 7:45 ` Dave Hansen
2002-06-23 7:55 ` Christopher E. Brown
2002-06-23 16:21 ` Martin J. Bligh
0 siblings, 2 replies; 20+ messages in thread
From: Dave Hansen @ 2002-06-23 7:45 UTC (permalink / raw)
To: William Lee Irwin III
Cc: Christopher E. Brown, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
William Lee Irwin III wrote:
> On Sun, Jun 23, 2002 at 12:29:23AM -0700, Dave Hansen wrote:
>> Yep, 2 independent busses per quad. That's a _lot_ of busses
>> when you have an 8 or 16 quad system. (I wonder who has one of
>> those... ;) Almost all of the server-type boxes that we play with
>> have multiple PCI busses. Even my old dual-PPro has 2.
>
> I thought I saw 3 PCI and 1 ISA per-quad., but maybe that's the
> "independent" bit coming into play.
>
Hmmmm. Maybe there is another one for the onboard devices. I thought
that there were 8 slots and 4 per bus. I could
be wrong. BTW, the ISA slot is EISA and as far as I can tell is only
used for the MDC.
--
Dave Hansen
haveblue@us.ibm.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:45 ` Dave Hansen
@ 2002-06-23 7:55 ` Christopher E. Brown
2002-06-23 8:11 ` David Lang
2002-06-23 8:31 ` Dave Hansen
2002-06-23 16:21 ` Martin J. Bligh
1 sibling, 2 replies; 20+ messages in thread
From: Christopher E. Brown @ 2002-06-23 7:55 UTC (permalink / raw)
To: Dave Hansen
Cc: William Lee Irwin III, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
On Sun, 23 Jun 2002, Dave Hansen wrote:
> William Lee Irwin III wrote:
> > On Sun, Jun 23, 2002 at 12:29:23AM -0700, Dave Hansen wrote:
> >> Yep, 2 independent busses per quad. That's a _lot_ of busses
> >> when you have an 8 or 16 quad system. (I wonder who has one of
> >> those... ;) Almost all of the server-type boxes that we play with
> >> have multiple PCI busses. Even my old dual-PPro has 2.
> >
> > I thought I saw 3 PCI and 1 ISA per-quad., but maybe that's the
> > "independent" bit coming into play.
> >
> Hmmmm. Maybe there is another one for the onboard devices. I thought
> that there were 8 slots and 4 per bus. I could
> be wrong. BTW, the ISA slot is EISA and as far as I can tell is only
> used for the MDC.
Do you mean independent in that there are 2 sets of 4 slots, each
detected as a separate PCI bus, or independent in that each set of 4
has *direct* access to the CPU side and *does not* go through a
PCI:PCI bridge?
I have stacks of PPro/PII/Xeon boards around, but 9 out of 10 have
chained buses. Even the old PPro x 6 (Avion 6600/ALR 6x6/Unisys
HR/HS6000) had 2 PCI buses; however, the second bus hung off of a
PCI:PCI bridge.
--
I route, therefore you are.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:55 ` Christopher E. Brown
@ 2002-06-23 8:11 ` David Lang
2002-06-23 8:31 ` Dave Hansen
1 sibling, 0 replies; 20+ messages in thread
From: David Lang @ 2002-06-23 8:11 UTC (permalink / raw)
To: Christopher E. Brown
Cc: Dave Hansen, William Lee Irwin III, Andreas Dilger,
Griffiths, Richard A, 'Andrew Morton', mgross,
'Jens Axboe', Linux Kernel Mailing List, lse-tech
Most chipsets only have one PCI bus on them, so any others need to be
bridged to that one.
David Lang
On Sun, 23 Jun 2002, Christopher E. Brown wrote:
> Date: Sun, 23 Jun 2002 01:55:28 -0600 (MDT)
> From: Christopher E. Brown <cbrown@woods.net>
> To: Dave Hansen <haveblue@us.ibm.com>
> Cc: William Lee Irwin III <wli@holomorphy.com>,
> Andreas Dilger <adilger@clusterfs.com>,
> "Griffiths, Richard A" <richard.a.griffiths@intel.com>,
> 'Andrew Morton' <akpm@zip.com.au>, mgross@unix-os.sc.intel.com,
> 'Jens Axboe' <axboe@suse.de>,
> Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
> lse-tech@lists.sourceforge.net
> Subject: Re: [Lse-tech] Re: ext3 performance bottleneck as the number of
> spindles gets large
>
> On Sun, 23 Jun 2002, Dave Hansen wrote:
>
> > William Lee Irwin III wrote:
> > > On Sun, Jun 23, 2002 at 12:29:23AM -0700, Dave Hansen wrote:
> > >> Yep, 2 independent busses per quad. That's a _lot_ of busses
> > >> when you have an 8 or 16 quad system. (I wonder who has one of
> > >> those... ;) Almost all of the server-type boxes that we play with
> > >> have multiple PCI busses. Even my old dual-PPro has 2.
> > >
> > > I thought I saw 3 PCI and 1 ISA per-quad., but maybe that's the
> > > "independent" bit coming into play.
> > >
> > Hmmmm. Maybe there is another one for the onboard devices. I thought
> > that there were 8 slots and 4 per bus. I could
> > be wrong. BTW, the ISA slot is EISA and as far as I can tell is only
> > used for the MDC.
>
>
> Do you mean independent in that there are 2 sets of 4 slots each
> detected as a seperate PCI bus, or independent in that each set of 4
> had *direct* access to the cpu side, and *does not* access via a
> PCI:PCI bridge?
>
>
>
> I have stacks of PPro/PII/Xeon boards around, but 9 out of 10 have
> chianed buses. Even the old PPro x 6 (Avion 6600/ALR 6x6/Unisys
> HR/HS6000) had 2 PCI buses, however the second BUS hung off of a
> PCI:PCI bridge.
>
>
> --
> I route, therefore you are.
>
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:55 ` Christopher E. Brown
2002-06-23 8:11 ` David Lang
@ 2002-06-23 8:31 ` Dave Hansen
1 sibling, 0 replies; 20+ messages in thread
From: Dave Hansen @ 2002-06-23 8:31 UTC (permalink / raw)
To: Christopher E. Brown
Cc: William Lee Irwin III, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
Christopher E. Brown wrote:
> Do you mean independent in that there are 2 sets of 4 slots each
> detected as a seperate PCI bus, or independent in that each set of 4
> had *direct* access to the cpu side, and *does not* access via a
> PCI:PCI bridge?
No PCI:PCI bridges, at least for NUMA-Q.
http://telia.dl.sourceforge.net/sourceforge/lse/linux_on_numaq.pdf
--
Dave Hansen
haveblue@us.ibm.com
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 7:45 ` Dave Hansen
2002-06-23 7:55 ` Christopher E. Brown
@ 2002-06-23 16:21 ` Martin J. Bligh
1 sibling, 0 replies; 20+ messages in thread
From: Martin J. Bligh @ 2002-06-23 16:21 UTC (permalink / raw)
To: Dave Hansen, William Lee Irwin III
Cc: Christopher E. Brown, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
> >> Yep, 2 independent busses per quad. That's a _lot_ of busses
> >> when you have an 8 or 16 quad system. (I wonder who has one of
> >> those... ;) Almost all of the server-type boxes that we play with
> >> have multiple PCI busses. Even my old dual-PPro has 2.
> >
> > I thought I saw 3 PCI and 1 ISA per-quad., but maybe that's the
> > "independent" bit coming into play.
> >
> Hmmmm. Maybe there is another one for the onboard devices. I thought
> that there were 8 slots and 4 per bus. I could
> be wrong. BTW, the ISA slot is EISA and as far as I can tell is only
> used for the MDC.
NUMA-Q has 2 PCI buses per quad, 3 slots in one, 4 in the other,
plus the EISA slots.
Multiple independent PCI buses are also available on other more
common architectures, e.g. Netfinity 8500R, x360, x440, etc.
Anything with the Intel Profusion chipset will have this feature;
the bottleneck becomes the "P6 system bus" backplane they're all
connected to, which has a theoretical limit of 800Mb/s IIRC, though
nobody's been able to get more than 420Mb/s out of it in practice,
as far as I know.
The thing that makes the NUMA-Q a massive IO shovelling engine is
having one of these IO backplanes per quad too ... 16 x 800Mb/s
= 12.8Gb/s ;-)
M.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Lse-tech] Re: ext3 performance bottleneck as the number of spindles gets large
2002-06-23 6:35 ` [Lse-tech] " William Lee Irwin III
2002-06-23 7:29 ` Dave Hansen
@ 2002-06-23 17:06 ` Eric W. Biederman
1 sibling, 0 replies; 20+ messages in thread
From: Eric W. Biederman @ 2002-06-23 17:06 UTC (permalink / raw)
To: William Lee Irwin III
Cc: Christopher E. Brown, Andreas Dilger, Griffiths, Richard A,
'Andrew Morton', mgross, 'Jens Axboe',
Linux Kernel Mailing List, lse-tech
William Lee Irwin III <wli@holomorphy.com> writes:
> On Sun, Jun 23, 2002 at 12:00:01AM -0600, Christopher E. Brown wrote:
> > However, multiple busses are *rare* on x86. There are alot of chained
> > busses via PCI to PCI bridge, but few systems with 2 or more PCI
> > busses of any type with parallel access to the CPU.
>
> NUMA-Q has them.
As do the latest round of dual P4 Xeon chipsets. The Intel E7500 and
the Serverworks Grand Champion.
So on new systems this is easy to get if you want it.
Eric
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2002-06-23 17:17 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-06-20 16:24 [Lse-tech] Re: ext3 performance bottleneck as the number of s pindles gets large Gross, Mark
2002-06-20 21:11 ` [Lse-tech] Re: ext3 performance bottleneck as the number of spindles " Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2002-06-23 4:33 Andreas Dilger
2002-06-23 6:00 ` Christopher E. Brown
2002-06-23 6:35 ` [Lse-tech] " William Lee Irwin III
2002-06-23 7:29 ` Dave Hansen
2002-06-23 7:36 ` William Lee Irwin III
2002-06-23 7:45 ` Dave Hansen
2002-06-23 7:55 ` Christopher E. Brown
2002-06-23 8:11 ` David Lang
2002-06-23 8:31 ` Dave Hansen
2002-06-23 16:21 ` Martin J. Bligh
2002-06-23 17:06 ` Eric W. Biederman
2002-06-21 22:03 Duc Vianney
2002-06-21 23:11 ` Andrew Morton
2002-06-22 0:19 ` kwijibo
2002-06-22 8:10 ` kwijibo
[not found] <59885C5E3098D511AD690002A5072D3C057B499E@orsmsx111.jf.intel.com>
2002-06-20 16:10 ` Dave Hansen
2002-06-20 20:47 ` John Hawkes
2002-06-19 21:29 mgross
2002-06-20 0:54 ` Andrew Morton
2002-06-20 4:09 ` [Lse-tech] " Dave Hansen
2002-06-20 6:03 ` Andreas Dilger
2002-06-20 6:53 ` Andrew Morton