* XFS performance problems on Linux x86_64
From: Johan Andersson @ 2007-11-27 21:20 UTC (permalink / raw)
To: xfs

Hi!

I am using Gentoo Linux on an XFS root filesystem on a number of machines;
some are P4-based i686, and some newer ones are Intel Core 2 Duo based
x86_64. When the new x86_64 machines were put into service, we noticed that
they are extremely slow on file I/O. I have now created two test partitions,
each 5G in size, on the same disk. One is xfs and one is ext3, both
filesystems created with default options. My simple test is to rsync our
local portage tree to the 5G partition:

=====================================================================
tmpc-masv2 xfs # time rsync -r --delete rsync://devsrv/portage portage

real    5m55.037s
user    0m1.291s
sys     0m10.352s

======================================================================
tmpc-masv2 ext3 # time rsync -r --delete rsync://devsrv/portage portage

real    0m28.943s
user    0m1.095s
sys     0m5.384s

I have repeated this a number of times to make sure caching on the server
does not interfere, with about the same results every time.

Any idea why XFS appears to be 12 times slower than ext3 on the 64-bit
machine?
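For completeness, a minimal sketch of how the client side of such a timing
run can be made cold as well, not just the server side. This is an
assumption layered on the test above: it requires root and a kernel with the
/proc/sys/vm/drop_caches knob (present since 2.6.16, so available on the
2.6.22/2.6.23 kernels in this thread):

```shell
# Flush dirty pages, then drop the client's page cache and
# dentry/inode caches so each timed rsync starts cold (run as root):
sync
echo 3 > /proc/sys/vm/drop_caches
time rsync -r --delete rsync://devsrv/portage portage
```

Without this, a second run can be served partly from the client's cache and
understate the on-disk metadata cost being measured here.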
I also have some statistics from bonnie++:

XFS:

> Version  1.93c       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> tmpc-masv2     4G     929  99 48914   8 23036   3  1872  96 50322   4 162.0   1
> Latency              8913us  1675ms   492ms 54567us   161ms   503ms
> Version  1.93c       ------Sequential Create------ --------Random Create--------
> tmpc-masv2          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  3241  13 +++++ +++  3541  13  3729  14 +++++ +++  1001   4
> Latency             60600us     80us 34066us 82412us     22us   269ms

EXT3:

> Version  1.93c       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> tmpc-masv2     4G     581  98 43340   9 22933   4  2435  96 50829   4 153.5   1
> Latency             56412us  2111ms  1885ms 41179us   101ms   690ms
> Version  1.93c       ------Sequential Create------ --------Random Create--------
> tmpc-masv2          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 31286  38 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
> Latency             11233us    145us    165us  7555us      8us     40us

As it looks here, xfs performs OK (though not as well as expected) on large
files, but creating and deleting files is extremely slow.

The machine these tests run on uses Gentoo kernel sources 2.6.23-gentoo-r1
(also tested with 2.6.22-gentoo-r8). xfsprogs is 2.9.4.

/Johan Andersson
* Re: XFS performance problems on Linux x86_64
From: David Chinner @ 2007-11-27 22:05 UTC (permalink / raw)
To: Johan Andersson; +Cc: xfs

On Tue, Nov 27, 2007 at 10:20:05PM +0100, Johan Andersson wrote:
> Hi!
>
> I am using Gentoo Linux on XFS root filesystem on a number of machines,
> where some are P4 based i686, and some new are Intel Core 2 Duo based
> x86_64 based.
> When the new x86_64 based machines were put into service, we noticed
> that they are extremely slow on file io. I have now created two test
> partitions, each 5G in size, on the same disk. One is xfs and one is
> ext3, both filesystems created with default options. My simple test is
> to rsync our local portage tree to the 5G partition:
>
> =====================================================================
> tmpc-masv2 xfs # time rsync -r --delete rsync://devsrv/portage portage
>
> real    5m55.037s
> user    0m1.291s
> sys     0m10.352s
>
> ======================================================================
> tmpc-masv2 ext3 # time rsync -r --delete rsync://devsrv/portage portage
>
> real    0m28.943s
> user    0m1.095s
> sys     0m5.384s
>
> I have repeated this a number of times to make sure caching on the
> server does not interfere, with about the same results every time.
>
> Any idea why XFS appears to be 12 times slower than ext3 on the 64-bit
> machine?

# mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4 <dev>
# mount -o logbsize=256k <dev> <mtpt>

And if you don't care about filesystem corruption on power loss:

# mount -o logbsize=256k,nobarrier <dev> <mtpt>

Those mkfs values (except for log size) will be the defaults in the next
release of xfsprogs.

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group
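As a side note, a sketch of how one might verify what an existing filesystem
was actually made with, and keep the recommended mount option across
reboots. The device and mount point names are placeholders, and the fstab
line is an illustrative assumption, not something given in the thread:

```shell
# Inspect the mkfs-time geometry of a mounted XFS filesystem;
# agcount, log version, lazy-count and attr version all show up here:
xfs_info /mnt/test

# To apply logbsize on every boot, an /etc/fstab entry could look like:
# /dev/sdb1  /mnt/test  xfs  logbsize=256k  0  2
```

This avoids re-running mkfs just to check whether a box already has the
newer defaults.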
* Re: XFS performance problems on Linux x86_64
From: Bernd Schubert @ 2007-11-27 23:13 UTC (permalink / raw)
To: linux-xfs

Hello David,

David Chinner wrote:
>
> # mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4 <dev>
> # mount -o logbsize=256k <dev> <mtpt>

thanks, I was also going to ask which are the optimal parameters. Just
didn't have the time yet :)
Any idea when these options will be the default?

Cheers,
Bernd
* Re: XFS performance problems on Linux x86_64
From: David Chinner @ 2007-11-30 4:58 UTC (permalink / raw)
To: Bernd Schubert; +Cc: linux-xfs

On Wed, Nov 28, 2007 at 12:13:57AM +0100, Bernd Schubert wrote:
> Hello David,
>
> David Chinner wrote:
> >
> > # mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4 <dev>
> > # mount -o logbsize=256k <dev> <mtpt>
>
> thanks, I was also going to ask which are the optimal parameters. Just
> didn't have the time yet :)
> Any idea when these options will be default?

They should already be the defaults in the current CVS tree. ;)

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group
* Re: XFS performance problems on Linux x86_64
From: Tóth Csaba @ 2007-11-30 7:17 UTC (permalink / raw)
To: David Chinner; +Cc: Bernd Schubert, linux-xfs

Hello list,

I tried these parameters and got the results below with bonnie++. As far as
I can tell there isn't any speedup with these parameters; or am I doing
something wrong?

tsabi

oldbck ~ # uname -a
Linux oldbck 2.6.23-gentoo-r2-uk-01 #1 SMP Tue Nov 20 03:43:04 CET 2007
x86_64 Intel(R) Xeon(TM) CPU 2.80GHz GenuineIntel GNU/Linux

test 1:

oldbck mnt # mkfs.xfs -i size=512 -f /dev/md5
meta-data=/dev/md5               isize=512    agcount=32, agsize=4513008 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=144416192, imaxpct=25
         =                       sunit=16     swidth=64 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=262144 blocks=0, rtextents=0
oldbck mnt # mount /dev/md5 /mnt/data
oldbck mnt # bonnie++ -u root -d /mnt/data
Version  1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
oldbck         4G     522  99 235160 50 86692  17  1040  96 241681 22 457.9   9
Latency             15620us   205ms   119ms   100ms 50727us 79657us
Version  1.93c       ------Sequential Create------ --------Random Create--------
oldbck              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 11000  77 +++++ +++ 13765  82 11977  83 +++++ +++ 11049  77
Latency             66657us     49us 76936us 68300us     18us 72243us
1.93c,1.93c,oldbck,1,1196392150,4G,,522,99,235160,50,86692,17,1040,96,241681,22,457.9,9,16,,,,,11000,77,+++++,+++,13765,82,11977,83,+++++,+++,11049,77,15620us,205ms,119ms,100ms,50727us,79657us,66657us,49us,76936us,68300us,18us,72243us

test 2:

oldbck mnt # mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2,size=512 -d agcount=4 /dev/md5
meta-data=/dev/md5               isize=512    agcount=4, agsize=36104048 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=144416192, imaxpct=25
         =                       sunit=16     swidth=64 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=262144 blocks=0, rtextents=0
oldbck mnt # mount -o logbsize=256k,nobarrier /dev/md5 /mnt/data
oldbck mnt # bonnie++ -u root -d /mnt/data
Version  1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
oldbck         4G     523  99 237016 44 87202  17  1040  96 245389 22 446.5   7
Latency             15531us   184ms   133ms   105ms 11835us 85541us
Version  1.93c       ------Sequential Create------ --------Random Create--------
oldbck              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12772  77 +++++ +++ 14759  74 13499  81 +++++ +++ 11617  70
Latency             86835us     56us 79869us 79967us     24us 95818us
1.93c,1.93c,oldbck,1,1196391889,4G,,523,99,237016,44,87202,17,1040,96,245389,22,446.5,7,16,,,,,12772,77,+++++,+++,14759,74,13499,81,+++++,+++,11617,70,15531us,184ms,133ms,105ms,11835us,85541us,86835us,56us,79869us,79967us,24us,95818us

test 3:

oldbck mnt # mkfs.ext3 /dev/md5
mke2fs 1.40.2 (12-Jul-2007)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
72220672 inodes, 144416192 blocks
7220809 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4408 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
oldbck mnt # mount /dev/md5 /mnt/data
oldbck mnt # bonnie++ -u root -d /mnt/data
Version  1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
oldbck         4G     326  99 211619 80 91680  23  1341  96 237493 21 495.5   9
Latency             32953us   220ms  1599ms 62374us 59342us   472ms
Version  1.93c       ------Sequential Create------ --------Random Create--------
oldbck              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency              7677us    165us    252us 17010us     13us    243us
1.93c,1.93c,oldbck,1,1196389368,4G,,326,99,211619,80,91680,23,1341,96,237493,21,495.5,9,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,32953us,220ms,1599ms,62374us,59342us,472ms,7677us,165us,252us,17010us,13us,243us

David Chinner wrote:
> On Wed, Nov 28, 2007 at 12:13:57AM +0100, Bernd Schubert wrote:
>> Hello David,
>>
>> David Chinner wrote:
>>> # mkfs.xfs -f -l lazy-count=1,version=2,size=128m -i attr=2 -d agcount=4 <dev>
>>> # mount -o logbsize=256k <dev> <mtpt>
>> thanks, I was also going to ask which are the optimal parameters. Just
>> didn't have the time yet :)
>> Any idea when these options will be default?
>
> They should already be the defaults in the current CVS tree. ;)
>
> Cheers,
>
> Dave.
* Re: XFS performance problems on Linux x86_64
From: David Chinner @ 2007-11-30 7:54 UTC (permalink / raw)
To: Tóth Csaba; +Cc: David Chinner, Bernd Schubert, linux-xfs

On Fri, Nov 30, 2007 at 08:17:32AM +0100, Tóth Csaba wrote:
> Hello list,
>
> I tried these parameters and got the results below with bonnie++. As far
> as I can tell there isn't any speedup with these parameters; or am I
> doing something wrong?

The latter. Try creating more than 16 files in your test. Maybe 160,000
instead?

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group
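For reference, a sketch of what such a run could look like with bonnie++
itself, reusing the mount point from the earlier tests. The assumption here
is bonnie++'s documented `-n` semantics, where the file count is given in
multiples of 1024, so the "16" in the tables above already means 16 × 1024
files and `-n 160` asks for roughly the 160,000 suggested:

```shell
# Size the file-creation phase at 160*1024 = 163840 files instead of
# the default 16*1024, so metadata create/delete cost dominates:
bonnie++ -u root -d /mnt/data -n 160
```

With only the default file count, the whole working set fits in the log and
caches, which is one plausible reason the two configurations above measured
nearly identically.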
* Re: XFS performance problems on Linux x86_64
From: Tóth Csaba @ 2007-11-30 8:17 UTC (permalink / raw)
To: linux-xfs

Hey,

David Chinner wrote:
> On Fri, Nov 30, 2007 at 08:17:32AM +0100, Tóth Csaba wrote:
>> Hello list,
>>
>> I tried these parameters and got the results below with bonnie++. As far
>> as I can tell there isn't any speedup with these parameters; or am I
>> doing something wrong?
>
> Try creating more than 16 files in your test. Maybe 160,000 instead?

I believe bonnie++ has a test like that too (I don't know exactly which
tests bonnie++ runs). OK, thanks for the reply. I just didn't understand
why I didn't get the same results.

tsabi