public inbox for linux-xfs@vger.kernel.org
From: yy <yy@xspring.net>
To: xfs <xfs@oss.sgi.com>
Subject: XFS buffer IO performance is very poor
Date: Wed, 11 Feb 2015 15:39:52 +0800	[thread overview]
Message-ID: <tencent_39AFE52BC5E06F89B6B2B3ED@qq.com> (raw)



Hi,
I ran some tests with fio on XFS, and I found that buffered IO performance is very poor. Here are some results:


                   read (iops)   write (iops)
direct IO + ext3       1848          1232
buffer IO + ext3       1976          1319
direct IO + XFS        1954          1304
buffer IO + XFS         307           203


I do not understand why there is such a big difference; ext3 is much better.
direct IO parameters:
fio --filename=/data1/fio.dat --direct=1 --thread --rw=randrw --rwmixread=60 --ioengine=libaio --runtime=300 --iodepth=1 --size=40G --numjobs=32 -name=test_rw --group_reporting --bs=16k --time_base


buffer IO parameters:
fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60 --ioengine=libaio --runtime=300 --iodepth=1 --size=40G --numjobs=32 -name=test_rw --group_reporting --bs=16k --time_base
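(A side note, as a hedged sketch rather than a definitive diagnosis: the libaio engine only submits asynchronously when direct=1; with buffered IO, io_submit() executes synchronously in the submitting thread. A synchronous engine such as psync may therefore be the more representative buffered-IO baseline. The variant below is an untested sketch that reuses the parameters above, with only the engine swapped and the runtime shortened:)

```shell
# Hedged variant of the buffered-IO job above: swap libaio for psync,
# since libaio degrades to synchronous submission when direct=0.
# All other parameters are taken from the original command line.
fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60 \
    --ioengine=psync --runtime=60 --iodepth=1 --size=40G --numjobs=32 \
    --name=test_rw --group_reporting --bs=16k --time_based
```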


The system I used for my tests:
HW server: 4 cores (Intel), 32GB RAM, running RHEL 6.5
Kernel:2.6.32-431.el6.x86_64
storage: 10disks RAID1+0, stripe size: 256KB


XFS format parameters:
#mkfs.xfs -d su=256k,sw=5 /dev/sdb1
#cat /proc/mounts
/dev/sdb1 /data1 xfs rw,noatime,attr2,delaylog,nobarrier,logbsize=256k,sunit=512,swidth=2560,noquota 0 0
#fdisk -ul
Device Boot   Start     End   Blocks  Id System
/dev/sdb1       128 2929356359 1464678116  83 Linux
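For what it's worth, a quick cross-check of the geometry reported above (a sketch; the 512-byte sector size is assumed from the fdisk output, and su/sw are taken from the mkfs.xfs line):

```shell
# Cross-check the XFS/RAID geometry reported above.
# Assumes 512-byte sectors; sunit/swidth come from /proc/mounts
# (in 512-byte units), START from fdisk -ul.
SECTOR=512
SUNIT=512        # /proc/mounts value, 512-byte units
SWIDTH=2560      # /proc/mounts value, 512-byte units
START=128        # partition start sector of /dev/sdb1

echo "stripe unit : $((SUNIT * SECTOR / 1024)) KiB"    # 256 KiB, matches su=256k
echo "stripe width: $((SWIDTH * SECTOR / 1024)) KiB"   # 1280 KiB = 5 * 256 KiB
OFFSET=$((START * SECTOR))
if [ $((OFFSET % (SUNIT * SECTOR))) -eq 0 ]; then
    echo "partition start ($OFFSET B) is stripe-aligned"
else
    echo "partition start ($OFFSET B) is NOT stripe-aligned"
fi
```

Note that a start sector of 128 is a 64 KiB offset, which is not a multiple of the 256 KiB stripe unit.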




# fio --filename=/data1/fio.dat --direct=0 --thread --rw=randrw --rwmixread=60 --ioengine=libaio --runtime=300 --iodepth=1 --size=40G --numjobs=32 -name=test_rw --group_reporting --bs=16k --time_base
test_rw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
...
test_rw: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=libaio, iodepth=1
fio-2.0.13
Starting 32 threads
Jobs: 32 (f=32): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [5466K/3644K/0K /s] [341 /227 /0 iops] [eta 00m:00s]
test_rw: (groupid=0, jobs=32): err= 0: pid=5711: Wed Feb 11 15:26:30 2015
 read : io=1442.2MB, bw=4922.3KB/s, iops=307 , runt=300010msec
  slat (usec): min=7 , max=125345 , avg=5765.52, stdev=3741.61
  clat (usec): min=0 , max=192 , avg= 2.72, stdev= 1.12
  lat (usec): min=7 , max=125348 , avg=5770.09, stdev=3741.68
  clat percentiles (usec):
  | 1.00th=[  1], 5.00th=[  2], 10.00th=[  2], 20.00th=[  2],
  | 30.00th=[  2], 40.00th=[  3], 50.00th=[  3], 60.00th=[  3],
  | 70.00th=[  3], 80.00th=[  3], 90.00th=[  3], 95.00th=[  4],
  | 99.00th=[  4], 99.50th=[  4], 99.90th=[  14], 99.95th=[  16],
  | 99.99th=[  20]
  bw (KB/s) : min=  16, max= 699, per=3.22%, avg=158.37, stdev=85.79
 write: io=978736KB, bw=3262.4KB/s, iops=203 , runt=300010msec
  slat (usec): min=10 , max=577043 , avg=148215.93, stdev=125650.40
  clat (usec): min=0 , max=198 , avg= 2.50, stdev= 1.26
  lat (usec): min=11 , max=577048 , avg=148220.20, stdev=125650.94
  clat percentiles (usec):
  | 1.00th=[  1], 5.00th=[  1], 10.00th=[  1], 20.00th=[  2],
  | 30.00th=[  2], 40.00th=[  2], 50.00th=[  3], 60.00th=[  3],
  | 70.00th=[  3], 80.00th=[  3], 90.00th=[  3], 95.00th=[  3],
  | 99.00th=[  4], 99.50th=[  6], 99.90th=[  14], 99.95th=[  14],
  | 99.99th=[  17]
  bw (KB/s) : min=  25, max= 448, per=3.17%, avg=103.28, stdev=46.76
  lat (usec) : 2=6.40%, 4=88.39%, 10=4.93%, 20=0.27%, 50=0.01%
  lat (usec) : 100=0.01%, 250=0.01%
 cpu     : usr=0.00%, sys=0.13%, ctx=238853, majf=18446744073709551520, minf=18446744073709278371
 IO depths  : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, =64=0.0%
  submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, =64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, =64=0.0%
  issued  : total=r=92296/w=61171/d=0, short=r=0/w=0/d=0


Run status group 0 (all jobs):
 READ: io=1442.2MB, aggrb=4922KB/s, minb=4922KB/s, maxb=4922KB/s, mint=300010msec, maxt=300010msec
 WRITE: io=978736KB, aggrb=3262KB/s, minb=3262KB/s, maxb=3262KB/s, mint=300010msec, maxt=300010msec


Disk stats (read/write):
 sdb: ios=89616/55141, merge=0/0, ticks=442611/171325, in_queue=613823, util=97.08%
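One detail that stands out in the output above: with iodepth=1, each job completes at most one IO per (slat + clat), and here slat dominates (clat is only a few microseconds). A rough arithmetic sanity check, using the reported averages (a sketch, not a definitive analysis; it suggests the threads spend almost all their time blocked in submission):

```shell
# With iodepth=1 and a 60/40 randrw mix, per-job throughput is bounded by
# the average per-IO time, which here is almost entirely submission
# latency (slat). Averages below are rounded from the fio output above.
JOBS=32
READ_SLAT_US=5766        # avg read slat
WRITE_SLAT_US=148216     # avg write slat

# weighted per-IO time: 60% reads, 40% writes
AVG_US=$(( (6 * READ_SLAT_US + 4 * WRITE_SLAT_US) / 10 ))
echo "avg per-IO time     : ${AVG_US} us"
echo "predicted total iops: ~$((JOBS * 1000000 / AVG_US))"
# reported total: 307 read + 203 write = 510 iops
```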


Thread overview: 8+ messages
2015-02-11  7:39 yy [this message]
2015-02-11 13:35 ` XFS buffer IO performance is very poor Brian Foster
2015-02-11 16:08 ` Eric Sandeen
  -- strict thread matches above, loose matches on Subject: below --
2015-02-12  5:30 yy
2015-02-12  6:59 yy
2015-02-12 21:04 ` Dave Chinner
2015-02-13  2:20 yy
2015-02-13 13:46 ` Carlos Maiolino
