public inbox for linux-xfs@vger.kernel.org
* xfs write performance issue
@ 2015-03-19 17:01 Hans-Peter Jansen
  2015-03-19 17:10 ` Emmanuel Florac
  2015-03-19 23:18 ` Dave Chinner
  0 siblings, 2 replies; 4+ messages in thread
From: Hans-Peter Jansen @ 2015-03-19 17:01 UTC (permalink / raw)
  To: xfs

Hi,

I'm struggling with a severe write performance problem to a 12TB XFS FS.
The system sports an ancient userspace (openSUSE 11.1), but major parts are 
current, e.g. kernel 3.19.1.

Unfortunately, for historical reasons, it's also 32-bit (PAE), and I cannot get 
rid of that quickly..

Partition was migrated several times (to higher capacity disks), and the 
filesystem is somewhat aged too:

~# LANG=C xfs_info /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2929687287, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

~# LANG=C parted /dev/sdc
GNU Parted 1.8.8
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s                                                           
(parted) print                                                            
Model: Areca ARC-1680-VOL#001 (scsi)
Disk /dev/sdc: 23437498368s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End           Size          File system  Name     Flags                 
 1      34s    23437498334s  23437498301s  xfs          primary

(parted) q                                                                

This is on an Areca 1680 RAID 5 set, consisting of:

SLOT 05(0:7) 	Raid Set # 001 	4000.8GB 	Hitachi HUS724040ALE640
SLOT 06(0:6) 	Raid Set # 001 	4000.8GB 	Hitachi HUS724040ALE640
SLOT 07(0:5) 	Raid Set # 001 	4000.8GB 	Hitachi HUS724040ALE640
SLOT 08(0:4) 	Raid Set # 001 	4000.8GB 	HGST HUS724040ALA640 

Volume Set Name 	ARC-1680-VOL#001
Raid Set Name 	Raid Set # 001
Volume Capacity 	12000.0GB
SCSI Ch/Id/Lun 	0/0/3
Raid Level 	Raid 5
Stripe Size 	128KBytes
Block Size 	512Bytes
Member Disks 	4
Cache Mode 	Write Back
Write Protection 	Disabled
Tagged Queuing 	Enabled
Volume State 	Normal

Read performance for a larger file is about 400 MB/s on average (with flushed 
caches, of course):

~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M 
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s

Write performance is disastrous: it's about 1.5 MB/s.

~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M 
482+0 records in
482+0 records out
505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s
1083+0 records in
1083+0 records out
1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 1014.87 s, 1.4 MB/s

The question is: what could explain these numbers? Bad alignment? Bad stripe 
size? And what can I do to resolve this - without losing all my data?
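
For reference, the alignment the array implies can be worked out from the Areca
listing above: a 4-member RAID5 has 3 data disks, so a 128 KiB stripe unit gives
su=128k, sw=3 - i.e. sunit=256 and swidth=768 in 512-byte sectors. A sketch of
that arithmetic (device and mountpoint names are just this system's; whether
realigning helps an already-written filesystem is a separate question):

```shell
# Stripe geometry from the Areca volume listing above:
# 4-member RAID5 -> 3 data disks, 128 KiB stripe unit per disk.
stripe_kib=128
data_disks=3

# XFS expresses alignment in 512-byte sectors (sunit/swidth).
sunit=$(( stripe_kib * 1024 / 512 ))   # 256 sectors = 128 KiB
swidth=$(( sunit * data_disks ))       # 768 sectors = 384 KiB full stripe

echo "sunit=$sunit swidth=$swidth"
# Standard XFS mount options carrying that alignment would be:
echo "mount -o sunit=$sunit,swidth=$swidth /dev/sdc1 /work"
```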

Cheers,
Pete

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: xfs write performance issue
  2015-03-19 17:01 xfs write performance issue Hans-Peter Jansen
@ 2015-03-19 17:10 ` Emmanuel Florac
  2015-03-19 23:18 ` Dave Chinner
  1 sibling, 0 replies; 4+ messages in thread
From: Emmanuel Florac @ 2015-03-19 17:10 UTC (permalink / raw)
  To: Hans-Peter Jansen; +Cc: xfs

Le Thu, 19 Mar 2015 18:01:50 +0100
Hans-Peter Jansen <hpj@urpla.net> wrote:

> Write performance is disastrous: it's about 1.5 MB/s.
> 

The write cache is probably off. RAID array performance with the write
cache off usually falls to abysmal levels. Check that your RAID
controller has the write cache enabled.
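
Besides the controller-level cache mode (the Areca volume listing in the
original mail already shows "Cache Mode: Write Back"), the drive-level write
cache can be checked from the host. A sketch, assuming the sdparm utility is
installed; note that a hardware RAID controller may not pass the query through
to its member disks, and the device name is just an example:

```shell
# Controller-level cache: check the Areca CLI / web GUI volume settings
# ("Cache Mode" should read "Write Back" when the BBU is healthy).

# Drive-level write cache (the WCE bit) for a disk the kernel can see:
sdparm --get=WCE /dev/sdc
```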

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------



* Re: xfs write performance issue
  2015-03-19 17:01 xfs write performance issue Hans-Peter Jansen
  2015-03-19 17:10 ` Emmanuel Florac
@ 2015-03-19 23:18 ` Dave Chinner
  2015-03-20  8:05   ` Hans-Peter Jansen
  1 sibling, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2015-03-19 23:18 UTC (permalink / raw)
  To: Hans-Peter Jansen; +Cc: xfs

On Thu, Mar 19, 2015 at 06:01:50PM +0100, Hans-Peter Jansen wrote:
> ~# LANG=C xfs_info /dev/sdc1
> meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
>          =                       sectsz=512   attr=2, projid32bit=0
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=2929687287, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=32768, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
....
> ~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M 
> 1305+1 records in
> 1305+1 records out
> 1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s
> 
> Write performance is disastrous: it's about 1.5 MB/s.
> 
> ~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M 
> 482+0 records in
> 482+0 records out
> 505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s

Why did it stop halfway through? ENOSPC?

> 1083+0 records in
> 1083+0 records out
> 1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s

That's incomplete, too.

> The question is: what could explain these numbers? Bad alignment? Bad stripe 
> size? And what can I do to resolve this - without losing all my data?

More than likely you've fragmented free space, and so writes
are small random write IO. Output of 'df -h' and:

$ xfs_db -r -c "freesp -s" <dev>

would be instructive.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: xfs write performance issue
  2015-03-19 23:18 ` Dave Chinner
@ 2015-03-20  8:05   ` Hans-Peter Jansen
  0 siblings, 0 replies; 4+ messages in thread
From: Hans-Peter Jansen @ 2015-03-20  8:05 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Hi Dave,

On Friday, 20 March 2015 10:18:54, Dave Chinner wrote:
> On Thu, Mar 19, 2015 at 06:01:50PM +0100, Hans-Peter Jansen wrote:
> > ~# LANG=C xfs_info /dev/sdc1
> > meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
> >          =                       sectsz=512   attr=2, projid32bit=0
> >          =                       crc=0        finobt=0
> > data     =                       bsize=4096   blocks=2929687287, imaxpct=5
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> > log      =internal               bsize=4096   blocks=32768, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> ....
> 
> > ~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M
> > 1305+1 records in
> > 1305+1 records out
> > 1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s
> > 
> > Write performance is disastrous: it's about 1.5 MB/s.
> > 
> > ~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M
> > 482+0 records in
> > 482+0 records out
> > 505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s
> 
> Why did it stop halfway through? ENOSPC?

I signaled it with USR1 (impatient operator..)

> > 1083+0 records in
> > 1083+0 records out
> > 1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s
> 
> That's incomplete, too.

Still impatient.. Sorry for the superfluous output.
 
> > The question is: what could explain these numbers? Bad alignment? Bad
> > stripe size? And what can I do to resolve this - without losing all my
> > data?
> More than likely you've fragmented free space, and so writes
> are small random write IO. Output of 'df -h' and:
> 
> $ xfs_db -r -c "freesp -s" <dev>
> 
> would be instructive.

With pleasure. In fact, I have two very similar sets:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sdc1              11T  8.7T  2.3T  80% /work
/dev/sdd1              11T  4.4T  6.6T  40% /video

~# xfs_db -r -c "freesp -s" /dev/sdc1
   from      to extents  blocks    pct
      1       1   19465   19465   0.00
      2       3   27313   67348   0.01
      4       7   51150  280764   0.05
      8      15  210606 2722553   0.45
     16      31    2730   61134   0.01
     32      63    3396  152440   0.03
     64     127    3012  271804   0.04
    128     255    3083  570011   0.09
    256     511    3108 1134725   0.19
    512    1023    3257 2354022   0.39
   1024    2047    3449 4985593   0.82
   2048    4095    3427 9820314   1.61
   4096    8191    2624 15074450   2.47
   8192   16383    1546 17345180   2.85
  16384   32767     537 11970907   1.97
  32768   65535     216 10151639   1.67
  65536  131071      68 5782598   0.95
 131072  262143      25 4753893   0.78
 262144  524287      40 15117520   2.48
 524288 1048575      32 23913842   3.93
1048576 2097151       7 10855214   1.78
2097152 4194303       3 8824217   1.45
8388608 16777215       1 8572687   1.41
67108864 134217727       4 310678786  51.01
134217728 183105406       1 143607762  23.58
total free extents 339100
total free blocks 609088868
average free extent size 1796.19


~# xfs_db -r -c "freesp -s" /dev/sdd1
   from      to extents  blocks    pct
      1       1     933     933   0.00
      2       3     616    1400   0.00
      4       7     286    1503   0.00
      8      15     549    7084   0.00
     16      31     480   12151   0.00
     32      63     583   25882   0.00
     64     127     463   41568   0.00
    128     255     320   57243   0.00
    256     511     368  135099   0.01
    512    1023     258  180464   0.01
   1024    2047     512  800351   0.05
   2048    4095    1124 3455641   0.20
   4096    8191    1387 8176955   0.46
   8192   16383    1422 16262072   0.92
  16384   32767     920 21547937   1.22
  32768   65535     646 29770655   1.69
  65536  131071     971 108167644   6.13
 131072  262143       7 1378026   0.08
 262144  524287       3 1238114   0.07
 524288 1048575       1  655157   0.04
4194304 8388607       1 6882208   0.39
134217728 268435455       6 1566631060  88.74
total free extents 11856
total free blocks 1765429147
average free extent size 148906
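
As a quick cross-check of the summaries above, the averages follow directly
from the totals (numbers copied from the two freesp reports; this is plain
arithmetic, not an xfs_db feature):

```shell
# totals from the two freesp summaries above
awk 'BEGIN {
    printf "sdc1 avg free extent: %.2f blocks\n", 609088868 / 339100
    printf "sdd1 avg free extent: %.0f blocks\n", 1765429147 / 11856
}'
```

Note that sdc1's average free extent is roughly 80x smaller than sdd1's, yet
both filesystems show the same write speed.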


Both suffer from the same abysmal write performance.

I took care of testing the idle one (sdd).

Thanks,
Pete



end of thread, other threads:[~2015-03-20  8:05 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-19 17:01 xfs write performance issue Hans-Peter Jansen
2015-03-19 17:10 ` Emmanuel Florac
2015-03-19 23:18 ` Dave Chinner
2015-03-20  8:05   ` Hans-Peter Jansen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox