From: Hans-Peter Jansen <hpj@urpla.net>
To: xfs@oss.sgi.com
Subject: xfs write performance issue
Date: Thu, 19 Mar 2015 18:01:50 +0100 [thread overview]
Message-ID: <8976870.8vOdNBKrI1@xrated> (raw)
Hi,
I'm struggling with a severe write-performance problem on a 12 TB XFS filesystem.
The system sports an ancient userspace (openSUSE 11.1), but major parts are
current, e.g. kernel 3.19.1.
Unfortunately, for historical reasons, it's also 32-bit (PAE), and I cannot get
rid of that quickly.
The partition was migrated several times (to higher-capacity disks), and the
filesystem is somewhat aged, too:
~# LANG=C xfs_info /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2929687287, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
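Note that sunit=0/swidth=0 above means XFS was never told about the RAID
geometry. As a sketch (assuming the 128 KiB stripe size and 4-disk RAID 5
the controller reports; /mountpoint is a placeholder), the mount options
describing the array would be derived like this:

```shell
# Derive XFS sunit/swidth mount options (in 512-byte sectors) from the
# assumed array geometry: 128 KiB per-disk stripe, 4 members, RAID 5.
stripe_kb=128                       # per-disk stripe size from the controller
data_disks=3                        # 4 members minus 1 parity disk (RAID 5)
sunit=$((stripe_kb * 1024 / 512))   # stripe unit in sectors -> 256
swidth=$((sunit * data_disks))      # full stripe width in sectors -> 768
echo "mount -o sunit=${sunit},swidth=${swidth} /dev/sdc1 /mountpoint"
```

Whether remounting with these options helps an already-misaligned partition
is a separate question, of course.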
~# LANG=C parted /dev/sdc
GNU Parted 1.8.8
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
Model: Areca ARC-1680-VOL#001 (scsi)
Disk /dev/sdc: 23437498368s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start  End           Size          File system  Name     Flags
 1      34s    23437498334s  23437498301s  xfs          primary
(parted) q
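One thing that stands out in the parted output: the partition starts at
sector 34, which is not a multiple of the stripe unit (assuming the 128 KiB
stripe size the controller reports). A quick check:

```shell
# Check whether the partition start (sector 34, from the parted output)
# is aligned to the RAID stripe unit; a non-zero remainder means every
# "full-stripe" write from the FS straddles two physical stripes.
start=34
stripe_sectors=$((128 * 1024 / 512))   # 128 KiB stripe = 256 sectors
echo "misalignment: $((start % stripe_sectors)) sectors"
```

A 34-sector offset would force the controller into read-modify-write
cycles on essentially every write, which could at least partly explain
the numbers below.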
This is on an Areca 1680 RAID 5 set, consisting of:
SLOT 05(0:7) Raid Set # 001 4000.8GB Hitachi HUS724040ALE640
SLOT 06(0:6) Raid Set # 001 4000.8GB Hitachi HUS724040ALE640
SLOT 07(0:5) Raid Set # 001 4000.8GB Hitachi HUS724040ALE640
SLOT 08(0:4) Raid Set # 001 4000.8GB HGST HUS724040ALA640
Volume Set Name ARC-1680-VOL#001
Raid Set Name Raid Set # 001
Volume Capacity 12000.0GB
SCSI Ch/Id/Lun 0/0/3
Raid Level Raid 5
Stripe Size 128KBytes
Block Size 512Bytes
Member Disks 4
Cache Mode Write Back
Write Protection Disabled
Tagged Queuing Enabled
Volume State Normal
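For reference, the arithmetic from the controller report above checks out
(this is just a sanity sketch using the reported figures):

```shell
# Sanity-check the controller's numbers: 4 members in RAID 5 yield
# 3 disks' worth of usable capacity, and a full stripe carries
# 3 x 128 KiB of data plus one 128 KiB parity chunk.
members=4
disk_gb=4000
stripe_kb=128
echo "usable capacity: $(( (members - 1) * disk_gb )) GB"      # matches 12000.0GB
echo "full data stripe: $(( (members - 1) * stripe_kb )) KiB"
```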
Read performance for a large file is about 400 MB/s on average (with caches
flushed, of course):
~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s
Write performance, however, is disastrous: about 1.4 MB/s.
~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M
482+0 records in
482+0 records out
505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s
1083+0 records in
1083+0 records out
1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 1014.87 s, 1.4 MB/s
The question is: what could explain these numbers? Bad alignment? Bad stripe
size? And what can I do to resolve this - without losing all my data?
Cheers,
Pete
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 4+ messages
2015-03-19 17:01 Hans-Peter Jansen [this message]
2015-03-19 17:10 ` xfs write performance issue Emmanuel Florac
2015-03-19 23:18 ` Dave Chinner
2015-03-20 8:05 ` Hans-Peter Jansen