From: Linas Jankauskas <linas.j@iv.lt>
To: xfs@oss.sgi.com
Subject: Slow performance after ~4.5TB
Date: Mon, 12 Nov 2012 10:14:13 +0200
Message-ID: <50A0AFD5.2020607@iv.lt>
[-- Attachment #1: Type: text/plain, Size: 2645 bytes --]
Hello,
we have 30 backup servers, each with a 20TB backup partition.
While a server is new and empty, rsync copies data pretty fast, but
once it reaches about 4.5TB, write operations become very slow (about
10 times slower).
I have attached CPU and disk graphs.
As you can see, during the first week, while the server was empty, rsync
was using "user" CPU and data copying was fast. Later rsync started to
use "system" CPU and copying became much slower. The same thing happens
on all our backup servers. Previously we used smaller partitions with
ext4 and had no problems.
rsync spends most of its time in ftruncate:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.99   18.362863      165431       111           ftruncate
  0.00    0.000712           3       224       112 open
  0.00    0.000195           1       257           write
  0.00    0.000171           1       250           read
  0.00    0.000075           1       112           lchown
  0.00    0.000039           0       112           lstat
  0.00    0.000028           0       112           close
  0.00    0.000021           0       112           chmod
  0.00    0.000011           0       396           select
  0.00    0.000000           0       112           utimes
------ ----------- ----------- --------- --------- ----------------
100.00   18.364115                  1798       112 total
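To check whether a bare truncate is itself slow on the affected
filesystem, the call can be timed in isolation. A minimal sketch (the
path below is a placeholder; on the affected machine it would sit on
the /var partition):

```shell
# Write a small file, then time a single truncate on it; this isolates
# the ftruncate syscall that dominates the strace summary above.
f=/tmp/ftest            # placeholder path; use the XFS partition in practice
dd if=/dev/zero of="$f" bs=4096 count=256 status=none
start=$(date +%s%N)
truncate -s 0 "$f"
end=$(date +%s%N)
echo "truncate took $(( (end - start) / 1000 )) us"
rm -f "$f"
```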
I have checked fragmentation, but it's not significant:
xfs_db -c frag -r /dev/sda5
actual 80838233, ideal 80234908, fragmentation factor 0.75%
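Note that the frag command measures file extent fragmentation;
allocation slowdowns can also come from free-space fragmentation,
which xfs_db can histogram with its freesp command. A sketch, guarded
so it is a no-op on machines where the device or xfs_db is absent:

```shell
# Histogram of free-space extent sizes on the device from the report;
# many small free extents can slow allocation even when files
# themselves are not fragmented.
if [ -b /dev/sda5 ] && command -v xfs_db >/dev/null 2>&1; then
    xfs_db -r -c freesp /dev/sda5
fi
```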
Here is some info from xfs_io statfs:
fd.path = "/var"
statfs.f_bsize = 4096
statfs.f_blocks = 5368112145
statfs.f_bavail = 3414301671
statfs.f_files = 4294907072
statfs.f_ffree = 4204584125
geom.bsize = 4096
geom.agcount = 20
geom.agblocks = 268435455
geom.datablocks = 5368633873
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 3414301671
counts.freertx = 0
counts.freeino = 61
counts.allocino = 90323008
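For reference, the counters above come from xfs_io's statfs command;
the invocation would be along these lines (guarded and illustrative,
a no-op where xfs_io is unavailable):

```shell
# Print statfs info and XFS geometry for the filesystem mounted at
# /var, as in the report.  Errors are ignored so this stays harmless
# on non-XFS filesystems.
if command -v xfs_io >/dev/null 2>&1; then
    xfs_io -r -c statfs /var || true
fi
```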
Disk usage:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3      1008M  225M  733M  24% /
/dev/sda1       124M   26M   92M  22% /boot
/dev/sda4       4.0G  522M  3.3G  14% /usr
/dev/sda5        20T  7.3T   13T  37% /var
Inodes:
Filesystem     Inodes IUsed  IFree IUse% Mounted on
/dev/sda3       65536  4974  60562    8% /
/dev/sda1       32768    38  32730    1% /boot
/dev/sda4      262144 15586 246558    6% /usr
Any idea what could be the reason?
Let me know if any other info is needed.
Thanks
Linas
[-- Attachment #2: graph_cpu.png --]
[-- Type: image/png, Size: 43514 bytes --]
[-- Attachment #3: graph_disk.png --]
[-- Type: image/png, Size: 16694 bytes --]
Thread overview: 11+ messages
2012-11-12 8:14 Linas Jankauskas [this message]
2012-11-12 9:04 ` Slow performance after ~4.5TB Dave Chinner
2012-11-12 9:46 ` Linas Jankauskas
2012-11-12 12:32 ` Dave Chinner
2012-11-12 13:58 ` Linas Jankauskas
2012-11-12 22:36 ` Dave Chinner
2012-11-13 9:13 ` Linas Jankauskas
2012-11-13 19:50 ` Dave Chinner
2012-11-14 9:01 ` Linas Jankauskas
2012-11-14 21:13 ` Dave Chinner
2012-11-15 8:34 ` Linas Jankauskas