public inbox for linux-xfs@vger.kernel.org
* Slow performance after ~4.5TB
@ 2012-11-12  8:14 Linas Jankauskas
  2012-11-12  9:04 ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: Linas Jankauskas @ 2012-11-12  8:14 UTC (permalink / raw)
  To: xfs

[-- Attachment #1: Type: text/plain, Size: 2645 bytes --]

Hello,

we have 30 backup servers, each with a 20TB backup partition.
While a server is new and empty, rsync copies data pretty fast, but
once it reaches about 4.5TB, write operations become very slow (about
10 times slower).

I have attached cpu and disk graphs.

As you can see, during the first week, while the server was empty, rsync
was using "user" CPU and data copying was fast. Later rsync started to
use "system" CPU and copying became much slower. The situation is the
same on all our backup servers. Previously we used a smaller partition
with ext4 and had no such problems.

Most of rsync's time is spent in ftruncate:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  99.99   18.362863      165431       111           ftruncate
   0.00    0.000712           3       224       112 open
   0.00    0.000195           1       257           write
   0.00    0.000171           1       250           read
   0.00    0.000075           1       112           lchown
   0.00    0.000039           0       112           lstat
   0.00    0.000028           0       112           close
   0.00    0.000021           0       112           chmod
   0.00    0.000011           0       396           select
   0.00    0.000000           0       112           utimes
------ ----------- ----------- --------- --------- ----------------
100.00   18.364115                  1798       112 total
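(Editorial aside, not part of the original mail: the per-call latency in the strace summary can be cross-checked from total time over call count. A minimal sketch:)

```shell
# Cross-check the strace table above: 18.362863 s spread over 111
# ftruncate calls should reproduce the reported ~165431 usecs/call.
awk 'BEGIN { printf "%d\n", 18.362863 * 1e6 / 111 }'
```

That is roughly 165 ms per ftruncate call, which is where essentially all of rsync's wall time goes.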


I have checked filesystem fragmentation, but it is not significant:

xfs_db -c frag -r /dev/sda5
actual 80838233, ideal 80234908, fragmentation factor 0.75%
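(Editorial aside: the fragmentation factor xfs_db prints is simply (actual - ideal) / actual extents, so the 0.75% figure can be verified directly from the two counts above:)

```shell
# fragmentation factor = (actual - ideal) / actual extents
awk 'BEGIN { printf "%.2f%%\n", (80838233 - 80234908) * 100 / 80838233 }'
```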

Here is some info from xfs_io's statfs command:

fd.path = "/var"
statfs.f_bsize = 4096
statfs.f_blocks = 5368112145
statfs.f_bavail = 3414301671
statfs.f_files = 4294907072
statfs.f_ffree = 4204584125
geom.bsize = 4096
geom.agcount = 20
geom.agblocks = 268435455
geom.datablocks = 5368633873
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 3414301671
counts.freertx = 0
counts.freeino = 61
counts.allocino = 90323008
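(Editorial aside: the statfs counters are internally consistent with the df output below; used space derived from f_bavail/f_blocks comes out close to the 37% df reports for /var. A quick check, assuming nothing beyond the numbers printed above:)

```shell
# Used-space percentage from the statfs counters: should land near the
# 37% that df shows for /var (small difference from metadata/log space).
awk 'BEGIN { printf "%.0f%%\n", (1 - 3414301671 / 5368112145) * 100 }'
```

Note also counts.allocino: the filesystem already holds about 90 million allocated inodes.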

Partition usage:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3            1008M  225M  733M  24% /
/dev/sda1             124M   26M   92M  22% /boot
/dev/sda4             4.0G  522M  3.3G  14% /usr
/dev/sda5              20T  7.3T   13T  37% /var

Inodes:

Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda3              65536    4974   60562    8% /
/dev/sda1              32768      38   32730    1% /boot
/dev/sda4             262144   15586  246558    6% /usr


Any idea what the reason could be?
Let me know if any other info is needed.

Thanks
Linas

[-- Attachment #2: graph_cpu.png --]
[-- Type: image/png, Size: 43514 bytes --]

[-- Attachment #3: graph_disk.png --]
[-- Type: image/png, Size: 16694 bytes --]




Thread overview: 11+ messages
2012-11-12  8:14 Slow performance after ~4.5TB Linas Jankauskas
2012-11-12  9:04 ` Dave Chinner
2012-11-12  9:46   ` Linas Jankauskas
2012-11-12 12:32     ` Dave Chinner
2012-11-12 13:58       ` Linas Jankauskas
2012-11-12 22:36         ` Dave Chinner
2012-11-13  9:13           ` Linas Jankauskas
2012-11-13 19:50             ` Dave Chinner
2012-11-14  9:01               ` Linas Jankauskas
2012-11-14 21:13             ` Dave Chinner
2012-11-15  8:34               ` Linas Jankauskas
