public inbox for linux-xfs@vger.kernel.org
From: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
To: Dave Chinner <david@fromorbit.com>
Cc: stan@hardwarefreak.com, "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: suddenly slow writes on XFS Filesystem
Date: Mon, 07 May 2012 08:39:13 +0200	[thread overview]
Message-ID: <4FA76E11.1070708@profihost.ag> (raw)
In-Reply-To: <20120507013456.GW5091@dastard>

Hi,

After deleting 400GB it was faster. Now there are still 300GB free,
but it is slow as hell again ;-(

On 07.05.2012 03:34, Dave Chinner wrote:
> On Sun, May 06, 2012 at 11:01:14AM +0200, Stefan Priebe wrote:
>> Hi,
>>
>> for a few days now I've been experiencing a really slow fs on one of
>> our backup systems.
>>
>> I'm not sure whether this is XFS related or related to the
>> Controller / Disks.
>>
>> It is a RAID 10 of 20 SATA disks and I can only write to them at
>> about 700kb/s while doing random I/O.
> 
> What sort of random IO? size, read, write, direct or buffered, data
> or metadata, etc?
There are 4 rsync processes running and doing backups of other servers.
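
Per-process I/O for those rsync jobs could also be broken out with
pidstat, which ships in the same sysstat package as iostat (assuming
it is installed), e.g.:

~ # pidstat -d 5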

> iostat -x -d -m 5 and vmstat 5 traces would be
> useful to see if it is your array that is slow.....

~ # iostat -x -d -m 5
Linux 2.6.40.28intel (server844-han)    05/07/2012      _x86_64_        (8 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0,00     0,00  254,80   25,40     1,72     0,16    13,71     0,86    3,08   2,39  67,06
sda               0,00     0,20    0,00    1,20     0,00     0,00     6,50     0,00    0,00   0,00   0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0,00     0,00  187,40   24,20     1,26     0,19    14,05     0,75    3,56   3,33  70,50
sda               0,00     0,00    0,00    0,40     0,00     0,00     4,50     0,00    0,00   0,00   0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0,00    11,20  242,40   92,00     1,56     0,89    15,00     4,70   14,06   1,58  52,68
sda               0,00     0,20    0,00    2,60     0,00     0,02    12,00     0,00    0,00   0,00   0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0,00     0,00  166,20   24,00     0,99     0,17    12,51     0,57    3,02   2,40  45,56
sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdb               0,00     0,00  188,00   25,40     1,22     0,16    13,23     0,44    2,04   1,78  38,02
sda               0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00


# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 7  0      0 788632     48 12189652    0    0   173   395   13   45  1 16 82  1
[root@server844-han /serverbackup (master)]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 4  0      0 778148     48 12189776    0    0   173   395   13   45  1 16 82  1
[root@server844-han /serverbackup (master)]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  0      0 774372     48 12189876    0    0   173   395   13   45  1 16 82  1
[root@server844-han /serverbackup (master)]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 5  0      0 771240     48 12189936    0    0   173   395   13   45  1 16 82  1
[root@server844-han /serverbackup (master)]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 6  0      0 768636     48 12190000    0    0   173   395   13   45  1 16 82  1
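
Note: vmstat run without an interval argument only prints averages
since boot, which is why the bi/bo and CPU numbers above are identical
in every run. The 5-second samples asked for above would come from
something like:

~ # vmstat 5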

> 
>> I tried vanilla kernels 3.0.30
>> and 3.3.4 - no difference. Writing to another partition on another
>> xfs array works fine.
>>
>> Details:
>> #~ df -h
>> /dev/sdb1             4,6T  4,4T  207G  96% /mnt
> 
> Your filesystem is near full - the allocation algorithms definitely
> slow down as you approach ENOSPC, and IO efficiency goes to hell
> because of a lack of contiguous free space to allocate from.
I'm now at 94% used but it is still slow. It seems it was only fast
while there was more than 450GB of free space.

/dev/sdb1             4,6T  4,3T  310G  94% /mnt
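
For reference, free space fragmentation can be checked with xfs_db's
freesp command (a read-only query; assuming xfsprogs is installed and
using the device from the df output above), e.g.:

~ # xfs_db -r -c "freesp -s" /dev/sdb1

A long tail of small free extents in that histogram would explain why
allocation slows down well before the filesystem is completely full.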

>> #~ df -i
>> /dev/sdb1            4875737052 4659318044 216419008  96% /mnt
> You have 4.6 *billion* inodes in your filesystem?
Yes - it backs up around 100 servers with a lot of files.
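
For scale, 4,659,318,044 used inodes across roughly 100 backed-up
servers works out to about 47 million files per server. The filesystem
geometry behind that (agcount, imaxpct, etc.) can be dumped with
xfs_info (assuming xfsprogs is installed), e.g.:

~ # xfs_info /mnt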

Greets, Stefan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 33+ messages
2012-05-06  9:01 suddenly slow writes on XFS Filesystem Stefan Priebe
2012-05-06 10:31 ` Martin Steigerwald
2012-05-06 10:33 ` Martin Steigerwald
2012-05-06 15:45   ` Stan Hoeppner
2012-05-06 19:25     ` Stefan Priebe
2012-05-07  1:39       ` Dave Chinner
2012-05-06 21:43     ` Martin Steigerwald
2012-05-07  6:40       ` Stefan Priebe - Profihost AG
2012-05-07  1:34 ` Dave Chinner
2012-05-07  6:39   ` Stefan Priebe - Profihost AG [this message]
2012-05-07  7:17     ` Dave Chinner
2012-05-07  7:22       ` Stefan Priebe - Profihost AG
2012-05-07 16:36         ` Stan Hoeppner
2012-05-07 19:08           ` Martin Steigerwald
2012-05-07 20:05           ` Stefan Priebe
2012-05-09  6:57             ` Stan Hoeppner
2012-05-09  7:04               ` Dave Chinner
2012-05-09  7:36                 ` Stefan Priebe - Profihost AG
2012-05-09  7:49                 ` Stan Hoeppner
2013-02-15 15:06                 ` 32bit apps and inode64 Stefan Priebe - Profihost AG
2013-02-15 21:46                   ` Ben Myers
2013-02-16 10:24                     ` Stefan Priebe - Profihost AG
2013-02-17 21:33                       ` Dave Chinner
2013-02-18  8:12                         ` Stefan Priebe - Profihost AG
2013-02-18 22:06                           ` Dave Chinner
2013-02-17  8:13                     ` Jeff Liu
2013-02-19 19:11                       ` Ben Myers
2012-05-07 23:42         ` suddenly slow writes on XFS Filesystem Dave Chinner
2012-05-07  8:21     ` Martin Steigerwald
2012-05-07 16:44       ` Stan Hoeppner
2012-05-07  8:31     ` Martin Steigerwald
2012-05-07 13:57       ` Stefan Priebe - Profihost AG
2012-05-07 14:32         ` Martin Steigerwald
