public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Mark Seger <mjseger@gmail.com>
Cc: Nathan Scott <nathans@redhat.com>, xfs@oss.sgi.com
Subject: Re: definitions for /proc/fs/xfs/stat
Date: Mon, 17 Jun 2013 09:14:29 +1000
Message-ID: <20130616231429.GH29338@dastard>
In-Reply-To: <CAC2B=ZHBxCcvg4DMDdcRBXGrRJ2KVAibW1ToQ3yU5T5bQuHJtA@mail.gmail.com>

On Sun, Jun 16, 2013 at 06:31:13PM -0400, Mark Seger wrote:
> >
> > There is no way that fallocate() of 1000x1k files should be causing
> > 450MB/s of IO for 5 seconds.
> 
> I agree and that's what has me puzzled as well.
> 
> > However, I still have no idea what you are running this test on - as
> > I asked in another email, can you provide some information about
> > the system you are seeing this problem on so we can try to work out
> > what might be causing this?
> >
> 
> sorry about that.  This is an HP box with 192GB RAM and 6 2-core
> hyperthreaded CPUs, running ubuntu/precise
> 
> segerm@az1-sw-object-0006:~$ uname -a
> Linux az1-sw-object-0006 2.6.38-16-server #68hf1026116v20120926-Ubuntu SMP
> Wed Sep 26 14:34:13 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

So it's running a pretty old Ubuntu something-or-other kernel. There's
only limited help I can give you for this kernel as I've got no idea
what Ubuntu have put in it...

> segerm@az1-sw-object-0006:~$ python --version
> Python 2.7.1+
> 
> segerm@az1-sw-object-0006:~$ xfs_repair -V
> xfs_repair version 3.1.4
> 
> segerm@az1-sw-object-0006:~$ cat /proc/meminfo
> MemTotal:       198191696 kB
> MemFree:        166202324 kB
> Buffers:          193268 kB
> Cached:         21595332 kB
....
> over 60 mounts, but here's the one I'm writing to:
> 
> segerm@az1-sw-object-0006:~$ mount | grep disk0
> /dev/sdc1 on /srv/node/disk0 type xfs (rw,nobarrier)
> 
> not sure what you're looking for here so here's it all
> 
> segerm@az1-sw-object-0006:~$ cat /proc/partitions
> major minor  #blocks  name
> 
>    8        0  976762584 sda
>    8        1     248976 sda1
>    8        2          1 sda2
>    8        5  976510993 sda5
>  251        0   41943040 dm-0
>  251        1    8785920 dm-1
>  251        2    2928640 dm-2
>    8       16  976762584 sdb
>    8       17  976760832 sdb1
>  251        3  126889984 dm-3
>  251        4     389120 dm-4
>  251        5   41943040 dm-5
>    8       32 2930233816 sdc
>    8       33 2930233344 sdc1
....

> segerm@az1-sw-object-0006:~$ xfs_info /srv/node/disk0
> meta-data=/dev/sdc1              isize=1024   agcount=32, agsize=22892416 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=732557312, imaxpct=5
>          =                       sunit=64     swidth=64 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=357696, version=2
>          =                       sectsz=512   sunit=64 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

Ok, that's interesting - a 1k inode size, and sunit=swidth=256k. But
it doesn't cause a current kernel to reproduce the behaviour you are
seeing....
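To spell out where that 256k comes from - xfs_info reports sunit and
swidth in filesystem blocks, so you multiply by bsize to get bytes:

```python
# sunit/swidth in the xfs_info output above are in filesystem blocks
# (bsize), not bytes.
bsize = 4096        # data block size from xfs_info
sunit_blocks = 64   # sunit=64 blks from xfs_info
sunit_bytes = sunit_blocks * bsize
print(sunit_bytes)  # 262144, i.e. 256 KiB
```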

sunit=256k is interesting, because:

>     0.067874 cpu=0 pid=41977 fallocate [285] entry fd=15 mode=0x1 offset=0x0 len=10240
>     0.067980 cpu=0 pid=41977 block_rq_insert dev_t=0x04100030 wr=write flags=SYNC sector=0xaec11a00 len=262144

That's a write which is rounded up to 256k.
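A quick sketch of the arithmetic, to show why this rounding explains
the 256k writes but not the total volume of I/O:

```python
# The fallocate len=10240 request is rounded up to the 256 KiB stripe
# unit, which matches the len=262144 in the block_rq_insert event.
def roundup(n, unit):
    return ((n + unit - 1) // unit) * unit

sunit = 256 * 1024                 # stripe unit from the fs geometry
print(roundup(10240, sunit))       # 262144

# Even with that rounding, 1000 such fallocates only account for
# ~250 MiB - nowhere near 450 MB/s sustained for 5 seconds (~2.25 GB).
print(1000 * sunit)                # 262144000
```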

BTW, that's also a trace for a 10k fallocate, not a 1k one, but
regardless it doesn't change behaviour on my TOT test kernel.

> I hope this helps but if there's any more I can provide I'll be
> happy to do so.

It doesn't tell me what XFS is doing with the fallocate call.
Providing the trace-cmd output described in the FAQ might shed some
light on it...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 28+ messages
2013-06-14 16:37 definitions for /proc/fs/xfs/stat Mark Seger
2013-06-14 22:16 ` Nathan Scott
2013-06-14 22:37   ` Mark Seger
2013-06-15  0:17     ` Nathan Scott
2013-06-15  1:55       ` Mark Seger
2013-06-15  2:04         ` Dave Chinner
2013-06-15 10:35           ` Mark Seger
2013-06-15 16:22             ` Mark Seger
2013-06-16  0:11               ` Dave Chinner
2013-06-16 12:58                 ` Mark Seger
2013-06-16 22:06                   ` Dave Chinner
2013-06-16 22:31                     ` Mark Seger
2013-06-16 23:14                       ` Dave Chinner [this message]
2013-06-16 23:31                         ` Mark Seger
2013-06-17  1:11                   ` Nathan Scott
2013-06-17  2:46                     ` Dave Chinner
2013-06-17  5:41                       ` Nathan Scott
2013-06-17 10:57                         ` Mark Seger
2013-06-17 11:13                           ` Dave Chinner
2013-06-17 14:57                             ` Mark Seger
2013-06-17 20:28                               ` Stefan Ring
2013-06-18  0:15                                 ` Dave Chinner
2013-06-18 10:17                                   ` Mark Seger
2013-06-19 23:02                               ` Useful stats (was Re: definitions for /proc/fs/xfs/stat) Nathan Scott
2013-06-17 11:19                         ` definitions for /proc/fs/xfs/stat Dave Chinner
2013-06-17 13:18                           ` Stan Hoeppner
2013-06-18  0:13                     ` Mark Goodwin
2013-06-16  0:00             ` Dave Chinner
