public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: Question regarding performance on big files.
Date: Mon, 20 Sep 2010 14:48:04 -0500	[thread overview]
Message-ID: <4C97BA74.5030304@hardwarefreak.com> (raw)
In-Reply-To: <4C979439.7070906@opencubetech.com>

Mathieu AVILA put forth on 9/20/2010 12:04 PM:
>  Hello XFS team,
> 
> I have run into trouble with XFS, but excuse me if this question has
> been asked a dozen times.
> 
> I am filling a very big file on an XFS filesystem on Linux that sits
> on a software RAID 0. Performance is very good until I hit 2 "holes"
> during which my writes stall for a few seconds.
> Mkfs parameters:
> mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
> The RAID0 is made of 2 SATA disks of 500 GB each.

What happens when you make the filesystem using defaults?

mkfs.xfs /dev/[device]

Not sure if it is related to your issue, but your manual agcount setting
seems really low.  agcount greatly affects parallelism.  With a manual
setting of 2, you're dictating serial read/write stream behavior to/from
each drive.  This is not good.

I have a server with a single 500GB SATA drive with two XFS filesystem
partitions for data, each of 100GB, and a 35GB EXT partition for the /
filesystem.  Over half the drive space is unallocated.  Yet each XFS
filesystem has 4 default allocation groups.  If I were to create two
more 100GB filesystems, I'd end up with 16 AGs for 400GB worth of XFS
filesystems on a single 500GB drive.

meta-data=/dev/sda6    isize=256    agcount=4, agsize=6103694 blks
         =             sectsz=512   attr=2
data     =             bsize=4096   blocks=24414775, imaxpct=25
         =             sunit=0      swidth=0 blks
naming   =version 2    bsize=4096
log      =internal     bsize=4096   blocks=11921, version=2
         =             sectsz=512   sunit=0 blks, lazy-count=0
realtime =none         extsz=4096   blocks=0, rtextents=0

My suggestion would be to create the filesystem using default values and
see what you get.  2.6.18 is rather old, and I don't know if XFS picks
up the mdraid config and uses that info accordingly.  Newer versions of
XFS do this automatically and correctly, so you don't need to manually
specify anything with mkfs.xfs.

If default mkfs values still yield problems, remake the filesystem
specifying the stripe geometry, e.g. '-d su=<chunk size>,sw=2' (mkfs.xfs
requires su and sw to be given together), and retest.
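A safe way to experiment with stripe geometry before touching the real
array is to point mkfs.xfs at a sparse file image.  The paths below and
the 64k chunk size are assumptions for illustration (64k was a common md
default), not values from this thread:

```shell
# Sketch only: requires xfsprogs; nothing real gets formatted.
truncate -s 1G /tmp/xfs_raid0_test.img

# su = md chunk size, sw = number of data disks in the RAID0.
# mkfs.xfs requires su and sw to be specified together.
mkfs.xfs -f -d su=64k,sw=2 /tmp/xfs_raid0_test.img
```

The mkfs.xfs summary should then report sunit/swidth in filesystem
blocks (64k/4k = 16 blocks, times 2 disks = 32 blocks).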

You specified '-b size=4096'.  This is the default for block size so
there's no need to specify it.

You specified '-s size=4096'.  The filesystem sector size cannot be
smaller than the sector size of the underlying physical disks, which is
512 bytes in your case, and there is rarely a reason to raise it above
that default.  This may be part of your problem as well.
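You can confirm what mkfs.xfs chooses on its own with a quick check
against a scratch file image (the /tmp path is hypothetical; this
assumes xfsprogs is installed):

```shell
# Sketch: format a scratch image with no -s option at all.
truncate -s 512M /tmp/xfs_sector_test.img

# On ordinary 512-byte-sector storage, mkfs.xfs defaults to sectsz=512.
mkfs.xfs -f /tmp/xfs_sector_test.img | grep sectsz
```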

You specified '-d agcount=2'.  From man mkfs.xfs:

"The data section of the filesystem is divided into _value_ allocation
groups (default value is scaled automatically based on the underlying
device size)."

My guess is that mkfs.xfs with no manual agcount forced would yield
something like 32-40 allocation groups on your RAID0 1TB XFS
filesystem.  Theoretically, this should boost your performance 16-20
times over your current agcount setting of 2 allocation groups.  In
reality the boost won't be nearly that great, but your performance
should be greatly improved nonetheless.
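Whatever mkfs.xfs actually picks can be read back from the superblock
with xfs_db.  Shown here against a scratch image rather than a real
device (the path is an assumption; a 1GB image formatted with defaults
gets the minimum of 4 AGs):

```shell
# Sketch: create a scratch image with default mkfs options,
# then read agcount from superblock 0 with xfs_db in read-only mode.
truncate -s 1G /tmp/xfs_ag_test.img
mkfs.xfs -f /tmp/xfs_ag_test.img >/dev/null
xfs_db -r -c "sb 0" -c "p agcount" /tmp/xfs_ag_test.img
```

The same 'xfs_db -r' invocation works against '/dev/md0' (or a mounted
filesystem via xfs_info) to check your real array.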

-- 
Stan


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 6+ messages
2010-09-20 17:04 Question regarding performance on big files Mathieu AVILA
2010-09-20 19:48 ` Stan Hoeppner [this message]
2010-09-22 10:26   ` Mathieu AVILA
2010-09-22 20:41     ` Stan Hoeppner
2010-09-23  8:55       ` Mathieu AVILA
2010-09-23 22:03         ` Stan Hoeppner
