public inbox for linux-kernel@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Justin Piszcz <jpiszcz@lucidpixels.com>
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: Re: 2.6.38.4: xfs speed problem?
Date: Sun, 8 May 2011 10:33:22 +1000
Message-ID: <20110508003321.GI26837@dastard>
In-Reply-To: <alpine.DEB.2.02.1105071042210.20162@p34.internal.lan>

On Sat, May 07, 2011 at 12:09:46PM -0400, Justin Piszcz wrote:
> Hello,
> 
> Using 2.6.38.4 on two hosts:
> 
> Host 1:
> $ /usr/bin/time find geocities.data 1> /dev/null
> 80.92user 417.93system 2:19:07elapsed 5%CPU (0avgtext+0avgdata 105520maxresident)k
> 0inputs+0outputs (0major+73373minor)pagefaults 0swaps
> 
> # xfs_db -c frag -f /dev/sda1
> actual 40203982, ideal 40088075, fragmentation factor 0.29%
> 
> meta-data=/dev/sda1              isize=256    agcount=44, agsize=268435455 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=11718704640, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> --
> 
> Host 2:
> $ /usr/bin/time find geocities.data 1>/dev/null
> 54.60user 337.20system 48:42.71elapsed 13%CPU (0avgtext+0avgdata 105632maxresident)k
> 0inputs+0outputs (1major+72981minor)pagefaults 0swaps
> 
> # xfs_db -c frag -f /dev/sdb1
> actual 37998306, ideal 37939331, fragmentation factor 0.16%
> 
> meta-data=/dev/sdb1              isize=256    agcount=10, agsize=268435455 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=2441379328, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=521728, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> 
> --
> 
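FWIW, both filesystems are effectively unfragmented - the
fragmentation factor xfs_db reports is just (actual - ideal) /
actual extents. Checking against the host 1 numbers above:

$ echo 'scale=6; (40203982 - 40088075) / 40203982 * 100' | bc
.288200

i.e. the 0.29% reported, so fragmentation doesn't look like it
explains the difference between the two hosts.
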
> Host 1: RAID-6 (7200 RPM Drives, 18+1 hot spare)

Those will be 3TB drives,

> Host 2: RAID-6 (7200 RPM Drives, 12)

and those are 1TB drives.

Different hardware is guaranteed to give you different performance,
especially in terms of seek capability.
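
The elapsed times quoted above (2:19:07 vs 48:42.71, i.e. roughly
8347s vs 2923s) actually differ by nearly a factor of three:

$ echo 'scale=2; (2*3600 + 19*60 + 7) / (48*60 + 43)' | bc
2.85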

> Each system uses a 3ware 9750-24i4e controller, same settings.
> 
> Any thoughts why one is > 2x faster than the other?

Different filesystem sizes mean different directory, inode and data
layouts, especially if you are using inode64.
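
You can check whether inode64 is in effect from the mount options,
and compare the allocation group counts (agcount=44 vs agcount=10
in the output above) without unmounting anything, with something
like:

$ grep 'sd[ab]1' /proc/mounts
$ xfs_db -r -c 'sb 0' -c 'p agcount' /dev/sda1
agcount = 44

With 44 allocation groups on one filesystem and 10 on the other,
inodes and directories will be spread very differently across the
disks.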

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 3+ messages
2011-05-07 16:09 2.6.38.4: xfs speed problem? Justin Piszcz
2011-05-08  0:33 ` Dave Chinner [this message]
2011-05-08 17:18   ` Stan Hoeppner
