public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>,
	Emmanuel Florac <eflorac@intellique.com>
Cc: Steve Brooks <steveb@mcs.st-and.ac.uk>, xfs@oss.sgi.com
Subject: Re: XFS tune to adaptec ASR71605
Date: Tue, 06 May 2014 18:20:46 -0500	[thread overview]
Message-ID: <53696E4E.5030404@hardwarefreak.com> (raw)
In-Reply-To: <20140506200917.GM5421@dastard>

On 5/6/2014 3:09 PM, Dave Chinner wrote:
> On Tue, May 06, 2014 at 03:51:49PM +0200, Emmanuel Florac wrote:
>> Le Tue, 6 May 2014 14:14:35 +0100 (BST)
>> Steve Brooks <steveb@mcs.st-and.ac.uk> écrivait:
>>
>>> Thanks for the reply Emmanuel, I installed and ran bonnie++ although
>>> I will need to research the results
>>
>> Yup, they're... weird :) Write speed is abysmal, but random seeks very
>> high. Please try my settings so that we can compare the numbers more
>> directly:
> 
> Friends don't let friends use bonnie++ for benchmarking storage.
> The numbers you get will be irrelevant to your application, and it's
> so synthetic it doesn't reflect any real-world workload at all.
> 
> The only useful benchmark for determining if changes are going to
> improve application performance is to measure your application's
> performance.

Exactly.  The OP's post begins with:

"After much reading around this is what I came up with...  All hosts
have 16x4TB WD RE WD4000FYYZ drives and will run RAID 6...

Stripe-unit size                         : 512 KB"

That's a 7 MB stripe width (14 data drives x 512 KB).  Such a setup is
suitable for large streaming workloads which generate no RMW, and that's
about it.  For anything involving random writes, performance will be
very low even with write cache enabled, because each writeback operation
involves reading and writing at least 1.5 MB.  Depending on the ARC
firmware, if it does scrubbing, it may read/write the full 7 MB stripe
for each RMW operation.
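To make the arithmetic above explicit, here is a small sketch using the
numbers from the thread (16 drives, RAID 6, 512 KiB stripe unit); the
variable names are illustrative, not from any tool:

```python
KIB = 1024
drives = 16
parity = 2               # RAID 6 dedicates two drives' worth of parity per stripe
su = 512 * KIB           # stripe unit: the per-drive chunk size

data_drives = drives - parity          # 16 - 2 = 14 data drives
stripe_width = data_drives * su        # full stripe = 14 * 512 KiB = 7 MiB
print(stripe_width // (KIB * KIB))     # -> 7 (MiB)

# A sub-stripe random write must read the old data chunk plus both
# parity chunks before writing them back: 3 * 512 KiB each way.
rmw_min = 3 * su
print(rmw_min / (KIB * KIB))           # -> 1.5 (MiB)
```

So even the cheapest RMW moves 1.5 MB in each direction, and a firmware
that reconstructs the whole stripe would move the full 7 MB.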

"Everything begins and ends with the workload".

Describe the workload on each machine, if not all the same, and we'll be
in a far better position to advise what RAID level and stripe unit size
you should use, and how best to configure XFS.
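For reference, if the geometry stays as described above, XFS can be told
the array layout at mkfs time with `su` (per-drive stripe unit) and `sw`
(number of data drives).  A sketch, with /dev/sdX standing in for the
array's block device:

```sh
# 512 KiB stripe unit, 14 data drives (16 drives minus 2 parity) on RAID 6.
# /dev/sdX is a placeholder; substitute the actual array device.
mkfs.xfs -d su=512k,sw=14 /dev/sdX
```

The right values depend on the RAID level and stripe unit you settle on
after describing the workload, which is the point of the question above.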

All the best,

Stan
