public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: xfs@oss.sgi.com
Subject: Re: RAID6 r-m-w, op-journaled fs, SSDs
Date: Sat, 30 Apr 2011 22:17:38 -0500	[thread overview]
Message-ID: <4DBCD0D2.3030109@hardwarefreak.com> (raw)
In-Reply-To: <201104302350.32287@zmi.at>

On 4/30/2011 4:50 PM, Michael Monnerie wrote:
> On Samstag, 30. April 2011 Stan Hoeppner wrote:
>> Poor cache management, I'd guess, is one reason why you see Areca
>> RAID  cards with 1-4GB cache DRAM whereas competing cards w/ similar
>> price/performance/features from LSI, Adaptec, and others sport
>> 512MB.
>
> On one server (a XenServer host virtualized with ~14 Linux VMs) that
> suffered from slow I/O on RAID-6 during busy periods, I upgraded the
> cache from 1G to 4G on an Areca ARC-1260 controller (somewhat
> outdated now) and couldn't see any advantage. Maybe it would have been
> measurable, but the damn thing was still pretty slow, so adding more
> hard disks is still a better option than upgrading the cache.
>
> Just for documentation in case someone sees slow I/O on Areca: more
> spindles rock. That server had 8x 10krpm WD Raptor 150G drives at the
> time.

As with CPUs, more cache can only take you so far.  The benefit of a 
given cache size, locality (on/off chip), and algorithm is often very 
workload dependent, and the same is true of RAID controller cache.

Adding controller cache can benefit some workloads, depending on the 
controller make/model, but I agree with you that adding spindles, or 
swapping to faster spindles (say 7.2k to 15k rpm, or SSD), will 
typically benefit all workloads.  However, given that DIMMs are so cheap 
compared to hot swap disks, maxing out controller cache on models that 
have DIMM slots is an inexpensive first step when faced with an I/O 
bottleneck.
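As a back-of-the-envelope illustration of the "more spindles" argument (the per-drive IOPS figures below are rule-of-thumb assumptions, not measurements from any of the hardware discussed), random I/O capacity of a striped array scales roughly linearly with spindle count:

```python
# Rough model: random-read IOPS of a striped array scales with spindles.
# Per-drive figures are rule-of-thumb assumptions, not measurements.
def array_iops(spindles, per_drive_iops):
    """Ideal aggregate random IOPS, ignoring controller overhead."""
    return spindles * per_drive_iops

raptor_10k = 140   # assumed ~140 random IOPS for a 10k rpm drive
sas_15k    = 180   # assumed ~180 random IOPS for a 15k rpm drive

print(array_iops(8, raptor_10k))    # 8 spindles: 1120
print(array_iops(16, raptor_10k))   # doubling spindles doubles IOPS: 2240
print(array_iops(8, sas_15k))       # faster spindles lift every workload: 1440
```

Cache, by contrast, only helps until the working set overflows it, which is why going from 1G to 4G can show no visible gain on a random-heavy workload.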

Larger controller cache seemed to have more positive impact on the SCSI 
RAID controllers of the mid/late 90s than on modern controllers.  The 
difference between 8MB and 64MB was substantial with many workloads back 
then.  The shared SCSI bus forced sequential access to all 15 drives on 
the bus, which would tend to explain why more cache made a big 
difference: it masked the bus latencies.  SAS/SATA allows concurrent 
access to all drives simultaneously (assuming no expanders) without 
those latencies, which may explain why on many modern SAS/SATA 
controllers the difference between 512MB and 1GB isn't nearly as 
profound, if it exists at all.
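A toy model of that bus difference (purely illustrative; the 0.5 ms transfer time is an assumed figure, not a spec value): on a shared bus, concurrent transfers queue behind one another, while point-to-point links let them proceed in parallel.

```python
# Toy model (illustrative assumptions only): wall-clock time for every
# drive to complete one transfer, shared bus vs point-to-point links.
def shared_bus_time(drives, transfer_ms):
    # Shared SCSI bus: transfers serialize, so the last drive waits
    # behind all the others.
    return drives * transfer_ms

def point_to_point_time(drives, transfer_ms):
    # SAS/SATA without expanders: each drive has a dedicated link,
    # so transfers overlap.
    return transfer_ms

print(shared_bus_time(15, 0.5))      # 15 drives on one bus: 7.5 ms
print(point_to_point_time(15, 0.5))  # dedicated links: 0.5 ms
```

Cache masks the first number; there is much less latency left for it to mask in the second case.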

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 11+ messages
2011-04-30 15:27 RAID6 r-m-w, op-journaled fs, SSDs Peter Grandi
2011-04-30 16:02 ` Emmanuel Florac
2011-04-30 19:54   ` Stan Hoeppner
2011-04-30 21:50     ` Michael Monnerie
2011-05-01  3:17       ` Stan Hoeppner [this message]
2011-05-01  9:14       ` Emmanuel Florac
2011-05-01  9:11     ` Emmanuel Florac
2011-04-30 22:27 ` NeilBrown
2011-05-01 15:31   ` Peter Grandi
2011-05-01 18:32     ` David Brown
2011-05-01  9:36 ` Dave Chinner
