linux-raid.vger.kernel.org archive mirror
From: David Brown <david.brown@hesbynett.no>
To: vincent Ferrer <vincentchicago1@gmail.com>
Cc: stan@hardwarefreak.com, linux-raid@vger.kernel.org
Subject: Re: raid5 to utilize upto 8 cores
Date: Fri, 17 Aug 2012 09:52:40 +0200	[thread overview]
Message-ID: <502DF848.6050903@hesbynett.no> (raw)
In-Reply-To: <CAEyJA_sfAUk9PNFfss9m=pAi90Pb0gTHo9seV-BDexr3pLVvxg@mail.gmail.com>

On 17/08/2012 00:11, vincent Ferrer wrote:
> On Wed, Aug 15, 2012 at 10:58 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 8/15/2012 9:56 PM, vincent Ferrer wrote:
>>
>>> - My storage server has up to 8 cores, running linux kernel 2.6.32.27.
>>> - I created a raid5 device of 10 SSDs.
>>
>> No, it is not normal practice.  I 'preach' against it regularly when I
>> see OPs doing it.  It's quite insane.
>>
>> There are a couple of sane things you can do today to address your problem:
>>
>> Stan
>>
>
> Hi Stan,
> Follow-up question for 2 types of setups I may have to prepare:
> 1) Setup A has 80 SSDs.  Question: Should I still create one raid5
> device, or should I create 8 raid5 devices each having 10 SSDs?
>     My linux-based storage server may be accessed by up to 10-20
> physically different clients.
>

I have difficulty imagining the sort of workload that would justify 80 
SSDs.  Certainly you have to think about far more than just the disks or 
the raid setup - you would be looking at massive network bandwidth, 
multiple servers with large PCI express buses, etc.  Probably you would 
want dedicated SAN hardware of some sort.  Otherwise you could get 
pretty much the same performance and capacity using 10 hard disks (and 
maybe a little extra RAM to improve caching).

But as a general rule, you want to limit the number of disks (or 
partitions) you have in a single raid5 to perhaps 6 devices.  With too 
many devices, you increase the probability that you will get a failure, 
and then a second failure during a rebuild.  You can use raid6 for extra 
protection - but that also (currently) suffers from the single-thread 
bottleneck.
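
As a rough sketch of what I mean (the device names and chunk size below 
are just placeholders, not a recommendation for your hardware):

   # one 6-device raid5 (5 data + 1 parity)
   mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]

   # or raid6 (4 data + 2 parity) for the extra protection
   mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=64 /dev/sd[b-g]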

Remember also that raid5 (or raid6) requires a read-modify-write (RMW) 
cycle for updates larger than a single block but smaller than a full 
stripe - that means it has to read back existing data from the array (in 
the worst case, from every other disk) before it can write.  The wider 
the array, the bigger this effect is.
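
To put rough numbers on that (a simplified model that ignores the stripe 
cache and assumes a 64 KiB chunk): for a write covering k of the D data 
chunks in a stripe, md can either read the k old data chunks plus the 
old parity (k+1 reads), or read the remaining D-k chunks and recompute 
parity from scratch (D-k reads), whichever needs fewer reads.  For a 
256 KiB write (4 chunks):

   10-device raid5 (D=9):  min(4+1, 9-4) = 5 chunk reads before writing
    6-device raid5 (D=5):  min(4+1, 5-4) = 1 chunk read  before writing

so the narrower array pays much less read overhead for the same 
partial-stripe write.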

>   2) Setup B has only 12 SSDs.  Question: Is it more practical to
> have only one raid5 device, even though I may have 4-5 physically
> different clients, or to create 2 raid5 devices each having 6 SSDs?

Again, I would put only 6 disks in a raid5.
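
If you do split the 12 SSDs into two 6-disk raid5 arrays, you can still 
present them to the clients as a single block device by striping the two 
arrays together (raid50).  A minimal sketch, again with placeholder 
device names:

   mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[b-g]
   mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sd[h-m]
   # optional: one raid0 across the two raid5 arrays
   mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2

A side benefit is that each raid5 array gets its own md kernel thread, 
which helps with the single-core bottleneck you started this thread 
about.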

>
> Reason I am asking is that I have seen enterprise storage arrays from
> EMC/IBM where a new raid5 device is created on demand (the storage
> firmware may spread it automatically across all the available
> drives/spindles, or the drives can be intelligently selected by the
> storage admin after analyzing the workload, to avoid hot-spots).
>
> Partitioning was only done because I am still waiting for budget
> approval to buy SSDs.
>
> regards
> vincy


Thread overview: 17+ messages
2012-08-16  2:56 raid5 to utilize upto 8 cores vincent Ferrer
2012-08-16  5:58 ` Stan Hoeppner
2012-08-16  7:03   ` Mikael Abrahamsson
2012-08-16  7:52   ` David Brown
2012-08-16 15:47     ` Flynn
2012-08-17  7:15     ` Stan Hoeppner
2012-08-17  7:29       ` David Brown
2012-08-17 10:52         ` Stan Hoeppner
2012-08-17 11:47           ` David Brown
2012-08-18  4:55             ` Stan Hoeppner
2012-08-18  8:59               ` David Brown
     [not found]   ` <CAEyJA_ungvS_o6dpKL+eghpavRwtY9eaDNCRJF0eUULoC0P6BA@mail.gmail.com>
2012-08-16  8:55     ` Stan Hoeppner
2012-08-16 22:11   ` vincent Ferrer
2012-08-17  7:52     ` David Brown [this message]
2012-08-17  8:29     ` Stan Hoeppner
     [not found] ` <CAD9gYJLwuai2kGw1D1wQoK8cOvMOiCCcN3hAY=k_jj0=4og3Vg@mail.gmail.com>
     [not found]   ` <CAEyJA_tGFtN2HMYa=vDV7m9N8thA-6MJ5TFo20X1yEpG3HQWYw@mail.gmail.com>
     [not found]     ` <CAD9gYJK09kRMb_v25uwmG7eRfFQLQyEd4SMXWBSPwYkpP56jcw@mail.gmail.com>
2012-08-16 21:51       ` vincent Ferrer
2012-08-16 22:29         ` Roberto Spadim
