linux-raid.vger.kernel.org archive mirror
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Veedar Hokstadt <veedar@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Is a raid0 512 byte chunk size possible? Or is it just too small?
Date: Fri, 30 Aug 2013 16:33:31 -0500	[thread overview]
Message-ID: <52210FAB.7060606@hardwarefreak.com> (raw)
In-Reply-To: <CANb3qHcczcKrCOPWy7MfY2aP_0V2c5KkQa6zzm5f1J7Qx20W8w@mail.gmail.com>

On 8/30/2013 2:32 PM, Veedar Hokstadt wrote:
> Hello,
> I would like to use mdadm to set up a raid0 with a 512B chunk size.
> 
> I ask as my purpose is to mimic a raid0 config from a Lacie NAS box
> that uses a 512B chunk size.

Your premise is flawed.  Why would you want to imitate a configuration
that is itself poorly designed?

> The lowest chunk value mdadm will accept is 4. Anything less and mdadm
> gives an error "invalid chunk/rounding value"

For good reason: mdadm's --chunk value is in KiB, and md doesn't
support chunks smaller than 4 KiB.

> Is there any way to create a raid0 with a 512B chunk?

First, if you're using RAID0 it absolutely must be assumed that you
desire maximum speed, care nothing for redundancy, and can afford to
lose the array's contents when a disk fails because you keep a full
backup of the RAID0 filesystem.

If you want speed, a RAID0 with a 512 byte chunk isn't going to
achieve it.  On the contrary, such a small chunk will drop a hammer on
your throughput, because the md layer must process a far larger number
of IOs per quantity of data transferred.  With RAID0 you typically
want a very large chunk,
the largest your drives can ingest efficiently in a single IO.  I'll
make an educated assumption that you plan to store media files on this
array, probably DVDs/CDs, and/or use it as a DVR.  In this case you want
a large chunk, 512KB-1MB.
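The cost of a tiny chunk is easy to quantify.  A minimal sketch (the
chunk sizes below are illustrative, not measurements) counting how many
stripe units the md layer must handle to move 1 MiB:

```shell
#!/bin/sh
# How many stripe units does 1 MiB of data break into at various chunk
# sizes?  (Illustrative arithmetic only.)
transfer=$((1024 * 1024))            # 1 MiB of sequential data
for chunk in 512 4096 524288; do     # 512 B, 4 KiB (mdadm's minimum), 512 KiB
    units=$((transfer / chunk))
    echo "chunk=${chunk}B -> ${units} stripe units per MiB"
done
```

With a 512 B chunk every megabyte is split into 2048 pieces to map and
merge; with a 512 KiB chunk it's 2, which is why the large chunk wins
for sequential media workloads.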

However, you've stated you want to duplicate a NAS device.  Consider
that GbE using Realtek RTL81xx NICs tops out at ~70-90 MB/s of
application-level throughput.  Two modern drives in RAID0 with a proper
chunk size can
read/write at double that rate.  Given this fact, why are you bothering
with RAID0?  You won't see any of the increased performance RAID0 can
give you.  In fact a single modern drive can saturate GbE.
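The bottleneck argument above can be put in numbers (the throughput
figures are assumptions taken from the text, not measurements):

```shell
#!/bin/sh
# Compare the GbE application-level ceiling to local disk throughput.
gbe=90                # ~MB/s usable over GbE (assumed figure from above)
drive=150             # ~MB/s sequential for one modern drive (assumption)
raid0=$((2 * drive))  # two drives striped with a proper chunk size
echo "GbE ceiling:    ${gbe} MB/s"
echo "single drive:  ~${drive} MB/s"
echo "2-drive RAID0: ~${raid0} MB/s"
if [ "$drive" -ge "$gbe" ]; then
    echo "the network, not the array, is the bottleneck"
fi
```

Whatever the array can do locally, the NAS client never sees more than
the GbE ceiling.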

I assume you are using RAID0 simply as an inexpensive way to maximize
your storage capacity.  That's fine with backups or if you have the
original media.  If you don't, or don't want to go through the hassle of
recreating your RAID0 after a disk failure and replacement, and copying
all your files back to it, I suggest you use RAID1/5/6/10 instead.
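For reference, the two setups might look like this with mdadm (a sketch
only; /dev/sda1 and /dev/sdb1 are placeholder devices and the chunk size
is a suggestion, adjust for your drives):

```shell
# RAID0 with a large chunk (--chunk is in KiB, so 512 = 512 KiB):
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 \
    /dev/sda1 /dev/sdb1

# Or trade half the capacity for redundancy with RAID1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/sdb1
```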

-- 
Stan





Thread overview: 7+ messages
2013-08-30 19:32 Is a raid0 512 byte chunk size possible? Or is it just too small? Veedar Hokstadt
2013-08-30 21:33 ` Stan Hoeppner [this message]
2013-08-31  0:58   ` Marcus Sorensen
2013-08-31  5:05     ` Veedar Hokstadt
2013-08-31  6:40       ` Stan Hoeppner
2013-08-31  7:10       ` Roman Mamedov
2013-09-07 19:57         ` Veedar Hokstadt
