From: Thomas Fjellstrom <thomas@fjellstrom.ca>
To: Chris Murphy <lists@colorremedies.com>
Cc: linux-raid Raid <linux-raid@vger.kernel.org>
Subject: Re: recommended way to add ssd cache to mdraid array
Date: Fri, 11 Jan 2013 05:20:07 -0700 [thread overview]
Message-ID: <201301110520.07128.thomas@fjellstrom.ca> (raw)
In-Reply-To: <6FD79D70-01A4-4DC9-9494-157773CD0F11@colorremedies.com>
On Thu Jan 10, 2013, Chris Murphy wrote:
> On Jan 10, 2013, at 3:49 AM, Thomas Fjellstrom <thomas@fjellstrom.ca> wrote:
> > A lot of it will be streaming. Some may end up being random reads/writes.
> > The test is just to gauge overall performance of the setup. 600MB/s read
> > is far more than I need, but having writes at 1/3 of that seems odd to me.
>
> Tell us how many disks there are, and what the chunk size is. It could be
> too small if you have too few disks which results in a small full stripe
> size for a video context. If you're using the default, it could be too big
> and you're getting a lot of RMW. Stan, and others, can better answer this.
As stated earlier, it's a 7x2TB array.
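For what it's worth, the back-of-envelope full-stripe math on this array
(assuming mdadm's default 512KiB chunk, which I haven't actually confirmed
on my setup):

```shell
# Full-stripe size for a 7-disk RAID6. Assumption: mdadm's default
# 512KiB chunk (the default since mdadm 3.1).
chunk_kib=512
nr_disks=7
data_disks=$((nr_disks - 2))            # RAID6 spends two disks on parity
full_stripe_kib=$((chunk_kib * data_disks))
echo "full stripe = ${full_stripe_kib} KiB"
```

Writes smaller than that full stripe are what trigger the read-modify-write
penalty on parity RAID.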
> You said these are unpartitioned disks, I think. In which case alignment of
> 4096 byte sectors isn't a factor if these are AF disks.
They are AF disks.
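A quick way to confirm that from the OS side, for anyone curious (sdb is
just a placeholder for one of the member disks):

```shell
# AF disks report a 4096-byte physical sector; 512e drives still expose
# a 512-byte logical sector on top of it.
cat /sys/block/sdb/queue/physical_block_size   # expect 4096 on an AF disk
cat /sys/block/sdb/queue/logical_block_size    # typically 512 (512e)
```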
> Unlikely to make up the difference is the scheduler. Parallel fs's like XFS
> don't perform nearly as well with CFQ, so you should have a kernel
> parameter elevator=noop.
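For anyone following along, the scheduler can also be switched per disk at
runtime rather than via the kernel command line (sdb is a placeholder for
one member disk):

```shell
# Show the active scheduler (the bracketed entry) for one member disk:
cat /sys/block/sdb/queue/scheduler
# Switch to noop for this boot only; elevator=noop makes it the default:
echo noop > /sys/block/sdb/queue/scheduler
```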
>
> Another thing to look at is md/stripe_cache_size which probably needs to be
> higher for your application.
I'll look into it.
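A sketch of what I'll try, going by the md sysfs docs (md0 is a
placeholder; 4096 is an arbitrary starting point, not a tuned value):

```shell
# Current value; the default is 256 (in pages, i.e. 4KiB units, per device):
cat /sys/block/md0/md/stripe_cache_size
# Raise it; the RAM cost is roughly stripe_cache_size * 4KiB * nr_disks,
# so 4096 on a 7-disk array is about 112MiB:
echo 4096 > /sys/block/md0/md/stripe_cache_size
```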
> Another thing to look at is if you're using XFS, what your mount options
> are. Invariably with an array of this size you need to be mounting with
> the inode64 option.
I'm not sure, but I think that's the default on recent kernels.
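Easy enough to check, though (/mnt/array is a placeholder for the mount
point). As far as I know inode64 only became the XFS default in kernel
3.7, so anything older needs it passed explicitly:

```shell
# Show the active mount options for the filesystem:
grep /mnt/array /proc/mounts
# If inode64 isn't listed, add it to fstab and apply it with a remount:
mount -o remount,inode64 /mnt/array
```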
> > The reason I've selected RAID6 to begin with is I've read (on this
> > mailing list, and on some hardware tech sites) that even with SAS
> > drives, the rebuild/resync time on a large array using large disks
> > (2TB+) is long enough that it gives more than enough time for another
> > disk to hit a random read error,
>
> This is true for high density consumer SATA drives. It's not nearly as
> applicable for low to moderate density nearline SATA which has an order of
> magnitude lower UER, or for enterprise SAS (and some enterprise SATA)
> which has yet another order of magnitude lower UER. So it depends on the
> disks, and the RAID size, and the backup/restore strategy.
Plain old Seagate Barracudas, so not the best, but at least they aren't Greens.
> Another way people get into trouble with the event you're talking about, is
> they don't do regular scrubs or poll drive SMART data. I have no empirical
> data, but I'd expect much better than order of magnitude lower array loss
> during a rebuild when the array is being properly maintained, rather than
> considering it a push button "it's magic" appliance to be forgotten about.
Debian seems to set up a weekly or monthly scrub by default, which I've left
enabled precisely because of that failure mode.
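If I remember right, the Debian job is the mdadm package's monthly
checkarray cron entry; the same scrub can be kicked off by hand (md0 is a
placeholder):

```shell
# Start a scrub manually; this is what the checkarray cron job does:
echo check > /sys/block/md0/md/sync_action
# Watch progress:
cat /proc/mdstat
# Mismatch count after the check finishes (nonzero warrants a closer look):
cat /sys/block/md0/md/mismatch_cnt
```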
>
> Chris Murphy--
--
Thomas Fjellstrom
thomas@fjellstrom.ca