From: Brad Campbell <brad@wasp.net.au>
To: Gordon Henderson <gordon@drogon.net>
Cc: linux-raid@vger.kernel.org
Subject: Re: Spare disk could not sleep / standby
Date: Wed, 09 Mar 2005 09:11:38 +0400
Message-ID: <422E858A.5090602@wasp.net.au>
In-Reply-To: <Pine.LNX.4.56.0503081133140.4055@lion.drogon.net>
Gordon Henderson wrote:
>
> I'm in the middle of building up a new home server - looking at RAID-5 or
> 6 and 2.6.x, so maybe it's time to look at all this again, but it sounds
> like the auto superblock update might thwart it all now...
Nah... As far as I can tell, 20ms after the last write the auto superblock update will mark the
array as clean. You can then spin the disks down as you normally would after a delay; it's just like
a normal write. There's some overhead, I guess, in that prior to the next write it's going to mark the
superblocks as dirty again. I wonder in your case whether that would spin up *all* the disks at once,
or do a staged spin-up, given it's going to touch all the disks "at the same time"?
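The spin-down side is just ordinary hdparm usage, by the way. A rough sketch, with the device
names purely illustrative (adjust for your own drives):

    # set a standby (spin-down) timeout of ~20 minutes per drive (240 * 5s units)
    for d in /dev/sd[a-d]; do hdparm -S 240 $d; done

    # or force an immediate spin-down once the array has gone clean
    hdparm -y /dev/sda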
I have my Raid-6 with ext3 and a commit time of 30s. With an idle system, it really stays idle.
Nothing touches the disks. If I wanted to spin them down, I could do that.
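That's just the stock ext3 commit interval bumped up via the mount option; something like this
in /etc/fstab (device and mount point illustrative):

    /dev/md0  /data  ext3  defaults,commit=30  0  2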
The thing I *love* about this feature is that when I do something totally stupid and panic the box, 90%
of the time I don't need a resync, as the array was marked clean after the last write. Thanks Neil!
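You can see the state for yourself after the dust settles; assuming the array is /dev/md0:

    mdadm --detail /dev/md0 | grep -i state
    # State : clean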
Just for yuks, here are a couple of photos of my latest Frankenstein: 3TB of Raid-6 in a Midi-Tower
case. I had to re-wire the PSU internally to export an extra 12V rail to an appropriate place, though.
I have been beating Raid-6 senseless for the last week now, doing horrid things to the hardware.
I'm now completely confident in its stability and ready to use it in production. Thanks HPA!
http://www.wasp.net.au/~brad/nas/nas-front.jpg
http://www.wasp.net.au/~brad/nas/nas-psu.jpg
http://www.wasp.net.au/~brad/nas/nas-rear.jpg
http://www.wasp.net.au/~brad/nas/nas-side.jpg
Regards,
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
Thread overview: 20+ messages
2005-03-08 4:05 Spare disk could not sleep / standby Peter Evertz
2005-03-08 4:14 ` Guy
2005-03-08 4:40 ` Neil Brown
2005-03-08 5:20 ` Molle Bestefich
2005-03-08 5:36 ` Neil Brown
2005-03-08 5:46 ` Molle Bestefich
2005-03-08 6:03 ` Neil Brown
2005-03-08 6:24 ` Molle Bestefich
[not found] ` <422D625C.5020803@medien.uni-weimar.de>
2005-03-08 8:57 ` Molle Bestefich
2005-03-08 10:51 ` Tobias Hofmann
2005-03-08 13:13 ` Gordon Henderson
2005-03-09 5:11 ` Brad Campbell [this message]
2005-03-09 9:03 ` Tobias Hofmann
2005-03-08 8:51 ` David Greaves
2005-03-08 15:59 ` Mike Tran
2005-03-09 15:53 ` Spare disk could not sleep / standby [probably dangerous PATCH] Peter Evertz
2005-03-09 10:44 ` Mike Tran
2005-03-09 20:05 ` Peter Evertz
2005-03-09 16:29 ` Mike Tran
2005-03-09 23:20 ` Peter Evertz