linux-raid.vger.kernel.org archive mirror
From: Roger Heflin <rogerheflin@gmail.com>
To: Tony Germano <tony_germano@hotmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Proposal: non-striping RAID4
Date: Fri, 23 May 2008 10:12:38 -0500	[thread overview]
Message-ID: <4836DEE6.4030300@gmail.com> (raw)
In-Reply-To: <BAY130-W418FFF9E3F58536967DDEAE1C60@phx.gbl>

Tony Germano wrote:
> I would like to bring this back to the attention of the group (from November 2007) since the conversation died off and it looks like a few key features important to me were left out of the discussion... *grin*
> 
> The original post was regarding "unRAID" developed by http://lime-technology.com/
> 
> I had an idea in my head, and "unRAID" has features almost identical to what I was thinking about with the exception of a couple deal breaking design decisions. These are due to the proprietary front end, not the modified driver.
> 
> Bad decision #1) Implementation is for a NAS appliance. Files are only accessible through a Samba share. (Though this is great for the hordes of people who use it as network storage for their Windows Media Center PCs.)
> 
> Bad decision #2) Imposed ReiserFS.
> 
> Oh yeah, and it's not free in either sense of the word.
> 
> The most relevant uses I can think of for this type of array are archive storage and low use media servers. Keeping that in mind...
> 
> Good Thing #1)
> "JBOD with parity." Each usable disk is seen separately and has its own filesystem. This allows mixed sized disks and replacing older smaller drives with newer larger ones one at a time while utilizing the extra capacity right away (after expanding the filesystem.) In the event that two or more disks are lost, surviving non-parity disks still have 100% of their data. (Adding a new disk larger than the parity disk is possible, but takes multiple steps of converting it to the new parity disk and then adding the old parity disk back to the array as a regular disk... acceptable to me)
> 
> Good Thing #2)
> You can spin down idle disks. Since there is no data striping and file systems don't [have to] span drives, reading a file only requires 1 disk to be spinning. Writing only requires 1 disk + parity disk. This is an important feature to the "GREEN" community. On my mythtv server, I only record a few shows each week. I would have disks in this setup possibly not accessed for weeks or even months at a time. They don't need to be spinning, and performance is of no importance to me as long as it can keep up with writing HD streams.
> 
> Hopefully this brings a new perspective to the idea.
> 
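The "JBOD with parity" layout quoted above can be sketched in a few lines. This is a minimal illustration of the idea, not unRAID's or md's actual implementation: parity is the byte-wise XOR of every data member at each offset, with members shorter than the parity disk treated as zero-padded, so any single lost member is recoverable by XOR-ing the parity with the survivors.

```python
def parity_block(data_blocks, block_size):
    """XOR corresponding blocks together; blocks from smaller disks may be
    shorter than block_size and are treated as zero-padded."""
    out = bytearray(block_size)
    for blk in data_blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def rebuild_block(parity, surviving_blocks, block_size):
    """Recover a lost member's block: XOR the parity with all survivors.
    (XOR is its own inverse, so this is just parity_block again.)"""
    return parity_block([parity] + list(surviving_blocks), block_size)
```

Because the padding bytes are zeros, a mixed-size array works naturally: the parity disk only needs to be at least as large as the largest data member, which is why replacing the parity disk is the multi-step case mentioned above.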

I would think (for mythtv and similar uses) that the way to handle this would 
be to set up the RAID array like an enterprise-class offline storage system: 
you keep a local disk cache of, say, 10-20GB, and when something is accessed you 
spin up the array and copy the entire file in; on writing, every hour or so (or 
any time you have to spin up the array anyway) you copy all or part of the file 
from the cache onto the array.   This would require either proper hooks in the 
kernel to deal with the offline storage concept, or software at the application 
level to do it.
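In outline, the application-level version of that staging cache might look like the sketch below. The paths and the flush policy are hypothetical, purely for illustration; a real version would also need to trigger the array's spin-up/spin-down and evict cold files from the cache.

```python
import os
import shutil

def fetch(relpath, cache, array):
    """Serve a read from the always-on cache disk, faulting the whole
    file in from the (normally spun-down) array on a cache miss."""
    cached = os.path.join(cache, relpath)
    if not os.path.exists(cached):
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        shutil.copy2(os.path.join(array, relpath), cached)  # spin-up happens here
    return cached

def flush(cache, array):
    """Run hourly, or whenever the array is spun up anyway: push any
    cache file that is newer than (or absent from) the array copy."""
    for root, _dirs, files in os.walk(cache):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, cache)
            dst = os.path.join(array, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)
```

Since copy2 preserves timestamps, files staged in by fetch() are not copied back out by the next flush(), so reads never cause extra array writes.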

I know most of the offline storage systems have all of the files showing on a 
filesystem with proper metadata, but the file data is actually elsewhere, and 
when an application accesses a file the offline system (behind the scenes) 
brings the file data back from the offline storage onto the cache.   With this 
on a myth system I would expect the array to be spun up less than once per hour, 
for under 10 minutes (my array does 35MB/s write and 90MB/s read, so a 1.5GB 
recording would take about 45 seconds to copy from cache to array, and reading a 
recording back would take under 20 seconds to copy from array to cache, plus 
spin-up time), so at most the array would actually be spun up for maybe 3-5 
minutes per hour under heavy usage, and probably not spun up at all when nothing 
was being used.    My array uses about 40W for the 4 disks, so being spun down 
23 hours a day would save about 1kWh per day.  At the low power rate I pay 
($0.07/kWh) that comes to about $25 per year in power; in some more expensive 
states it would be 2-3 times that, and probably higher in Europe.
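Those back-of-the-envelope numbers can be checked directly (assuming 1.5GB means 1536MB and the array is idle 23 hours a day):

```python
recording_mb = 1.5 * 1024            # a 1.5GB recording, in MB

write_s = recording_mb / 35          # cache -> array at 35 MB/s  (~44 s)
read_s = recording_mb / 90           # array -> cache at 90 MB/s  (~17 s)

saved_kwh_per_day = 40 / 1000 * 23   # 40 W avoided for 23 h/day  (~0.92 kWh)
usd_per_year = saved_kwh_per_day * 365 * 0.07   # at $0.07/kWh    (~$23.5)
```

So the quoted figures (45 s, under 20 s, about 1 kWh/day, roughly $25/year) all hold to within rounding.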

The big question is would the disks survive being spun down that often?

                             Roger


Thread overview: 17+ messages
2008-05-22 21:15 Proposal: non-striping RAID4 Tony Germano
2008-05-22 22:10 ` David Lethe
2008-05-22 22:56   ` Tony Germano
2008-05-23 15:12 ` Roger Heflin [this message]
2008-05-23 15:47 ` Chris Green
2008-05-24 14:21   ` Bill Davidsen
2008-05-24 14:19     ` Chris Green
2008-05-28 23:14       ` Bill Davidsen
2008-05-30 17:23         ` Tony Germano
  -- strict thread matches above, loose matches on Subject: below --
2007-11-23 15:58 Chris Green
2007-11-10  0:57 James Lee
2007-11-12  1:29 ` Bill Davidsen
2007-11-13 23:48   ` James Lee
2007-11-14  1:06     ` James Lee
2007-11-14 23:16       ` Bill Davidsen
2007-11-15  0:24         ` James Lee
2007-11-15  6:01           ` Neil Brown
