linux-raid.vger.kernel.org archive mirror
From: Bill Davidsen <davidsen@tmr.com>
To: Light King <thelightking@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Doubt
Date: Mon, 09 Nov 2009 12:48:58 -0500
Message-ID: <4AF8560A.4020001@tmr.com>
In-Reply-To: <d3540f4d0911042326o754d5f06qf958d8ab5b309e86@mail.gmail.com>

Light King wrote:
> I have four CF cards and one PCI-based CF card controller (an Addonics
> card with four CF slots, using the pata_sil6800 driver). When I connect
> the four CF cards to the Addonics card and plug the whole package into
> a PCI slot of a Linux PC, it shows up as four separate block devices.
> Using mdadm 2.6.3 software RAID, I create a RAID 0 device from them. If
> one CF card in this package fails, the RAID device becomes inactive,
> and trying to reactivate it with "mdadm -R" gives the error "memory
> cannot be allocated for the raid device". I tried the same thing with
> RAID 10 (our hardware only supports RAID levels 0, 1, and 10), and
> there, if one CF card fails, we are able to reactivate the RAID device.
> But the problem we are facing with RAID 10 is that it takes 50% of the
> total space (2 CF cards out of 4) for mirroring, which is a loss for us.
>
> So we don't want any kind of data recovery in our RAID device (like
> RAID 0), but we do want the RAID device to keep running, or to be
> reactivated without error, when one CF card fails (like RAID 10), while
> still being able to use the total disk space (like RAID 0).
>
> or
>
> Alternatively, is there any way to increase the usable storage size of
> a RAID 10 array? 50% is wasted on mirroring, and our hardware does not
> support RAID 5.
>   

If I understand what you are asking, when one part of your array fails, 
you want to throw away all the data on all the devices and create a new 
array using the remaining functional devices. I guess you could run a 
script to do that, but only if you put the controller in JBOD mode so 
software raid can manipulate the individual devices. Then you could use 
the drive fail event to trigger the script.
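
Something along these lines might serve as that handler (an untested
sketch; the handler path and the /dev/sd[a-d]1 device names are
placeholders, not anything from your setup):

```shell
#!/bin/sh
# Sketch of a handler for: mdadm --monitor --program=/usr/local/sbin/raid0-rebuild
# mdadm invokes the program as: handler EVENT MD_DEVICE [COMPONENT_DEVICE]

# Print every device in the list except the failed one.
survivors() {
    failed=$1; shift
    for dev in "$@"; do
        [ "$dev" = "$failed" ] || printf '%s\n' "$dev"
    done
}

handle_event() {
    event=$1; md_dev=$2; failed=$3
    [ "$event" = "Fail" ] || return 0

    # The raid0 is dead anyway; stop it so its devices are free.
    mdadm --stop "$md_dev"

    # Recreate a fresh, empty raid0 from whatever still responds.
    # /dev/sd[a-d]1 are placeholder names -- substitute your CF devices.
    keep=$(survivors "$failed" /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1)
    count=$(printf '%s\n' "$keep" | wc -l)
    mdadm --create "$md_dev" --run --level=0 --raid-devices="$count" $keep
}

# Only act when mdadm actually passed an event.
[ $# -ge 2 ] && handle_event "$@"
```

You would wire it up either with "mdadm --monitor --scan
--program=/usr/local/sbin/raid0-rebuild" or with a PROGRAM line in
mdadm.conf. The --run flag on --create suppresses the interactive
confirmation prompt, which matters for a script run unattended.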

If that isn't what you want, have a go at explaining what you want to 
happen when a device fails. Bear in mind that with raid0, when any one 
device fails, all of your data is gone. Period. You have traded 
reliability for capacity and performance, so there is no recovery other 
than starting over with the working bits.

-- 
Bill Davidsen <davidsen@tmr.com>
  "We can't solve today's problems by using the same thinking we
   used in creating them." - Einstein



Thread overview: 6+ messages
2009-11-05  7:26 Doubt Light King
2009-11-05  7:38 ` Doubt Iustin Pop
2009-11-05  7:42 ` Doubt Michael Evans
     [not found]   ` <d3540f4d0911050044w4ff51fddoba0aced44e3988b3@mail.gmail.com>
2009-11-05 16:35     ` Doubt Michael Evans
2009-11-09 17:48 ` Bill Davidsen [this message]
2009-11-09 18:35 ` Doubt Drew
