linux-raid.vger.kernel.org archive mirror
* Doubt
@ 2009-11-05  7:26 Light King
  2009-11-05  7:38 ` Doubt Iustin Pop
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Light King @ 2009-11-05  7:26 UTC (permalink / raw)
  To: linux-raid

I have four CF cards and one PCI-based CF card controller (an Addonics
card using the pata_sil6800 driver).  When I connect the four CF cards
to the Addonics card (which has four CF slots) and plug the whole
assembly into a PCI slot of a Linux PC, it shows up as four separate
block devices.  Using mdadm 2.6.3 (software RAID) I then create a RAID
device at level 0.  If one CF card in this package fails, the RAID
device becomes inactive, and if I try to restart it with "mdadm -R" I
get the error "memory cannot be allocated for the raid device".  I
tried the same thing with RAID 10 (our hardware only supports RAID
levels 0, 1 and 10), and there, if one CF card fails, we are able to
restart the RAID device.  But the problem we face with RAID 10 is that
it uses 50% of the total capacity (2 of the 4 CF cards) for mirroring,
which is a loss for us.
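For reference, the commands we use look roughly like this (the /dev/sd*
names below are just examples, not our exact device names):

  mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm --run /dev/md0        # long form of "mdadm -R /dev/md0"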

So we do not want any kind of data recovery in our RAID device (like
RAID 0), but we do want the RAID device to keep running, or at least be
restartable without error, if one CF card fails (like RAID 10), and we
still want to be able to use the total disk space (like RAID 0).

or

any idea how to increase the usable storage created by RAID 10?  50% is
going to waste on mirroring, and our hardware does not support RAID 5.



How are we simulating a CF card failure?
Answer: following the Software RAID HOWTO
(http://tldp.org/HOWTO/Software-RAID-HOWTO-6.html), switching off the
system and removing a CF card simulates a disk failure.
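(mdadm can also mark a member as failed while the array is running, for
example "mdadm /dev/md0 --fail /dev/sdc" with the device name adjusted
to suit, but we wanted to simulate a real physical failure.)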

Please provide any help you can.  Thank you for your support.

ANSH


* Re: Doubt
  2009-11-05  7:26 Doubt Light King
@ 2009-11-05  7:38 ` Iustin Pop
  2009-11-05  7:42 ` Doubt Michael Evans
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Iustin Pop @ 2009-11-05  7:38 UTC (permalink / raw)
  To: Light King; +Cc: linux-raid

On Thu, Nov 05, 2009 at 12:56:30PM +0530, Light King wrote:
> So we do not want any kind of data recovery in our RAID device (like
> RAID 0), but we do want the RAID device to keep running, or at least be
> restartable without error, if one CF card fails (like RAID 10), and we
> still want to be able to use the total disk space (like RAID 0).

Uh... you don't want to use any space for redundancy but you want redundancy??

iustin


* Re: Doubt
  2009-11-05  7:26 Doubt Light King
  2009-11-05  7:38 ` Doubt Iustin Pop
@ 2009-11-05  7:42 ` Michael Evans
       [not found]   ` <d3540f4d0911050044w4ff51fddoba0aced44e3988b3@mail.gmail.com>
  2009-11-09 17:48 ` Doubt Bill Davidsen
  2009-11-09 18:35 ` Doubt Drew
  3 siblings, 1 reply; 6+ messages in thread
From: Michael Evans @ 2009-11-05  7:42 UTC (permalink / raw)
  To: Light King; +Cc: linux-raid

Your requirements are contradictory.  You want to span all your
devices with a single storage system, but you do not want to use any
devices for redundancy, and yet you expect the file system on them to
remain consistent should any of the devices fail.

That is simply impossible for file systems, which are what block-device
aggregation such as mdadm is designed to support.  Were you to lose any
one of the four devices, portions of the filesystem metadata as well as
your actual data would be missing.  That may be tolerable for special
cases (regularly sampled data, such as sensor output, comes to mind,
when you don't -require- the sensor data but merely want to have it),
however those cases are all application specific, not a general
solution.

One typical way a specific application might use four devices would be
a round-robin method.  A list of currently online devices would be
kept, and each cohesive unit of data would be stored on the next device
in the list.  Should a device be added, the list would grow; should a
device fail (or be removed), it would be taken out of the list.
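A minimal sketch of that idea in shell (the mount points and the
incoming-data directory are assumptions for illustration, not anything
from your setup):

  #!/bin/bash
  # Hypothetical round-robin writer: put each new data file on the next
  # CF card that is still mounted, skipping cards that have dropped out.
  shopt -s nullglob
  mounts=(/mnt/cf0 /mnt/cf1 /mnt/cf2 /mnt/cf3)   # assumed mount points
  i=0
  for f in /data/incoming/*.dat; do              # assumed incoming directory
      for attempt in "${mounts[@]}"; do          # try each card at most once
          dest=${mounts[i % ${#mounts[@]}]}
          i=$((i + 1))
          if mountpoint -q "$dest"; then         # is this card still online?
              cp "$f" "$dest"/ && rm -- "$f"
              break
          fi
      done
  done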

You then have these choices:

1) What I described above.
2) A RAID 0 that gives you 100% of the storage, but requires all
devices working or none.
3) A RAID 1+0 or RAID 10 solution (same idea, different drivers);
you're already trying it and disliking it.
4) RAID 5: you spend more CPU, but you use only one of the devices for
recovery data, so you can tolerate a single failure.
5) Technically you could also have RAID 6, but I'm not counting it
because you're already complaining about losing 50% of your capacity,
and it has the additional drawback of being slower (but it survives the
loss of literally any 2 devices, instead of any 1 device of the correct
set).


* Re: Doubt
       [not found]   ` <d3540f4d0911050044w4ff51fddoba0aced44e3988b3@mail.gmail.com>
@ 2009-11-05 16:35     ` Michael Evans
  0 siblings, 0 replies; 6+ messages in thread
From: Michael Evans @ 2009-11-05 16:35 UTC (permalink / raw)
  To: Light King, linux-raid

You're not looking for something that raid, software or otherwise, can provide.

Given that you are using only 4 devices, the possible redundancy ratios
are 0, 25, 50, 66, or 75%.  Were you using 5 devices, those ratios
would be 0, 20, 40, 50, 60, 66, or 80%.  (The 50% figure can always be
attained with raid10, as can 2 redundant stripes per data stripe (2/3);
I'm ignoring higher multiples of that.)

If you only want a small percentage of redundancy you must look for
other solutions.  My earlier suggestion of using each card individually
and distributing the load with some kind of software solution could
work.  You could also use par2 (Parchive version 2, a.k.a. par2cmdline)
to create redundancy information for one or more files within a
directory; it uses a more general Reed-Solomon code
(http://en.wikipedia.org/wiki/Reed–Solomon_error_correction) to
logically divide a set of input files into a number of byte chunks and
then produce either a rough percentage of redundancy or a specified
number of redundancy blocks (blocks that can be lost from the files in
question).
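For example (the file names and the 10% figure are just placeholders),
the redundancy data can be created and later used for verify/repair
along these lines:

  par2 create -r10 sensors.par2 /mnt/data/*.dat
  par2 verify sensors.par2
  par2 repair sensors.par2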

This won't protect you from device-level failure that compromises the
filesystem, but it will protect against partial device failure.  For
your application merely detecting the existence of a failure may be
sufficient, in which case any number of checksum utilities would be
useful.
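A minimal sketch with standard tools (the paths are placeholders):

  sha1sum /mnt/data/*.dat > /mnt/data/SHA1SUMS
  sha1sum -c /mnt/data/SHA1SUMS   # later: reports which files changed or vanished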

On Thu, Nov 5, 2009 at 12:44 AM, Light King <thelightking@gmail.com> wrote:
> Sir,
>
> Thanks for your valuable reply.  I have some more thoughts.
>
> We want a solution where, if 5 to 10% of the capacity goes to
> redundancy, that is OK for us.  We do not want full recovery of
> permanent data.  We want the array to continue working without any
> disturbance if one card goes bad while the system is running.
>
> Can we run the redundancy handling for the array in RAM (we would have
> to reserve some space in RAM)?  When the system is switched off we do
> not need the previous data to be preserved, but while the system is
> switched on we want the CF cards to work as a cache for our running
> data (not exactly like a RAM operation).
>
> Please give us some ideas.
>
> Ansh
>
>
> On Thu, Nov 5, 2009 at 1:12 PM, Michael Evans <mjevans1983@gmail.com> wrote:
>> Your requirements are contradictory.  You want to span all your
>> devices with a single storage system, but you do not want to use any
>> devices for redundancy, and yet you expect the file system on them to
>> remain consistent should any of the devices fail.
>>
>> That is simply impossible for file systems, which are what block-device
>> aggregation such as mdadm is designed to support.  Were you to lose any
>> one of the four devices, portions of the filesystem metadata as well as
>> your actual data would be missing.  That may be tolerable for special
>> cases (regularly sampled data, such as sensor output, comes to mind,
>> when you don't -require- the sensor data but merely want to have it),
>> however those cases are all application specific, not a general
>> solution.
>>
>> One typical way a specific application might use four devices would be
>> a round-robin method.  A list of currently online devices would be
>> kept, and each cohesive unit of data would be stored on the next device
>> in the list.  Should a device be added, the list would grow; should a
>> device fail (or be removed), it would be taken out of the list.
>>
>> You then have these choices:
>>
>> 1) What I described above.
>> 2) A RAID 0 that gives you 100% of the storage, but requires all
>> devices working or none.
>> 3) A RAID 1+0 or RAID 10 solution (same idea, different drivers);
>> you're already trying it and disliking it.
>> 4) RAID 5: you spend more CPU, but you use only one of the devices for
>> recovery data, so you can tolerate a single failure.
>> 5) Technically you could also have RAID 6, but I'm not counting it
>> because you're already complaining about losing 50% of your capacity,
>> and it has the additional drawback of being slower (but it survives the
>> loss of literally any 2 devices, instead of any 1 device of the correct
>> set).
>>
>


* Re: Doubt
  2009-11-05  7:26 Doubt Light King
  2009-11-05  7:38 ` Doubt Iustin Pop
  2009-11-05  7:42 ` Doubt Michael Evans
@ 2009-11-09 17:48 ` Bill Davidsen
  2009-11-09 18:35 ` Doubt Drew
  3 siblings, 0 replies; 6+ messages in thread
From: Bill Davidsen @ 2009-11-09 17:48 UTC (permalink / raw)
  To: Light King; +Cc: linux-raid

Light King wrote:
> I have four CF cards and one PCI-based CF card controller (an Addonics
> card using the pata_sil6800 driver).  When I connect the four CF cards
> to the Addonics card (which has four CF slots) and plug the whole
> assembly into a PCI slot of a Linux PC, it shows up as four separate
> block devices.  Using mdadm 2.6.3 (software RAID) I then create a RAID
> device at level 0.  If one CF card in this package fails, the RAID
> device becomes inactive, and if I try to restart it with "mdadm -R" I
> get the error "memory cannot be allocated for the raid device".  I
> tried the same thing with RAID 10 (our hardware only supports RAID
> levels 0, 1 and 10), and there, if one CF card fails, we are able to
> restart the RAID device.  But the problem we face with RAID 10 is that
> it uses 50% of the total capacity (2 of the 4 CF cards) for mirroring,
> which is a loss for us.
>
> So we do not want any kind of data recovery in our RAID device (like
> RAID 0), but we do want the RAID device to keep running, or at least be
> restartable without error, if one CF card fails (like RAID 10), and we
> still want to be able to use the total disk space (like RAID 0).
>
> or
>
> any idea how to increase the usable storage created by RAID 10?  50% is
> going to waste on mirroring, and our hardware does not support RAID 5.
>   

If I understand what you are asking, when one part of your array fails, 
you want to throw away all the data on all the devices and create a new 
array using the remaining functional devices. I guess you could run a 
script to do that, but only if you put the controller in JBOD mode so 
software raid can manipulate the individual devices. Then you could use 
the drive fail event to trigger the script.
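A hedged sketch of how that trigger could be wired up (the device
names, member count, and script path are assumptions): run mdadm in
monitor mode and have it call a program on failure events,

  mdadm --monitor --scan --program=/usr/local/sbin/rebuild-md0.sh

where the script stops the degraded array and re-creates a smaller
RAID 0 from the surviving members, destroying whatever was on them:

  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdd /dev/sde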

If that isn't what you want, have a go at explaining what you want to 
happen when a device fails. Bear in mind that with raid0, when any one 
device fails, all of your data is gone. Period. You have traded 
reliability for capacity and performance, so there is no recovery other 
than starting over with the working pieces.

-- 
Bill Davidsen <davidsen@tmr.com>
  "We can't solve today's problems by using the same thinking we
   used in creating them." - Einstein



* Re: Doubt
  2009-11-05  7:26 Doubt Light King
                   ` (2 preceding siblings ...)
  2009-11-09 17:48 ` Doubt Bill Davidsen
@ 2009-11-09 18:35 ` Drew
  3 siblings, 0 replies; 6+ messages in thread
From: Drew @ 2009-11-09 18:35 UTC (permalink / raw)
  To: Light King; +Cc: linux-raid

> So we do not want any kind of data recovery in our RAID device (like
> RAID 0), but we do want the RAID device to keep running, or at least be
> restartable without error, if one CF card fails (like RAID 10), and we
> still want to be able to use the total disk space (like RAID 0).

I honestly don't think there is any sort of setup that will work as
you described. If you want to have an array continue to function with
missing devices, you *will* have to sacrifice some space. The closest
type of block device I can think of which *might* achieve your goal is
spanning. Basically it turns all four drives into one huge drive but
doesn't stripe across them so *some* data will survive a device
failure.

Even if you were to use spanning (assuming mdadm supports it; I haven't
looked) and md doesn't complain when a device drops out, the filesystem
will choke as soon as you try to access data that lived on the missing
device.
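If spanning does turn out to be supported, it would be md's "linear"
level; a minimal create command would look something like this (the
device names are placeholders):

  mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde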

What is the application that needs this configuration?


-- 
Drew

"Nothing in life is to be feared. It is only to be understood."
--Marie Curie

