linux-raid.vger.kernel.org archive mirror
* md software raid
@ 2009-10-23  9:11 ian.d
  2009-10-23  9:38 ` Robin Hill
  0 siblings, 1 reply; 31+ messages in thread
From: ian.d @ 2009-10-23  9:11 UTC (permalink / raw)
  To: linux-raid


hello

As a newbie to the Linux kernel etc., I have a few questions:

How do you tell which version of md software raid is included in a specific
kernel version?
Is there a command to run which will return this info?
Is it possible to install different versions of md in different kernels?

Your advice is appreciated
Thanks
ian.d
-- 
View this message in context: http://www.nabble.com/md-software-raid-tp26023091p26023091.html
Sent from the linux-raid mailing list archive at Nabble.com.



* Re: md software raid
  2009-10-23  9:11 md software raid ian.d
@ 2009-10-23  9:38 ` Robin Hill
  2009-10-27 10:19   ` Ian Docherty
  0 siblings, 1 reply; 31+ messages in thread
From: Robin Hill @ 2009-10-23  9:38 UTC (permalink / raw)
  To: linux-raid


On Fri Oct 23, 2009 at 02:11:09AM -0700, ian.d wrote:

> 
> hello
> 
> As a newbie to Linux kernel etc.
> 
> How do you tell which version of md software raid is included in a specific
> kernel version?
> Is there a command to run which will return this info?
> Is it possible to install different versions of md in different kernels?
> 
There are two separate components to Linux software RAID - the md code in
the kernel, and the mdadm application which interfaces with it.  I very
much doubt the kernel code is versioned at all, other than by the kernel
version (or GIT version).  The mdadm application is versioned, and you
can get the installed version by running 'mdadm -V'.
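
For example, a quick way to see both halves (a hypothetical session - the
output shown below is only illustrative and will differ on your system):

    $ uname -r            # kernel version, i.e. the version of the in-kernel md code
    2.6.28.4
    $ mdadm -V            # userspace tool version
    mdadm - v3.0 - 2nd June 2009
    $ cat /proc/mdstat    # RAID personalities known to the running kernel
    Personalities : [raid1] [raid5] [raid6]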

Some of the mdadm functionality will only work with newer kernel
versions, but you should be able to run any version with any kernel
(within reason anyway) and make use of the core functionality.  There
may be some exceptions here, but they should be detailed in the release
notes for specific versions.

As for changing the md version in the kernel - it may be possible, but
generally there's a lot of work involved in back/forward porting code
between kernel versions (because of changes to core structures, etc).

Is there any particular reason you're asking this (e.g. needing to use
some of the latest functionality with an older kernel)?  You may get
a more definitive answer if you have a specific case.

HTH,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |



* RE: md software raid
  2009-10-23  9:38 ` Robin Hill
@ 2009-10-27 10:19   ` Ian Docherty
  2009-10-27 20:52     ` Bill Davidsen
  0 siblings, 1 reply; 31+ messages in thread
From: Ian Docherty @ 2009-10-27 10:19 UTC (permalink / raw)
  To: 'Robin Hill', linux-raid



> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Robin Hill
> Sent: 23 October 2009 10:38
> To: linux-raid@vger.kernel.org
> Subject: Re: md software raid
> 
> On Fri Oct 23, 2009 at 02:11:09AM -0700, ian.d wrote:
> 
> >
> > hello
> >
> > As a newbie to Linux kernel etc.
> >
> > How do you tell which version of md software raid is included in a
> > specific kernel version?
> > Is there a command to run which will return this info?
> > Is it possible to install different versions of md in different
> kernels?
> >
> There's two separate components to Linux software RAID - the md code in
> the kernel, and the mdadm application which interfaces with it.  I very
> much doubt the kernel code is versioned at all, other than by the
> kernel version (or GIT version).  The mdadm application is versioned,
> and you can get the installed version by running 'mdadm -V'.
> 
> Some of the mdadm functionality will only work with newer kernel
> versions, but you should be able to run any version with any kernel
> (within reason anyway) and make use of the core functionality.  There
> may be some exceptions here, but they should be detailed in the release
> notes for specific versions.
> 
> As for changing the md version in the kernel - it may be possible, but
> generally there's a lot of work involved in back/forward porting code
> between kernel versions (because of changes to core structures, etc).
> 
> Is there any particular reason you're asking this (e.g. needing to use
> some of the latest functionality with an older kernel)?  You may get a
> more definitive answer if you have a specific case.

Thanks, this info was very helpful.

The reason I asked is that I have been following a thread about a 50% MD/XFS
write performance regression between kernel releases, and as performance is
important to me I was thinking it might be better to use an older kernel
(2.6.28.4) while keeping the md/mdadm functionality up to date.
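
For what it's worth, a minimal sketch of that approach (the version number,
URL and paths below are only examples) would be to build a newer mdadm from
source while leaving the kernel's md code alone:

    wget http://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.0.tar.gz
    tar xzf mdadm-3.0.tar.gz
    cd mdadm-3.0
    make
    make install    # as root; replaces the distribution's mdadm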

Ian.d


> 
> HTH,
>     Robin
> --
>      ___
>     ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
>    / / )      | Little Jim says ....                            |
>   // !!       |      "He fallen in de water !!"                 |



* Re: md software raid
  2009-10-27 10:19   ` Ian Docherty
@ 2009-10-27 20:52     ` Bill Davidsen
  2009-10-27 20:55       ` Richard Scobie
  2009-10-28  0:45       ` Leslie Rhorer
  0 siblings, 2 replies; 31+ messages in thread
From: Bill Davidsen @ 2009-10-27 20:52 UTC (permalink / raw)
  To: Ian Docherty; +Cc: 'Robin Hill', linux-raid

Ian Docherty wrote:
>   
>> -----Original Message-----
>> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
>> owner@vger.kernel.org] On Behalf Of Robin Hill
>> Sent: 23 October 2009 10:38
>> To: linux-raid@vger.kernel.org
>> Subject: Re: md software raid
>>
>> On Fri Oct 23, 2009 at 02:11:09AM -0700, ian.d wrote:
>>
>>     
>>> hello
>>>
>>> As a newbie to Linux kernel etc.
>>>
>>> How do you tell which version of md software raid is included in a
>>> specific kernel version?
>>> Is there a command to run which will return this info?
>>> Is it possible to install different versions of md in different
>>>       
>> kernels?
>>     
>> There's two separate components to Linux software RAID - the md code in
>> the kernel, and the mdadm application which interfaces with it.  I very
>> much doubt the kernel code is versioned at all, other than by the
>> kernel version (or GIT version).  The mdadm application is versioned,
>> and you can get the installed version by running 'mdadm -V'.
>>
>> Some of the mdadm functionality will only work with newer kernel
>> versions, but you should be able to run any version with any kernel
>> (within reason anyway) and make use of the core functionality.  There
>> may be some exceptions here, but they should be detailed in the release
>> notes for specific versions.
>>
>> As for changing the md version in the kernel - it may be possible, but
>> generally there's a lot of work involved in back/forward porting code
>> between kernel versions (because of changes to core structures, etc).
>>
>> Is there any particular reason you're asking this (e.g. needing to use
>> some of the latest functionality with an older kernel)?  You may get a
>> more definitive answer if you have a specific case.
>>     
>
> Thanks this info was very helpful.
>
> The reason I asked was because I have been following a thread re MD-XFS 50%
> write performance issue with different kernel releases and as performance is
> important to me  I was thinking it might be better for me to use an older
> kernel (2.6.28.4) but keeping md,mdadm functionality up to date 
>   

That's certainly a valid point, if you are stuck using xfs by preference 
or requirement. I have seen claims that 2.6.32 will be better, but I 
have no intention of testing it, having parted company with xfs a while ago.

-- 
Bill Davidsen <davidsen@tmr.com>
  Unintended results are the well-earned reward for incompetence.



* Re: md software raid
  2009-10-27 20:52     ` Bill Davidsen
@ 2009-10-27 20:55       ` Richard Scobie
  2009-10-27 21:15         ` Christopher Chen
  2009-10-28  0:45       ` Leslie Rhorer
  1 sibling, 1 reply; 31+ messages in thread
From: Richard Scobie @ 2009-10-27 20:55 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Ian Docherty, 'Robin Hill', linux-raid

Bill Davidsen wrote:


> That's certainly a valid point, if you are stuck using xfs by preference 
> or requirement. I have seen claims that 2.6.32 will be better, but I 
> have no intention of testing it, having parted company with xfs a while 
> ago.

The OP of the thread mentioned was able to duplicate the slowdown using 
ext3, so it probably has nothing to do with XFS.

Regards,

Richard




* Re: md software raid
  2009-10-27 20:55       ` Richard Scobie
@ 2009-10-27 21:15         ` Christopher Chen
  0 siblings, 0 replies; 31+ messages in thread
From: Christopher Chen @ 2009-10-27 21:15 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Bill Davidsen, Ian Docherty, Robin Hill, linux-raid

On Tue, Oct 27, 2009 at 1:55 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Bill Davidsen wrote:
>
>
>> That's certainly a valid point, if you are stuck using xfs by preference
>> or requirement. I have seen claims that 2.6.32 will be better, but I have no
>> intention of testing it, having parted company with xfs a while ago.
>
> The OP of the thread metioned, was able to duplicate the slowdown using
> ext3, so it probably has nothing to do with XFS.

It probably has to do with the added writes...if they can try it with
ext2 (yeah, yeah, yuck) they might see something.

Especially if this is raid 5 or 6.

cc

-- 
Chris Chen <muffaleta@gmail.com>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall


* Re: md software raid
  2009-10-28  0:50               ` Richard Scobie
@ 2009-10-27 22:00                 ` Ric Wheeler
  2009-10-28 15:08                   ` Eric Sandeen
  2009-10-28  1:09                 ` Max Waterman
  2009-10-28  1:10                 ` Majed B.
  2 siblings, 1 reply; 31+ messages in thread
From: Ric Wheeler @ 2009-10-27 22:00 UTC (permalink / raw)
  To: Richard Scobie
  Cc: Majed B., Leslie Rhorer, linux-raid, Christoph Hellwig,
	Eric Sandeen, Dave Chinner

On 10/27/2009 08:50 PM, Richard Scobie wrote:
> Majed B. wrote:
>> Indeed xfs_repair doesn't require the abusive amount of memory
>> xfs_check requires.
>>
>> I've been a happy XFS user for a few years now, but the fact the
>> xfsprogs aren't being maintained properly and xfs_check is still a
>> failure, I'm considering other alternatives.
>
> This should change soon, see the September entry:
>
> http://xfs.org/index.php/XFS_Status_Updates
>
> "On the userspace side a large patch series to reduce the memory usage 
> in xfs_repair to acceptable levels was posted, but not yet merged."
>
> Regards,
>
> Richard

There are several people still actively working on both XFS & its tools 
and I am sure that they are interested in hearing about issues :-)

ric



* RE: md software raid
  2009-10-27 20:52     ` Bill Davidsen
  2009-10-27 20:55       ` Richard Scobie
@ 2009-10-28  0:45       ` Leslie Rhorer
  2009-10-28  0:47         ` Majed B.
  1 sibling, 1 reply; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  0:45 UTC (permalink / raw)
  To: linux-raid

> That's certainly a valid point, if you are stuck using xfs by preference
> or requirement. I have seen claims that 2.6.32 will be better, but I
> have no intention of testing it, having parted company with xfs a while
> ago.

	May I ask, "Why?"  I am running XFS on both my video server and its
backup.  Generally I have been very pleased.  The only issue I have had was
trying to fsck the arrays.  I only have 10G of swap space, and XFS requires
much more than that to successfully check file systems in excess of 6T.  It
wasn't a really big deal, however.  I just plugged in a 750G drive, shut
down the default swap, and brought swap up on the large drive.  Once done, I
switched the swap back to the boot drive.
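
Roughly, for anyone curious (device names below are placeholders for my
setup, not a recipe):

    swapoff -a              # stop using the normal swap
    mkswap /dev/sdX1        # prepare a partition on the temporary 750G drive
    swapon /dev/sdX1        # use the big temporary swap
    # ... run the filesystem check here ...
    swapoff /dev/sdX1
    swapon -a               # back to the swap listed in /etc/fstab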



* Re: md software raid
  2009-10-28  0:45       ` Leslie Rhorer
@ 2009-10-28  0:47         ` Majed B.
  2009-10-28  0:52           ` Leslie Rhorer
  0 siblings, 1 reply; 31+ messages in thread
From: Majed B. @ 2009-10-28  0:47 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Leslie,

How do you check xfs? xfs_check? Why not use xfs_repair -n?

On Wed, Oct 28, 2009 at 3:45 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>> That's certainly a valid point, if you are stuck using xfs by preference
>> or requirement. I have seen claims that 2.6.32 will be better, but I
>> have no intention of testing it, having parted company with xfs a while
>> ago.
>
>        May I ask, "Why?"  I am running XFS on both my video server and its
> backup.  Generally I have been very pleased.  The only issue I have had was
> trying to fsck the arrays.  I only have 10G of swap space, and XFS requires
> much more than that to successfully check file systems in excess of 6T.  It
> wasn't a really big deal, however.  I just plugged in a 750G drive, shut
> down the default swap, and turned it up on the large drive.  Once done, I
> switched the swap back to the boot drive.
>
>



-- 
       Majed B.


* Re: md software raid
  2009-10-28  0:58             ` Majed B.
@ 2009-10-28  0:50               ` Richard Scobie
  2009-10-27 22:00                 ` Ric Wheeler
                                   ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Richard Scobie @ 2009-10-28  0:50 UTC (permalink / raw)
  To: Majed B.; +Cc: Leslie Rhorer, linux-raid

Majed B. wrote:
> Indeed xfs_repair doesn't require the abusive amount of memory
> xfs_check requires.
> 
> I've been a happy XFS user for a few years now, but the fact the
> xfsprogs aren't being maintained properly and xfs_check is still a
> failure, I'm considering other alternatives.

This should change soon, see the September entry:

http://xfs.org/index.php/XFS_Status_Updates

"On the userspace side a large patch series to reduce the memory usage 
in xfs_repair to acceptable levels was posted, but not yet merged."

Regards,

Richard


* RE: md software raid
  2009-10-28  0:47         ` Majed B.
@ 2009-10-28  0:52           ` Leslie Rhorer
  2009-10-28  0:58             ` Majed B.
  2009-10-28  3:50             ` Christoph Hellwig
  0 siblings, 2 replies; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  0:52 UTC (permalink / raw)
  To: linux-raid


> Leslie,
> 
> How do you check xfs? xfs_check?

	Yes.

> Why not use xfs_repair -n?

	I guess the short answer is, "I didn't know it would make a
difference".  I take it, then, xfs_repair uses a completely different method
of scanning for errors than xfs_check, one which does not require so much
memory?  I find that a bit surprising.



* Re: md software raid
  2009-10-28  0:52           ` Leslie Rhorer
@ 2009-10-28  0:58             ` Majed B.
  2009-10-28  0:50               ` Richard Scobie
  2009-10-28  3:50             ` Christoph Hellwig
  1 sibling, 1 reply; 31+ messages in thread
From: Majed B. @ 2009-10-28  0:58 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Indeed xfs_repair doesn't require the abusive amount of memory
xfs_check requires.

I've been a happy XFS user for a few years now, but given that the
xfsprogs aren't being maintained properly and xfs_check is still a
failure, I'm considering alternatives.

A filesystem that provides speed and a small footprint on the array
for its master file table is great, but a filesystem whose tools are
maintained and work well in case of data corruption is preferable,
at least to me.

I almost lost 5.5TB worth of data recently and the tools available
made it really hard to fix problems.

On Wed, Oct 28, 2009 at 3:52 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>
>> Leslie,
>>
>> How do you check xfs? xfs_check?
>
>        Yes.
>
>> Why not use xfs_repair -n?
>
>        I guess the short answer is, "I didn't know it would make a
> difference".  I take it, then, xfs_repair uses a completely different method
> of scanning for errors than xfs_check, one whihcdoes not require so much
> memory?  I find that a bit surprising.
>
>



-- 
       Majed B.


* Re: md software raid
  2009-10-28  0:50               ` Richard Scobie
  2009-10-27 22:00                 ` Ric Wheeler
@ 2009-10-28  1:09                 ` Max Waterman
  2009-10-28  1:10                 ` Majed B.
  2 siblings, 0 replies; 31+ messages in thread
From: Max Waterman @ 2009-10-28  1:09 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Majed B., Leslie Rhorer, linux-raid

Richard Scobie wrote:
> Majed B. wrote:
>> Indeed xfs_repair doesn't require the abusive amount of memory
>> xfs_check requires.
>>
>> I've been a happy XFS user for a few years now, but the fact the
>> xfsprogs aren't being maintained properly and xfs_check is still a
>> failure, I'm considering other alternatives.
> 
> This should change soon, see the September entry:
> 
> http://xfs.org/index.php/XFS_Status_Updates
> 
> "On the userspace side a large patch series to reduce the memory usage 
> in xfs_repair to acceptable levels was posted, but not yet merged."
> 

I also notice SGI still (since their takeover) seem to feature it 
fairly prominently on their web site, indicating to me that it probably 
won't fall into disuse any time soon... though web sites can be misleading.

Max.


* Re: md software raid
  2009-10-28  0:50               ` Richard Scobie
  2009-10-27 22:00                 ` Ric Wheeler
  2009-10-28  1:09                 ` Max Waterman
@ 2009-10-28  1:10                 ` Majed B.
  2009-10-28  2:03                   ` Richard Scobie
  2009-10-28  2:11                   ` Leslie Rhorer
  2 siblings, 2 replies; 31+ messages in thread
From: Majed B. @ 2009-10-28  1:10 UTC (permalink / raw)
  To: Richard Scobie; +Cc: linux-raid

Thank you Richard for the update!

Though with the recent performance drops of XFS on 2.6.3x kernels, and
the fact that the XFS patches are fairly new (and probably buggy), I'd
rather stay away from XFS for a while and look into other possible
options. If any...

On Wed, Oct 28, 2009 at 3:50 AM, Richard Scobie <richard@sauce.co.nz> wrote:
> Majed B. wrote:
>>
>> Indeed xfs_repair doesn't require the abusive amount of memory
>> xfs_check requires.
>>
>> I've been a happy XFS user for a few years now, but the fact the
>> xfsprogs aren't being maintained properly and xfs_check is still a
>> failure, I'm considering other alternatives.
>
> This should change soon, see the September entry:
>
> http://xfs.org/index.php/XFS_Status_Updates
>
> "On the userspace side a large patch series to reduce the memory usage in
> xfs_repair to acceptable levels was posted, but not yet merged."
>
> Regards,
>
> Richard
>



-- 
       Majed B.


* Re: md software raid
  2009-10-28  1:10                 ` Majed B.
@ 2009-10-28  2:03                   ` Richard Scobie
  2009-10-28  2:11                   ` Leslie Rhorer
  1 sibling, 0 replies; 31+ messages in thread
From: Richard Scobie @ 2009-10-28  2:03 UTC (permalink / raw)
  To: Majed B.; +Cc: linux-raid

Majed B. wrote:
> Thank you Richard for the update!
> 
> Though with the recent performance drops of XFS on 2.6.3x kernels, and
> the fact that the XFS patches are fairly new (and probably buggy), I'd
> rather stay away from XFS for a while and look into other possible
> options. If any...

No, as I mentioned earlier, the original poster of the "reduced write 
performance on late kernels" thread later posted that the slowdown 
occurred on ext filesystems as well.  Indeed, if you search the archive 
back a week or so, you will find he has bisected the patches that 
caused this regression.

Regards,

Richard


* RE: md software raid
  2009-10-28  1:10                 ` Majed B.
  2009-10-28  2:03                   ` Richard Scobie
@ 2009-10-28  2:11                   ` Leslie Rhorer
  2009-10-28  2:26                     ` Majed B.
  1 sibling, 1 reply; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  2:11 UTC (permalink / raw)
  To: linux-raid

> Thank you Richard for the update!
> 
> Though with the recent performance drops of XFS on 2.6.3x kernels, and
> the fact that the XFS patches are fairly new (and probably buggy), I'd
> rather stay away from XFS for a while and look into other possible
> options. If any...

	I hear what you are saying, but reformatting an array to a new fs is
definitely a daunting prospect.  I have a minor coronary every time I think
of deliberately reducing the number of copies of my data from 2 to 1 while
the array is re-formatted and the data copied back to the main array.  This
process takes long enough that the odds of encountering two drive failures
on the RAID5 backup system during the copy are not infinitesimal...

	<sigh>

	I guess one of these days I am going to have to bite the bullet and
build a *SECOND* backup system.  I don't suppose I will ever breathe
perfectly easy until I can have two completely independent systems fail and
still not lose any data.  That, or maybe Blu-ray discs will become cheap
enough to create multi-terabyte backups on them.  Right now, the best price
I see is $112 a terabyte, and I can do substantially better than that using
hard drives.  Even if they do get cheaper than hard drives, the thought of
swapping out 400+ Blu-ray discs over a period of several days doesn't really
thrill me.  Maybe a cold hard drive backup is the answer?  It eliminates
most of the pitfalls of a Blu-ray backup, can easily be overwritten, only
requires a relative handful of drives, and costs less than $75 a terabyte.

	Hmmm...



* Re: md software raid
  2009-10-28  2:11                   ` Leslie Rhorer
@ 2009-10-28  2:26                     ` Majed B.
  2009-10-28  2:54                       ` Leslie Rhorer
       [not found]                       ` <4D87015385157D4285D56CA6101072FF3A6675B8@exchange07.valvesoftware.com>
  0 siblings, 2 replies; 31+ messages in thread
From: Majed B. @ 2009-10-28  2:26 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

Leslie,

Backup is never cheap. Unless you print the data in 0 & 1 on paper in
font size 8 or 6....or maybe a reduced form of it... heh.

On Wed, Oct 28, 2009 at 5:11 AM, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
>
>        I hear what you are saying, but reformatting an array to a new fs is
> definitely a daunting prospect.  I have a minor coronary every time I think
> of deliberately reducing the number of copies of my data from 2 to 1 while
> the array is re-formatted and the data copied back to the main array.  This
> process takes long enough that the odds of encountering two drive failures
> on the RAID5 backup system during the copy are not infinitesimal...
>
>        <sigh>
>
>        I guess one of these days I am going to have to bite the bullet and
> build a *SECOND* backup system.  I don't suppose I will ever breathe
> perfectly easy until I can have two completely independent systems fail and
> still not lose any data.  That, or maybe Blu-ray disks will become cheap
> enough to create multi-terabyte backups on them.  Right now, the best price
> I see is $112 a terabyte, and I can do substantially better than that using
> hard drives.  Even if they do get cheaper than hard drives, the thought of
> swapping out 400+ Blu-Ray discs over a period of several days doesn't really
> thrill me.  Maybe a cold hard drive backup is the answer?  It eliminates
> most of the pitfalls of a Blu-Ray backup, can easily be overwritten, only
> requires a relative handful of discs, and costs less than $75 a terabyte.
>
>        Hmmm...
>



--
      Majed B.


* Re: md software raid
       [not found]                       ` <4D87015385157D4285D56CA6101072FF3A6675B8@exchange07.valvesoftware.com>
@ 2009-10-28  2:52                         ` Majed B.
  2009-10-28  3:26                           ` Guy Watkins
  2009-10-28  3:08                         ` Leslie Rhorer
  1 sibling, 1 reply; 31+ messages in thread
From: Majed B. @ 2009-10-28  2:52 UTC (permalink / raw)
  To: Chris Green; +Cc: linux-raid@vger.kernel.org

Thanks for the numbers Chris! I guess I kind of anticipated that,
which is why I suggested using a compressed form, sort of like
zip/rar...

Though if the data is mostly multimedia, then I guess it's pointless.

Punch-cards, anyone?

On Wed, Oct 28, 2009 at 5:45 AM, Chris Green <cgreen@valvesoftware.com> wrote:
> I'm guessing you could get about a megabyte per sheet of paper this way.
> So a 1TB drive would take up 1000000 sheets of paper. Looking at staples.com,
> it looks like that would cost you in the neighborhood of $8000. The ink cartridges
> would probably cost more, not to mention the cost of a place to store it :-)
-- 
       Majed B.


* RE: md software raid
  2009-10-28  2:26                     ` Majed B.
@ 2009-10-28  2:54                       ` Leslie Rhorer
       [not found]                       ` <4D87015385157D4285D56CA6101072FF3A6675B8@exchange07.valvesoftware.com>
  1 sibling, 0 replies; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  2:54 UTC (permalink / raw)
  To: linux-raid

> Leslie,
> 
> Backup is never cheap.

	Yeah, I know, and the most expensive thing in the world is a
non-existent backup.  I don't mind so much paying for one backup, but two...
well, let's just say I'm part Scot and part German.

> Unless you print the data in 0 & 1 on paper in
> font size 8 or 6....or maybe a reduced form of it... heh.

	Funny you should mention it.  A certain military customer of ours
recently ordered a pair of Gig-E circuits (no surprise there), and - believe
it or not - four *TELEGRAPH* circuits!

	I kid you not.



* RE: md software raid
       [not found]                       ` <4D87015385157D4285D56CA6101072FF3A6675B8@exchange07.valvesoftware.com>
  2009-10-28  2:52                         ` Majed B.
@ 2009-10-28  3:08                         ` Leslie Rhorer
  2009-10-28  3:11                           ` Chris Green
  1 sibling, 1 reply; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  3:08 UTC (permalink / raw)
  To: linux-raid

> I'm guessing you could get about a megabyte per sheet of paper this way.

	You've got to be kidding!  At 72 points/in, a font size of 8 allows
for one byte and one space per inch, or about 10 bytes per line.  One can
fairly easily display 1KB per page, but nowhere near 1MB.

> So a 1TB drive would take up 1000000 sheets of paper. Looking at
> staples.com,

	Make that more like a billion pages.  Remember, the entire Library
of Congress can fit on a 1T drive.
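(At roughly 1KB per page, that is 10^12 bytes / 10^3 bytes per page, i.e.
about 10^9 pages.)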

> it looks like that would cost you in the neighborhood of $8000. The ink
> cartridges

	It's more like $8 million.



* RE: md software raid
  2009-10-28  3:08                         ` Leslie Rhorer
@ 2009-10-28  3:11                           ` Chris Green
  2009-10-28  3:29                             ` Leslie Rhorer
  2009-10-28  3:35                             ` Guy Watkins
  0 siblings, 2 replies; 31+ messages in thread
From: Chris Green @ 2009-10-28  3:11 UTC (permalink / raw)
  To: 'Leslie Rhorer', linux-raid@vger.kernel.org

I was thinking 300dpi on letter paper gives you about 1mbyte, and you'd bloat it by 2x for redundancy, using both sides of the paper.
But I guess 300 dpi is actually 300 dots per square inch, not 300 per linear inch, which means it's only 10s of k per page.


-----Original Message-----
From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Leslie Rhorer
Sent: Tuesday, October 27, 2009 8:08 PM
To: linux-raid@vger.kernel.org
Subject: RE: md software raid

> I'm guessing you could get about a megabyte per sheet of paper this way.

	You've got to be kidding!  At 72 points/in, a font size of 8 allows
for one byte and one space per inch, or about 10 bytes per line.  One can
fairly easily display 1KB per page, but nowhere near 1MB.

> So a 1TB drive would take up 1000000 sheets of paper. Looking at
> staples.com,

	Make that more like a billion pages.  Remember, the entire Library
of Congress can fit on a 1T drive.

> it looks like that would cost you in the neighborhood of $8000. The ink
> cartridges

	It's more like $8 million.



* RE: md software raid
  2009-10-28  2:52                         ` Majed B.
@ 2009-10-28  3:26                           ` Guy Watkins
  0 siblings, 0 replies; 31+ messages in thread
From: Guy Watkins @ 2009-10-28  3:26 UTC (permalink / raw)
  To: 'Majed B.', 'Chris Green'; +Cc: linux-raid

} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Majed B.
} Sent: Tuesday, October 27, 2009 10:52 PM
} To: Chris Green
} Cc: linux-raid@vger.kernel.org
} Subject: Re: md software raid
} 
} Thanks for the numbers Chris! I guess I kind of anticipated that,
} which is why I suggested using a compressed form, sort of like
} zip/rar...
} 
} Though if the data is mostly multimedia, then I guess it's pointless.
} 
} Punch-cards, anyone?

1 TByte on punch cards would stack 1,973 miles high, assuming 100 cards
per inch and 80 bytes per card.  :)
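(Working: 10^12 bytes / 80 bytes per card = 1.25e10 cards; at 100 cards per
inch that is 1.25e8 inches, or roughly 1,973 miles.)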

} 
} On Wed, Oct 28, 2009 at 5:45 AM, Chris Green <cgreen@valvesoftware.com>
} wrote:
} > I'm guessing you could get about a megabyte per sheet of paper this way.
} > So a 1TB drive would take up 1000000 sheets of paper. Looking at
} staples.com,
} > it looks like that would cost you in the neighborhood of $8000. The ink
} cartridges
} > would probably cost more, not to mention the cost of a place to store it
} :-)
} --
}        Majed B.



* RE: md software raid
  2009-10-28  3:11                           ` Chris Green
@ 2009-10-28  3:29                             ` Leslie Rhorer
  2009-11-05 13:51                               ` Matt Garman
  2009-10-28  3:35                             ` Guy Watkins
  1 sibling, 1 reply; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  3:29 UTC (permalink / raw)
  To: linux-raid

> I was thinking 300dpi on letter paper gives you about 1mbyte, and you'd
> bloat it by 2x for redundancy, using both sides of the paper.
> But I guess 300 dpi is actually 300 dots per square inch

	No, it's dots per linear inch, or 90,000 dots per square inch,
assuming a square pixel.

> not 300 per
> linear inch, which means its only 10s of k per page.

	'Not even 10s of K in a human-readable font.  I doubt anyone with
normal eyesight could even read the print at 10K per page.



* RE: md software raid
  2009-10-28  3:11                           ` Chris Green
  2009-10-28  3:29                             ` Leslie Rhorer
@ 2009-10-28  3:35                             ` Guy Watkins
  2009-10-28 11:27                               ` Max Waterman
  1 sibling, 1 reply; 31+ messages in thread
From: Guy Watkins @ 2009-10-28  3:35 UTC (permalink / raw)
  To: 'Chris Green', 'Leslie Rhorer', linux-raid

If you just used the 300 dots per inch as 1 bit per dot, you could store
15,120,000 (300*300*8*10.5*2) bits per sheet (assuming a 1/4 inch border on
8.5x11 paper, printed 2-sided).  That is about 1.8MB per sheet.  You would
need 555,556 sheets.  That would stack 463 feet high (assuming 100 sheets
per inch).
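(Working: 15,120,000 bits / 8 = 1,890,000 bytes, roughly 1.8MB per sheet;
10^12 / 1.8e6 gives ~555,556 sheets; at 100 sheets per inch that is ~5,556
inches, or about 463 feet.)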

} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Chris Green
} Sent: Tuesday, October 27, 2009 11:12 PM
} To: 'Leslie Rhorer'; linux-raid@vger.kernel.org
} Subject: RE: md software raid
} 
} I was thinking 300dpi on letter paper gives you about 1mbyte, and you'd
} bloat it by 2x for redundancy, using both sides of the paper.
} But I guess 300 dpi is actually 300 dots per square inch, not 300 per
} linear inch, which means its only 10s of k per page.
} 
} 
} -----Original Message-----
} From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
} owner@vger.kernel.org] On Behalf Of Leslie Rhorer
} Sent: Tuesday, October 27, 2009 8:08 PM
} To: linux-raid@vger.kernel.org
} Subject: RE: md software raid
} 
} > I'm guessing you could get about a megabyte per sheet of paper this way.
} 
} 	You've got to be kidding!  At 72 points/in, a font size of 8 allows
} for one byte and one space per inch, or about 10 bytes per line.  One can
} fairly easily display 1KB per page, but nowhere near 1MB.
} 
} > So a 1TB drive would take up 1000000 sheets of paper. Looking at
} > staples.com,
} 
} 	Make that more like a billion pages.  Remember, the entire Library
} of Congress can fit on a 1T drive.
} 
} > it looks like that would cost you in the neighborhood of $8000. The ink
} > cartridges
} 
} 	It's more like $8 million.
} 



* Re: md software raid
  2009-10-28  0:52           ` Leslie Rhorer
  2009-10-28  0:58             ` Majed B.
@ 2009-10-28  3:50             ` Christoph Hellwig
  2009-10-28  6:37               ` Leslie Rhorer
  1 sibling, 1 reply; 31+ messages in thread
From: Christoph Hellwig @ 2009-10-28  3:50 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid

On Tue, Oct 27, 2009 at 07:52:56PM -0500, Leslie Rhorer wrote:
> 
> > Leslie,
> > 
> > How do you check xfs? xfs_check?
> 
> 	Yes.
> 
> > Why not use xfs_repair -n?
> 
> 	I guess the short answer is, "I didn't know it would make a
> difference".  I take it, then, xfs_repair uses a completely different method
> of scanning for errors than xfs_check, one whihcdoes not require so much
> memory?  I find that a bit surprising.

xfs_repair is a separate program that is actually maintained.  xfs_check
is deprecated and we'll eventually remove it after porting one
remaining checking pass over to xfs_repair (currently xfs_repair can't
check the freespace btrees, it can only fully rebuild them when in repair
mode).



* RE: md software raid
  2009-10-28  3:50             ` Christoph Hellwig
@ 2009-10-28  6:37               ` Leslie Rhorer
  0 siblings, 0 replies; 31+ messages in thread
From: Leslie Rhorer @ 2009-10-28  6:37 UTC (permalink / raw)
  To: linux-raid

> > > Why not use xfs_repair -n?
> >
> > 	I guess the short answer is, "I didn't know it would make a
> > difference".  I take it, then, xfs_repair uses a completely different
> method
> > of scanning for errors than xfs_check, one whihcdoes not require so much
> > memory?  I find that a bit surprising.
> 
> xfs_repair is a separate program that is actually mainainted.  xfs_check
> is deprecated and we'll eventually remove it after porting one
> remaining checking pass over to xfs_repair (currently xfs_repair can't
> check the freespace btree but only fully rebuild them when in repair
> mode)

	OK, great!  Query: my swap space of 10G was nowhere near enough to
cover the usage by xfs_check.  Will it be sufficient for xfs_repair with
the -n option?



* Re: md software raid
  2009-10-28  3:35                             ` Guy Watkins
@ 2009-10-28 11:27                               ` Max Waterman
  0 siblings, 0 replies; 31+ messages in thread
From: Max Waterman @ 2009-10-28 11:27 UTC (permalink / raw)
  To: Guy Watkins; +Cc: 'Chris Green', 'Leslie Rhorer', linux-raid

Guy Watkins wrote:
> If you just used the 300 dots per inch as 1 bit per dot, you could store
> 15120000 (300*300*8*10.5) bits per side (assuming 1/4 inch border on 8.5x11
> paper 2 sided).  That is 1.8MB per page.  You would need 555556 pages.  That
> would stack 463 feet high (assuming 100 pages per inch).
>   

Yeah, but who, in their right mind, would stack them all in one pile?

Max

> } -----Original Message-----
> } From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> } owner@vger.kernel.org] On Behalf Of Chris Green
> } Sent: Tuesday, October 27, 2009 11:12 PM
> } To: 'Leslie Rhorer'; linux-raid@vger.kernel.org
> } Subject: RE: md software raid
> } 
> } I was thinking 300dpi on letter paper gives you about 1mbyte, and you'd
> } bloat it by 2x for redundancy, using both sides of the paper.
> } But I guess 300 dpi is actually 300 dots per square inch, not 300 per
> } linear inch, which means its only 10s of k per page.
> } 
> } 
> } -----Original Message-----
> } From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> } owner@vger.kernel.org] On Behalf Of Leslie Rhorer
> } Sent: Tuesday, October 27, 2009 8:08 PM
> } To: linux-raid@vger.kernel.org
> } Subject: RE: md software raid
> } 
> } > I'm guessing you could get about a megabyte per sheet of paper this way.
> } 
> } 	You've got to be kidding!  At 72 points/in, a font size of 8 allows
> } for one byte and one space per inch, or about 10 bytes per line.  One can
> } fairly easily display 1KB per page, but nowhere near 1MB.
> } 
> } > So a 1TB drive would take up 1000000 sheets of paper. Looking at
> } > staples.com,
> } 
> } 	Make that more like a billion pages.  Remember, the entire Library
> } of Congress can fit on a 1T drive.
> } 
> } > it looks like that would cost you in the neighborhood of $8000. The ink
> } > cartridges
> } 
> } 	It's more like $8 million.
> } 
>
>   



* Re: md software raid
  2009-10-27 22:00                 ` Ric Wheeler
@ 2009-10-28 15:08                   ` Eric Sandeen
  2009-10-28 16:06                     ` Bernd Schubert
  0 siblings, 1 reply; 31+ messages in thread
From: Eric Sandeen @ 2009-10-28 15:08 UTC (permalink / raw)
  To: Ric Wheeler
  Cc: Richard Scobie, Majed B., Leslie Rhorer, linux-raid,
	Christoph Hellwig, Eric Sandeen, Dave Chinner

Ric Wheeler wrote:
> On 10/27/2009 08:50 PM, Richard Scobie wrote:
>> Majed B. wrote:
>>> Indeed xfs_repair doesn't require the abusive amount of memory
>>> xfs_check requires.
>>>
>>> I've been a happy XFS user for a few years now, but the fact the
>>> xfsprogs aren't being maintained properly and xfs_check is still a
>>> failure, I'm considering other alternatives.
>>
>> This should change soon, see the September entry:
>>
>> http://xfs.org/index.php/XFS_Status_Updates
>>
>> "On the userspace side a large patch series to reduce the memory usage 
>> in xfs_repair to acceptable levels was posted, but not yet merged."
>>
>> Regards,
>>
>> Richard
> 
> There are several people still actively working on both XFS & its tools 
> and I am sure that they are interested in hearing about issues :-)
> 
> ric
> 


FWIW, this is merged now, but not yet in a usable release.

Still, existing xfs_repair has much better memory footprint than 
xfs_check, and it is the tool you want to use whether you are just 
checking (with -n) or repairing.

xfs_check is more or less deprecated; it is known to have large memory 
requirements, and xfs_repair is the tool to use.
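
I.e. something along these lines (the device name is just an example, and
the filesystem must be unmounted first):

    # umount /dev/md0
    # xfs_repair -n /dev/md0    # no-modify mode: report problems, change nothing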

-Eric


* Re: md software raid
  2009-10-28 15:08                   ` Eric Sandeen
@ 2009-10-28 16:06                     ` Bernd Schubert
  2009-10-28 23:39                       ` Dave Chinner
  0 siblings, 1 reply; 31+ messages in thread
From: Bernd Schubert @ 2009-10-28 16:06 UTC (permalink / raw)
  Cc: Ric Wheeler, Richard Scobie, Majed B., Leslie Rhorer, linux-raid,
	Christoph Hellwig, Eric Sandeen, Dave Chinner

On Wednesday 28 October 2009, Eric Sandeen wrote:

> xfs_check is more or less deprecated; it is known to have large memory
> requirements, and xfs_repair is the tool to use.

What about adding a warning message to it then? I guess most people don't 
check the list archives before using it.


Thanks,
Bernd


* Re: md software raid
  2009-10-28 16:06                     ` Bernd Schubert
@ 2009-10-28 23:39                       ` Dave Chinner
  0 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2009-10-28 23:39 UTC (permalink / raw)
  To: Bernd Schubert
  Cc: Eric Sandeen, Ric Wheeler, Richard Scobie, Majed B.,
	Leslie Rhorer, linux-raid, Christoph Hellwig, Eric Sandeen

On Wed, Oct 28, 2009 at 05:06:46PM +0100, Bernd Schubert wrote:
> On Wednesday 28 October 2009, Eric Sandeen wrote:
> 
> > xfs_check is more or less deprecated; it is known to have large memory
> > requirements, and xfs_repair is the tool to use.
> 
> What about adding a warning message to it then? I guess most people don't 
> check list archives first before using it. 

IIUC, the plan is to add the one set of checks to xfs_repair that it
currently doesn't do (it doesn't check free space trees because it
simply rebuilds them from scratch during the repair process), then
xfs_check will be changed to be a wrapper around xfs_repair.  I.e. the
xfs_check command is most likely not going away, just changing
implementation....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: md software raid
  2009-10-28  3:29                             ` Leslie Rhorer
@ 2009-11-05 13:51                               ` Matt Garman
  0 siblings, 0 replies; 31+ messages in thread
From: Matt Garman @ 2009-11-05 13:51 UTC (permalink / raw)
  To: Leslie Rhorer; +Cc: linux-raid


What about microfilm / microfiche?

On Tue, Oct 27, 2009 at 10:29:04PM -0500, Leslie Rhorer wrote:
> > I was thinking 300dpi on letter paper gives you about 1mbyte, and you'd
> > bloat it by 2x for redundancy, using both sides of the paper.
> > But I guess 300 dpi is actually 300 dots per square inch
> 
> 	No, it's dots per linear inch, or 90,000 dots per square inch,
> assuming a square pixel.
> 
> > not 300 per
> > linear inch, which means its only 10s of k per page.
> 
> 	'Not even 10s of K in a human-readable font.  I doubt anyone with
> normal eyesight could even read the print at 10K per page.
> 

