* > 2TB ?
@ 2005-02-10 20:47 No email
  2005-02-10 21:10 ` Peter T. Breuer
From: No email @ 2005-02-10 20:47 UTC
  To: linux-raid


Forgive me, as this is probably a silly question and one that has been
answered many times. I have tried to search for the answers but ended
up more confused than when I started, so I thought I would ask the
community to put me out of my misery.

Is there a version of MD that can create RAID sets larger than 2TB?

Obviously, with 400GB drives being commonplace, a typical 12-bay solution
is capable of significant capacity, but if we need to split this down
into multiple RAID sets, then we start to lose out.

So, if anyone can advise me as to MD / Linux size limitations I would be
grateful :)







* Re: > 2TB ?
  2005-02-10 20:47 > 2TB ? No email
@ 2005-02-10 21:10 ` Peter T. Breuer
  2005-02-11  9:32   ` Molle Bestefich
From: Peter T. Breuer @ 2005-02-10 21:10 UTC
  To: linux-raid

No email <null@hotmail.com> wrote:
> 
> Forgive me, as this is probably a silly question and one that has been
> answered many times. I have tried to search for the answers but ended
> up more confused than when I started, so I thought I would ask the
> community to put me out of my misery.
> 
> Is there a version of MD that can create RAID sets larger than 2TB?

Hmm ...  with 1KB blocks, and at 32 bits, 2^32 * 1KB = 4TB would be the
notional limit of any block device counted in blocks and using a 32-bit
counter.

But in the 2.6 kernel the counter is 64-bit.
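
(Back-of-the-envelope only: this is just the counter arithmetic, not
anything read out of the md code, and bc will do the sums.)

  # notional size limits of a block device, in bytes
  echo '2^32 * 1024' | bc   # unsigned 32-bit count of 1KB blocks, ~4TB
  echo '2^31 * 1024' | bc   # signed 32-bit count of 1KB blocks,   ~2TB
  echo '2^31 * 512'  | bc   # signed 32-bit count of 512B sectors, ~1TB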

> Obviously, with 400GB drives being commonplace, a typical 12-bay solution
> is capable of significant capacity,

4.8TB under raid0 or linear?

> but if we need to split this down
> into multiple RAID sets, then we start to lose out.

I don't see that you have to on general principle even on a 2.4 kernel,
but maybe there are extra limits in the case of md devices over block
devices that I do not know about.

> So, if anyone can advise me as to MD / Linux size limitations I would be
> grateful :)

Well, I don't have anything to add to the observation above and I'm too
lazy to look! But where do you get the "2TB" number from? I can believe
that as a number for 1KB block devices in general in early 2.4 kernel
times, before signedness had been dealt with (a signed 32-bit count of
1KB blocks tops out at 2^31 * 1KB = 2TB).

Peter



* Re: > 2TB ?
  2005-02-10 21:10 ` Peter T. Breuer
@ 2005-02-11  9:32   ` Molle Bestefich
  2005-02-11 13:41     ` Carlos Knowlton
From: Molle Bestefich @ 2005-02-11  9:32 UTC
  To: linux-raid

No email <null@hotmail.com> wrote:
>
> Forgive me, as this is probably a silly question and one that has been
> answered many times. I have tried to search for the answers but ended
> up more confused than when I started, so I thought I would ask the
> community to put me out of my misery.
>
> Is there a version of MD that can create RAID sets larger than 2TB?

I have a couple of terabyte software RAID 1+0 arrays under Linux.
No size problems with MD as of yet.
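
(One common way to build 1+0 with md is nested arrays: mirror pairs
striped together.  The sketch below is mdadm syntax from memory with
invented device names, so adjust before trusting it.)

  # build mirror pairs first, then stripe across the mirrors (1+0)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1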

But the filesystem is a different affair, and I think this is where
you should watch out.
Linux filesystems seem to stink real bad when they span multiple
terabytes, at least that's my personal experience.  I've tried both
ext3 and reiserfs.  Even simple operations such as deleting files
suddenly take on the order of 10-20 minutes.

I haven't got a ready explanation for why ext3 and reiser can't handle
TB sizes, but I'd definitely advise you against a multi-TB setup using
Linux (at least until you find someone who has a working setup...)


* Re: > 2TB ?
  2005-02-11  9:32   ` Molle Bestefich
@ 2005-02-11 13:41     ` Carlos Knowlton
  2005-02-11 22:10       ` XFS or JFS? (Was: > 2TB ?) Molle Bestefich
From: Carlos Knowlton @ 2005-02-11 13:41 UTC
  To: linux-raid

Molle Bestefich wrote:

> No email <null@hotmail.com> wrote:
>> Forgive me, as this is probably a silly question and one that has been
>> answered many times. I have tried to search for the answers but ended
>> up more confused than when I started, so I thought I would ask the
>> community to put me out of my misery.
>>
>> Is there a version of MD that can create RAID sets larger than 2TB?
>
> I have a couple of terabyte software RAID 1+0 arrays under Linux.
> No size problems with MD as of yet.
>
> But the filesystem is a different affair, and I think this is where
> you should watch out.
> Linux filesystems seem to stink real bad when they span multiple
> terabytes, at least that's my personal experience.  I've tried both
> ext3 and reiserfs.  Even simple operations such as deleting files
> suddenly take on the order of 10-20 minutes.
I'm running some 3TB software arrays (12 * 250GB RAID5) with no 
trouble.  I've opted for XFS over ext3 or reiserfs, and I see no trouble 
in accessing or deleting files.  However, my Windows clients can't see 
the volume unless I tell them that it is less than 2TB ("max disk size" 
param in smb.conf).
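
(Roughly, the recipe is: one md RAID5 across the twelve disks, XFS on
top, and the Samba size cap.  The sketch below uses invented device
names, an invented mount point and an arbitrary cap value, so treat it
as an outline only.)

  # 12-disk RAID5, then XFS on the resulting ~2.75TB device
  mdadm --create /dev/md0 --level=5 --raid-devices=12 /dev/sd[b-m]1
  mkfs.xfs /dev/md0
  mount /dev/md0 /export/big

  # smb.conf, [global] section: "max disk size" takes a value in MB,
  # so something under 2TB keeps the Windows clients happy
  #   max disk size = 2000000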

> I haven't got a ready explanation for why ext3 and reiser can't handle
> TB sizes, but I'd definitely advise you against a multi-TB setup using
> Linux (at least until you find someone who has a working setup...)
I would tend to agree, unless you use a 2.6 kernel and XFS; then it's
not a problem.  Well, as far as software RAID goes anyway.  I wish it
handled trivial media errors more gracefully (i.e., without dropping
disks).  You should always back up your data.

-CK


* XFS or JFS? (Was: > 2TB ?)
  2005-02-11 13:41     ` Carlos Knowlton
@ 2005-02-11 22:10       ` Molle Bestefich
From: Molle Bestefich @ 2005-02-11 22:10 UTC
  To: linux-raid

Carlos Knowlton wrote:
> Molle Bestefich wrote:
>> Linux filesystems seem to stink real bad when they span multiple
>> terabytes, at least that's my personal experience.  I've tried both
>> ext3 and reiserfs.  Even simple operations such as deleting files
>> suddenly take on the order of 10-20 minutes.
>>
> I'm running some 3TB software arrays (12 * 250GB RAID5) with no
> trouble.  I've opted for XFS over ext3 or reiserfs, and I see no trouble
> in accessing or deleting files.

Is there anybody out there with a qualified opinion on what is best
suited for TB arrays, XFS or JFS?

> not a problem.  Well, as far as software RAID goes anyway.  I wish it
> handled trivial media errors more gracefully (i.e., without dropping
> disks).  You should always back up your data.

Second that.  MD is not the brightest thing around.
I particularly dislike the game where you have a failed disk and
accidentally yank a cable on another disk, and MD increases the event
counter on the remaining (unusable) disks in the array.  Plugging
disks in and out while rebooting Linux to see if you can get the
counters to match and MD to assemble the array again does give that
nice adrenaline surge, but I still prefer the more relaxing desktop
games that come with Linux.

