public inbox for linux-xfs@vger.kernel.org
* 128TB filesystem limit?
@ 2010-03-25 23:15 david
  2010-03-25 23:54 ` Dave Chinner
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: david @ 2010-03-25 23:15 UTC (permalink / raw)
  To: xfs

I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6 hardware 
arrays.

fdisk -l shows me 10 drives like

WARNING: GPT (GUID Partition Table) detected on '/dev/sdk'! The util fdisk 
doesn't support GPT. Use GNU Parted.


Disk /dev/sdk: 13999.9 GB, 13999999025152 bytes
255 heads, 63 sectors/track, 1702069 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               1      267350  2147483647+  ee  EFI GPT

and then the md0 device as

Disk /dev/md0: 139999.9 GB, 139999989596160 bytes
2 heads, 4 sectors/track, -1 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table


I then did mkfs.xfs /dev/md0

but a df is showing me 128TB

is this just rounding error combined with the 1000=1k vs 1024=1k marketing 
stuff, or is there some limit I am bumping into here?

David Lang

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: 128TB filesystem limit?
  2010-03-25 23:15 128TB filesystem limit? david
@ 2010-03-25 23:54 ` Dave Chinner
  2010-03-26  0:03   ` david
  2010-03-27  9:06 ` Emmanuel Florac
  2010-03-28 21:17 ` Peter Grandi
  2 siblings, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2010-03-25 23:54 UTC (permalink / raw)
  To: david; +Cc: xfs

On Thu, Mar 25, 2010 at 04:15:42PM -0700, david@lang.hm wrote:
> I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6
> hardware arrays.
> 
> fdisk -l shows me 10 drives like
> 
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdk'! The util
> fdisk doesn't support GPT. Use GNU Parted.
> 
> 
> Disk /dev/sdk: 13999.9 GB, 13999999025152 bytes
> 255 heads, 63 sectors/track, 1702069 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x00000000
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdk1               1      267350  2147483647+  ee  EFI GPT
> 
> and then the md0 device as
> 
> Disk /dev/md0: 139999.9 GB, 139999989596160 bytes
> 2 heads, 4 sectors/track, -1 cylinders
> Units = cylinders of 8 * 512 = 4096 bytes
> Disk identifier: 0x00000000
> 
> Disk /dev/md0 doesn't contain a valid partition table
> 
> 
> I then did mkfs.xfs /dev/md0
> 
> but a df is showing me 128TB

What is in /proc/partitions?

> is this just rounding error combined with the 1000=1k vs 1024=1k
> marketing stuff,

Probably.

> or is there some limit I am bumping into here.

Unlikely to be an XFS limit - I was doing some "what happens if"
testing on multi-PB sized XFS filesystems hosted on sparse files
a couple of days ago....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 128TB filesystem limit?
  2010-03-25 23:54 ` Dave Chinner
@ 2010-03-26  0:03   ` david
  2010-03-26  0:35     ` Dave Chinner
  0 siblings, 1 reply; 13+ messages in thread
From: david @ 2010-03-26  0:03 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, 26 Mar 2010, Dave Chinner wrote:

> On Thu, Mar 25, 2010 at 04:15:42PM -0700, david@lang.hm wrote:
>> I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6
>> hardware arrays.
>>
>> fdisk -l shows me 10 drives like
>>
>> WARNING: GPT (GUID Partition Table) detected on '/dev/sdk'! The util
>> fdisk doesn't support GPT. Use GNU Parted.
>>
>>
>> Disk /dev/sdk: 13999.9 GB, 13999999025152 bytes
>> 255 heads, 63 sectors/track, 1702069 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>> Disk identifier: 0x00000000
>>
>>    Device Boot      Start         End      Blocks   Id  System
>> /dev/sdk1               1      267350  2147483647+  ee  EFI GPT
>>
>> and then the md0 device as
>>
>> Disk /dev/md0: 139999.9 GB, 139999989596160 bytes
>> 2 heads, 4 sectors/track, -1 cylinders
>> Units = cylinders of 8 * 512 = 4096 bytes
>> Disk identifier: 0x00000000
>>
>> Disk /dev/md0 doesn't contain a valid partition table
>>
>>
>> I then did mkfs.xfs /dev/md0
>>
>> but a df is showing me 128TB
>
> What is in /proc/partitions?

# cat /proc/partitions
major minor  #blocks  name

    8        0  292542464 sda
    8        1    2048287 sda1
    8        2    2048287 sda2
    8        3    2048287 sda3
    8        4  286390755 sda4
    8       16 13671874048 sdb
    8       17 13671874014 sdb1
    8       32 13671874048 sdc
    8       33 13671874014 sdc1
    8       48 13671874048 sdd
    8       49 13671874014 sdd1
    8       64 13671874048 sde
    8       65 13671874014 sde1
    8       80 13671874048 sdf
    8       81 13671874014 sdf1
    8       96 13671874048 sdg
    8       97 13671874014 sdg1
    8      112 13671874048 sdh
    8      113 13671874014 sdh1
    8      128 13671874048 sdi
    8      129 13671874014 sdi1
    8      144 13671874048 sdj
    8      145 13671874014 sdj1
    8      160 13671874048 sdk
    8      161 13671874014 sdk1
    9        0 136718739840 md0

>> is this just rounding error combined with the 1000=1k vs 1024=1k
>> marketing stuff,
>
> Probably.
>
>> or is there some limit I am bumping into here.
>
> Unlikely to be an XFS limit - I was doing some "what happens if"
> testing on multi-PB sized XFS filesystems hosted on sparse files
> a couple of days ago....

Ok, 128TB is a suspiciously round (in computer terms) number, especially 
when the math is 10 sets of 14 drives (each 1TB), so I figured I'd double 
check.

David Lang


* Re: 128TB filesystem limit?
  2010-03-26  0:03   ` david
@ 2010-03-26  0:35     ` Dave Chinner
  2010-03-26  2:02       ` david
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Chinner @ 2010-03-26  0:35 UTC (permalink / raw)
  To: david; +Cc: xfs

On Thu, Mar 25, 2010 at 05:03:52PM -0700, david@lang.hm wrote:
> On Fri, 26 Mar 2010, Dave Chinner wrote:
> 
> >On Thu, Mar 25, 2010 at 04:15:42PM -0700, david@lang.hm wrote:
> >>I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6
> >>hardware arrays.
....
> >>I then did mkfs.xfs /dev/md0
> >>
> >>but a df is showing me 128TB
> >
> >What is in /proc/partitions?
> 
> # cat /proc/partitions
> major minor  #blocks  name
> 
>    8        0  292542464 sda
>    8        1    2048287 sda1
>    8        2    2048287 sda2
>    8        3    2048287 sda3
>    8        4  286390755 sda4
>    8       16 13671874048 sdb
>    8       17 13671874014 sdb1
>    8       32 13671874048 sdc
>    8       33 13671874014 sdc1
....
>    8      160 13671874048 sdk
>    8      161 13671874014 sdk1
>    9        0 136718739840 md0

Is there any reason for putting partitions on these block devices?
You could just use the block devices without partitions, and that
will avoid alignment potential problems....

> >>is this just rounding error combined with the 1000=1k vs 1024=1k
> >>marketing stuff,
> >
> >Probably.
> >
> >>or is there some limit I am bumping into here.
> >
> >Unlikely to be an XFS limit - I was doing some "what happens if"
> >testing on multi-PB sized XFS filesystems hosted on sparse files
> >a couple of days ago....
> 
> Ok, 128TB is a suspiciously round (in computer terms) number,
> especially when the math is 10 sets of 14 drives (each 1TB), so I
> figured I'd double check.

136718739840 KiB * 1024 = 139999989596160 bytes
139999989596160 / 10^12 = 140.00 TB    <==== marketing number
139999989596160 / 2^40  = 127.33 TiB   <==== what df shows
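
Quick sanity check in shell arithmetic (the /proc/partitions figure
above is in 1 KiB blocks):

```shell
blocks=136718739840                 # md0 from /proc/partitions, 1 KiB units
bytes=$(( blocks * 1024 ))
echo "$bytes bytes"                             # 139999989596160, matches fdisk
echo "$(( bytes / (1000*1000*1000*1000) )) TB"  # base 10, the vendor number (~140)
echo "$(( blocks / (1024*1024*1024) )) TiB"     # base 2, what df rounds to ~128T
```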

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 128TB filesystem limit?
  2010-03-26  0:35     ` Dave Chinner
@ 2010-03-26  2:02       ` david
  2010-03-26  4:35         ` Eric Sandeen
  0 siblings, 1 reply; 13+ messages in thread
From: david @ 2010-03-26  2:02 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, 26 Mar 2010, Dave Chinner wrote:

> On Thu, Mar 25, 2010 at 05:03:52PM -0700, david@lang.hm wrote:
>> On Fri, 26 Mar 2010, Dave Chinner wrote:
>>
>>> On Thu, Mar 25, 2010 at 04:15:42PM -0700, david@lang.hm wrote:
>>>> I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6
>>>> hardware arrays.
> ....
>>>> I then did mkfs.xfs /dev/md0
>>>>
>>>> but a df is showing me 128TB
>>>
>>> What is in /proc/partitions?
>>
>> # cat /proc/partitions
>> major minor  #blocks  name
>>
>>    8        0  292542464 sda
>>    8        1    2048287 sda1
>>    8        2    2048287 sda2
>>    8        3    2048287 sda3
>>    8        4  286390755 sda4
>>    8       16 13671874048 sdb
>>    8       17 13671874014 sdb1
>>    8       32 13671874048 sdc
>>    8       33 13671874014 sdc1
> ....
>>    8      160 13671874048 sdk
>>    8      161 13671874014 sdk1
>>    9        0 136718739840 md0
>
> Is there any reason for putting partitions on these block devices?
> You could just use the block devices without partitions, and that
> will avoid alignment potential problems....

I would like the raid to auto-assemble, and I can't do that without 
partitions, can I?

>>>> is this just rounding error combined with the 1000=1k vs 1024=1k
>>>> marketing stuff,
>>>
>>> Probably.
>>>
>>>> or is there some limit I am bumping into here.
>>>
>>> Unlikely to be an XFS limit - I was doing some "what happens if"
>>> testing on multi-PB sized XFS filesystems hosted on sparse files
>>> a couple of days ago....
>>
>> Ok, 128TB is a suspiciously round (in computer terms) number,
>> especially when the math is 10 sets of 14 drives (each 1TB), so I
>> figured I'd double check.
>
> 136718739840 KiB * 1024 = 139999989596160 bytes
> 139999989596160 / 10^12 = 140.00 TB    <==== marketing number
> 139999989596160 / 2^40  = 127.33 TiB   <==== what df shows

Thanks.

the next fun thing is figuring out what sort of stride, etc parameters I 
should have used for this filesystem.

David Lang


* Re: 128TB filesystem limit?
  2010-03-26  2:02       ` david
@ 2010-03-26  4:35         ` Eric Sandeen
  2010-03-26  4:56           ` david
  0 siblings, 1 reply; 13+ messages in thread
From: Eric Sandeen @ 2010-03-26  4:35 UTC (permalink / raw)
  To: david; +Cc: xfs

david@lang.hm wrote:
> On Fri, 26 Mar 2010, Dave Chinner wrote:

...

>> Is there any reason for putting partitions on these block devices?
>> You could just use the block devices without partitions, and that
>> will avoid alignment potential problems....
> 
> I would like to raid to auto-assemble and I can't do that without
> partitions, can I

I think you can.... it's not like MD is putting anything in the partition
table; you just give it block devices, and I doubt it cares whether it's a
whole disk or some partition.

Worth a check anyway ;)
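
Something along these lines should make assembly survive a system
rebuild even on bare devices (untested here; device paths are just
placeholders):

```shell
# record the arrays mdadm currently sees, keyed by superblock UUID
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# on the rebuilt system, assemble whatever that file (or scanning) finds
mdadm --assemble --scan
```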

...


> the next fun thing is figuring out what sort of stride, etc parameters I
> should have used for this filesystem.

mkfs.xfs should suss that out for you automatically based on talking to md;
of course you'd want to configure md to line up well with the hardware
alignment.
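
One way to see what it picks, without writing anything, is mkfs.xfs's
dry-run flag (the su/sw values below are just the numbers from this
thread, not a recommendation):

```shell
# -N prints the parameters mkfs.xfs would use and exits without
# creating a filesystem
mkfs.xfs -N /dev/md0

# if the detected sunit/swidth look wrong, override them by hand
mkfs.xfs -N -d su=64k,sw=140 /dev/md0
```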

-Eric

> David Lang


* Re: 128TB filesystem limit?
  2010-03-26  4:35         ` Eric Sandeen
@ 2010-03-26  4:56           ` david
  2010-03-26  6:09             ` Stan Hoeppner
  2010-03-26  7:26             ` Steve Costaras
  0 siblings, 2 replies; 13+ messages in thread
From: david @ 2010-03-26  4:56 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs

On Thu, 25 Mar 2010, Eric Sandeen wrote:

> david@lang.hm wrote:
>> On Fri, 26 Mar 2010, Dave Chinner wrote:
>
> ...
>
>>> Is there any reason for putting partitions on these block devices?
>>> You could just use the block devices without partitions, and that
>>> will avoid alignment potential problems....
>>
>> I would like to raid to auto-assemble and I can't do that without
>> partitions, can I
>
> I think you can.... it's not like MD is putting anything in the partition
> table; you just give it block devices, I doubt it cares if it's a whole
> disk or some partition.
>
> Worth a check anyway ;)

I know that md will work on raw devices, but the auto-assembly stuff looks 
for the right partition type; I would have to maintain a conf file across 
potential system rebuilds if I used the raw devices.

> ...
>
>
>> the next fun thing is figuring out what sort of stride, etc parameters I
>> should have used for this filesystem.
>
> mkfs.xfs should suss that out for you automatically based on talking to md;
> of course you'd want to configure md to line up well with the hardware
> alignment.

In this case md thinks it's working with ten 12.8TB drives, so I really 
doubt that it's going to do the right thing.

I'm not exactly sure what the right thing is in this case. The hardware 
raid is using 64K chunks across 16 drives (so 14 * 64K worth of data per 
stripe), but there are 10 of these stripes before you get back to hitting 
the same drive again.
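
If the geometry has to be supplied by hand, one plausible starting
point (a sketch from the numbers above, not something I've tested) is
su = the hardware chunk and sw = the total number of data-bearing disks:

```shell
chunk_kb=64                        # hardware RAID6 chunk size
data_per_array=$(( 16 - 2 ))       # 16 drives minus 2 parity = 14
arrays=10                          # md raid0 members
sw=$(( data_per_array * arrays ))  # 140 data disks in a full stripe
# full stripe = 64K * 140 = 8960K of data
echo "mkfs.xfs -d su=${chunk_kb}k,sw=${sw} /dev/md0"
```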

David Lang


* Re: 128TB filesystem limit?
  2010-03-26  4:56           ` david
@ 2010-03-26  6:09             ` Stan Hoeppner
  2010-03-26  7:26             ` Steve Costaras
  1 sibling, 0 replies; 13+ messages in thread
From: Stan Hoeppner @ 2010-03-26  6:09 UTC (permalink / raw)
  To: xfs

david@lang.hm put forth on 3/25/2010 11:56 PM:

>>> the next fun thing is figuring out what sort of stride, etc parameters I
>>> should have used for this filesystem.
>>
>> mkfs.xfs should suss that out for you automatically based on talking
>> to md;
>> of course you'd want to configure md to line up well with the hardware
>> alignment.
> 
> in this case md thinks it's working with 10 12.8TB drives, I really
> doubt that it's going to do the right thing.
> 
> I'm not exactly sure what the right thing is in this case. the hardware
> raid is useing 64K chunks across 16 drives (so 14 * 64K worth of data
> per stripe), but there are 10 of these stripes before you get back to
> hitting the same drive again.

It would be helpful if you told us the primary application(s) that will be
writing to this large multi-level RAID setup.  Primarily large files or
small?  A database?

-- 
Stan


* Re: 128TB filesystem limit?
  2010-03-26  4:56           ` david
  2010-03-26  6:09             ` Stan Hoeppner
@ 2010-03-26  7:26             ` Steve Costaras
  1 sibling, 0 replies; 13+ messages in thread
From: Steve Costaras @ 2010-03-26  7:26 UTC (permalink / raw)
  To: xfs



On 03/25/2010 23:56, david@lang.hm wrote:
> On Thu, 25 Mar 2010, Eric Sandeen wrote:
>
>> david@lang.hm wrote:
>>> On Fri, 26 Mar 2010, Dave Chinner wrote:
>>
>> ...
>>
>>>> Is there any reason for putting partitions on these block devices?
>>>> You could just use the block devices without partitions, and that
>>>> will avoid alignment potential problems....
>>>
>>> I would like to raid to auto-assemble and I can't do that without
>>> partitions, can I
>>
>> I think you can.... it's not like MD is putting anything in the 
>> partition
>> table; you just give it block devices, I doubt it cares if it's a whole
>> disk or some partition.
>>
>> Worth a check anyway ;)
>
> I know that md will work on raw devices, but the auto-assembly stuff 
> looks for the right partition type, I would have to maintain a conf 
> file across potential system rebuilds if I used the raw partitions.
>
>> ...
>>
>>
>>> the next fun thing is figuring out what sort of stride, etc 
>>> parameters I
>>> should have used for this filesystem.
>>
>> mkfs.xfs should suss that out for you automatically based on talking 
>> to md;
>> of course you'd want to configure md to line up well with the hardware
>> alignment.
>
> in this case md thinks it's working with 10 12.8TB drives, I really 
> doubt that it's going to do the right thing.
>
> I'm not exactly sure what the right thing is in this case. the 
> hardware raid is useing 64K chunks across 16 drives (so 14 * 64K worth 
> of data per stripe), but there are 10 of these stripes before you get 
> back to hitting the same drive again.
>
> David Lang
>

It does here at least; I never use partition tables on any of the arrays 
here, I just use LVM against what it sees as the 'raw' disk.  I haven't 
tried it with a 128TB array, but that's what I've used on smaller ones in 
the past (hw raid, md raid-0, filesystem).  For new systems I now just use 
HW raid, then LVM, then the filesystem (LVM does the striping/raid-0 
function).  When you create the physical volume with LVM, just make sure 
you align it: older versions use --metadatasize to 'pad' the start 
offset, newer versions have the --dataalignment option.
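
For instance (untested, and the stripe math just assumes the 64K chunk
x 14 data disks mentioned earlier in the thread):

```shell
stripe_kb=$(( 64 * 14 ))  # full data stripe of one hw RAID6 = 896K
echo "pvcreate --dataalignment ${stripe_kb}k /dev/sdb"  # newer LVM2
echo "pvcreate --metadatasize ${stripe_kb}k /dev/sdb"   # older LVM2
# check where the data area actually starts afterwards:
#   pvs -o +pe_start /dev/sdb
```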


Steve


* Re: 128TB filesystem limit?
  2010-03-25 23:15 128TB filesystem limit? david
  2010-03-25 23:54 ` Dave Chinner
@ 2010-03-27  9:06 ` Emmanuel Florac
  2010-03-27 14:28   ` Steve Costaras
  2010-03-27 18:45   ` david
  2010-03-28 21:17 ` Peter Grandi
  2 siblings, 2 replies; 13+ messages in thread
From: Emmanuel Florac @ 2010-03-27  9:06 UTC (permalink / raw)
  To: david; +Cc: xfs

On Thu, 25 Mar 2010 16:15:42 -0700 (PDT), you wrote:

> is this just rounding error combined with the 1000=1k vs 1024=1k
> marketing stuff, or is there some limit I am bumping into here.

This isn't an xfs limit; I've set up several hundred big xfs filesystems
over more than 5 years (13 to 76 TB) and never saw that. It must be a bug
in df or elsewhere. What distribution is this, and what architecture?

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: 128TB filesystem limit?
  2010-03-27  9:06 ` Emmanuel Florac
@ 2010-03-27 14:28   ` Steve Costaras
  2010-03-27 18:45   ` david
  1 sibling, 0 replies; 13+ messages in thread
From: Steve Costaras @ 2010-03-27 14:28 UTC (permalink / raw)
  To: xfs




From my previous experience, it's pure speculation until someone actually 
HAS a filesystem of size X to make a determination like that.  I've run 
into limits that 'should not have been there' at 1TB, 2TB, 8TB, 16TB, and 
32TB; each time, a filesystem that was 'supposedly' capable of handling 
that size didn't.  The most recent is the 32TiB limit in JFS; granted, it 
looks to be in the jfs tools rather than the filesystem itself, but that 
doesn't matter when you still lose all your data.  ;)

I know that XFS can handle >64TiB, as I have that running (though I made 
sure I had backups before I expanded to that).  I have not seen a 128TiB 
deployment to see if that works; I'm not saying it can't or won't, just 
that I haven't seen it.

However, from this thread it appears that just shy of 128TiB works, and 
what the OP is running into is a units discrepancy: the drives are sold 
in base-10 units while the system displays base 2.  The mismatch gets 
more dramatic the larger the drive/array, and tools rarely label the 
units properly (?iB for base 2, e.g. TiB; ?B for base 10, e.g. TB), so 
it's easily confused.



On 03/27/2010 04:06, Emmanuel Florac wrote:
> Le Thu, 25 Mar 2010 16:15:42 -0700 (PDT) vous écriviez:
>
>    
>> is this just rounding error combined with the 1000=1k vs 1024=1k
>> marketing stuff, or is there some limit I am bumping into here.
>>      
> This isn't an xfs limit, I've set up several hundred big xfs FS for
> more than 5 years (13 to 76 TB) and never saw that. It must be a bug in
> df or elsewhere. What distribution is this? and architecture?
>
>    


* Re: 128TB filesystem limit?
  2010-03-27  9:06 ` Emmanuel Florac
  2010-03-27 14:28   ` Steve Costaras
@ 2010-03-27 18:45   ` david
  1 sibling, 0 replies; 13+ messages in thread
From: david @ 2010-03-27 18:45 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Sat, 27 Mar 2010, Emmanuel Florac wrote:

> On Thu, 25 Mar 2010 16:15:42 -0700 (PDT), you wrote:
>
>> is this just rounding error combined with the 1000=1k vs 1024=1k
>> marketing stuff, or is there some limit I am bumping into here.
>
> This isn't an xfs limit, I've set up several hundred big xfs FS for
> more than 5 years (13 to 76 TB) and never saw that. It must be a bug in
> df or elsewhere. What distribution is this? and architecture?

Debian 5 on amd64.

At this point I'm fairly sure that it's just rounding error combined with 
the difference between the base-10 (10^12) and base-2 (2^40) definitions 
of a terabyte.

David Lang


* Re: 128TB filesystem limit?
  2010-03-25 23:15 128TB filesystem limit? david
  2010-03-25 23:54 ` Dave Chinner
  2010-03-27  9:06 ` Emmanuel Florac
@ 2010-03-28 21:17 ` Peter Grandi
  2 siblings, 0 replies; 13+ messages in thread
From: Peter Grandi @ 2010-03-28 21:17 UTC (permalink / raw)
  To: Linux XFS

> I'm working with a raid 0 (md) array on top of 10 16x1TB raid
> 6 hardware arrays.  fdisk -l shows me 10 drives like [ ... ] I
> then did mkfs.xfs /dev/md0 but a df is showing me 128TB [ ... ]

A 128TB single filesystem on top of a 160-drive RAID60! Even
better, one made of 10x 14+2 RAID6 components.

It shows a dedication to "if it is possible it must make sense"
logic and an admirable faith in luck.

Perhaps a small discrepancy in size units is not the most
interesting aspect of this query.


end of thread, other threads:[~2010-03-28 21:18 UTC | newest]

Thread overview: 13+ messages
2010-03-25 23:15 128TB filesystem limit? david
2010-03-25 23:54 ` Dave Chinner
2010-03-26  0:03   ` david
2010-03-26  0:35     ` Dave Chinner
2010-03-26  2:02       ` david
2010-03-26  4:35         ` Eric Sandeen
2010-03-26  4:56           ` david
2010-03-26  6:09             ` Stan Hoeppner
2010-03-26  7:26             ` Steve Costaras
2010-03-27  9:06 ` Emmanuel Florac
2010-03-27 14:28   ` Steve Costaras
2010-03-27 18:45   ` david
2010-03-28 21:17 ` Peter Grandi
