public inbox for linux-xfs@vger.kernel.org
From: Steve Costaras <stevecs@chaven.com>
To: xfs@oss.sgi.com
Subject: Re: 128TB filesystem limit?
Date: Fri, 26 Mar 2010 02:26:28 -0500	[thread overview]
Message-ID: <4BAC61A4.6030607@chaven.com> (raw)
In-Reply-To: <alpine.DEB.2.00.1003252152350.16138@asgard.lang.hm>



On 03/25/2010 23:56, david@lang.hm wrote:
> On Thu, 25 Mar 2010, Eric Sandeen wrote:
>
>> david@lang.hm wrote:
>>> On Fri, 26 Mar 2010, Dave Chinner wrote:
>>
>> ...
>>
>>>> Is there any reason for putting partitions on these block devices?
>>>> You could just use the block devices without partitions, and that
>>>> will avoid alignment potential problems....
>>>
>>> I would like the raid to auto-assemble and I can't do that without
>>> partitions, can I?
>>
>> I think you can.... it's not like MD is putting anything in the 
>> partition
>> table; you just give it block devices, I doubt it cares if it's a whole
>> disk or some partition.
>>
>> Worth a check anyway ;)
>
> I know that md will work on raw devices, but the auto-assembly stuff 
> looks for the right partition type, I would have to maintain a conf 
> file across potential system rebuilds if I used the raw partitions.
>
>> ...
>>
>>
>>> the next fun thing is figuring out what sort of stride, etc 
>>> parameters I
>>> should have used for this filesystem.
>>
>> mkfs.xfs should suss that out for you automatically based on talking 
>> to md;
>> of course you'd want to configure md to line up well with the hardware
>> alignment.
>
> in this case md thinks it's working with 10 12.8TB drives, I really 
> doubt that it's going to do the right thing.
>
> I'm not exactly sure what the right thing is in this case. the 
> hardware raid is using 64K chunks across 16 drives (so 14 * 64K worth
> of data per stripe), but there are 10 of these stripes before you get 
> back to hitting the same drive again.
>
> David Lang
>

It does here, at least. I never use partition tables on any of the arrays
here; I just use LVM against what it sees as the 'raw' disk. I haven't
tried it with a 128TB array, but with smaller ones that's the layout I've
used in the past (hw raid, md raid-0, filesystem). For newer systems I
just use HW raid, then LVM, then the filesystem (LVM does the
striping/raid-0 function). When you create the physical volume with LVM,
just make sure you align it: older versions use --metadatasize to 'pad'
the start offset, newer versions have the --dataalignment option.
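
For example, something along these lines (the device name is just a
placeholder, and the exact options depend on your LVM2 version):

  # older LVM: pad the metadata area so the first extent starts on a
  # power-of-two boundary (LVM rounds 250k up to 256k)
  pvcreate --metadatasize 250k /dev/md0

  # newer LVM2: state the alignment directly, e.g. one full hardware
  # stripe here (14 data disks x 64k chunk = 896k)
  pvcreate --dataalignment 896k /dev/md0

  # verify where the first physical extent actually starts
  pvs -o +pe_start /dev/md0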

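On the stripe geometry David asks about above, a rough starting point for
mkfs.xfs (assuming the hardware RAID6 layout of 64k chunks with 14 data
disks per array; the volume path is made up) might be:

  # su = hardware chunk size, sw = number of data disks in one stripe
  mkfs.xfs -d su=64k,sw=14 /dev/vg0/bigvol

mkfs.xfs prints the sunit/swidth it settles on, so you can sanity-check
it against the hardware. Whether the 10-way outer stripe should be folded
into sw as well depends on the md chunk size, so treat those numbers as a
sketch rather than gospel.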

Steve

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 13+ messages
2010-03-25 23:15 128TB filesystem limit? david
2010-03-25 23:54 ` Dave Chinner
2010-03-26  0:03   ` david
2010-03-26  0:35     ` Dave Chinner
2010-03-26  2:02       ` david
2010-03-26  4:35         ` Eric Sandeen
2010-03-26  4:56           ` david
2010-03-26  6:09             ` Stan Hoeppner
2010-03-26  7:26             ` Steve Costaras [this message]
2010-03-27  9:06 ` Emmanuel Florac
2010-03-27 14:28   ` Steve Costaras
2010-03-27 18:45   ` david
2010-03-28 21:17 ` Peter Grandi
