public inbox for linux-xfs@vger.kernel.org
* Practical file system size question
@ 2013-04-17 13:18 Robert Bennett
  2013-04-17 13:53 ` Joe Landman
  2013-04-17 14:02 ` Rafa Griman
  0 siblings, 2 replies; 4+ messages in thread
From: Robert Bennett @ 2013-04-17 13:18 UTC (permalink / raw)
  To: xfs



We have been running our storage on XFS for over three years now and are
extremely happy.  We are running each file system on LSI hardware RAID with
3 RAID groups of 12+2, with 3 hot spares, and 8 file systems per head node.
These are running on 2TB SAS HDDs.  The individual file system size is 66TB
in this configuration.  The time has come to look into moving to 3TB SAS
HDDs.  With very rudimentary math, this should move us to the neighborhood
of 99TB per file system.  Our OS is Linux 2.6.32-279.11.1.el6.x86_64.

The question is - does anyone have experience with this type of
configuration and in particular with 3TB HDDs and a file system size of
99TB?  The rebuild time with 2TB drives is ~ 24 hours.  Should I expect the
rebuild time for the 3TB drives to be ~ 36 hours?
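The rudimentary math, sketched out (a rough estimate only: it assumes drive
sizes are decimal TB, reported file system sizes are binary TiB, and, for the
rebuild question, that rebuild time scales linearly with per-drive capacity;
all figures come from the paragraphs above):

```python
# Back-of-the-envelope capacity and rebuild-time estimates for the
# configuration described above: 3 RAID groups of 12+2 per file system.
# Assumptions: drives are decimal TB (10^12 bytes); reported sizes are
# binary TiB; rebuild time scales linearly with per-drive capacity.

TB = 10**12   # decimal terabyte (how drives are sold)
TiB = 2**40   # binary tebibyte (what df and the XFS tools report)

def fs_size_tib(raid_groups, data_drives_per_group, drive_tb):
    """Usable size of one file system in TiB (parity drives excluded)."""
    return raid_groups * data_drives_per_group * drive_tb * TB / TiB

# 12 data drives per 12+2 group, 3 groups per file system:
print(round(fs_size_tib(3, 12, 2)))  # 2TB drives -> 65 (roughly the "66TB")
print(round(fs_size_tib(3, 12, 3)))  # 3TB drives -> 98 (roughly the "99TB")

# Naive linear scaling of rebuild time with drive size:
rebuild_2tb_hours = 24
print(rebuild_2tb_hours * 3 / 2)     # -> 36.0 hours, if scaling were linear
```

The small gap between 65/98 TiB here and the quoted 66TB/99TB is just the
decimal-vs-binary unit conversion and rounding; as the replies below note,
real rebuild times do not actually scale this simply.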

Thanks for all the hard work all of you do on a file system that continues
to dazzle.

-bob


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Practical file system size question
  2013-04-17 13:18 Practical file system size question Robert Bennett
@ 2013-04-17 13:53 ` Joe Landman
  2013-04-17 14:28   ` Robert Bennett
  2013-04-17 14:02 ` Rafa Griman
  1 sibling, 1 reply; 4+ messages in thread
From: Joe Landman @ 2013-04-17 13:53 UTC (permalink / raw)
  To: xfs

On 04/17/2013 09:18 AM, Robert Bennett wrote:

[...]

> The question is - does anyone have experience with this type of
> configuration and in particular with 3TB HDDs and a file system size of
> 99TB?  The rebuild time with 2TB drives is ~ 24 hours.  Should I expect

Yes, we've built up to 240TB raw (180TB usable) xfs systems for our 
customers on our units.  Very doable, though there are some considerations.

First off, why 3TB when 4TB drives are on the market and fairly stable?  Unless
there is a budgetary limitation you are working against, I'd advise at 
least looking at those.

Second, which LSI controllers are you using?  Firmware updates for the 
controllers to use 3TB and larger drives are pretty much mandatory.  Our
experience with various LSI cards has ranged from not so good to pretty 
good, and this is a firmware-revision-dependent (along with
driver) issue from what we found.

> the rebuild time for the 3TB drives to be ~ 36 hours?

Not in our experience.  Typically 5-10 hours depending upon RAID card
used, RAID configuration, drives chosen, and other factors.  It's pretty
complex, and hard to predict in advance without trying it.


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/siflash
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


* Re: Practical file system size question
  2013-04-17 13:18 Practical file system size question Robert Bennett
  2013-04-17 13:53 ` Joe Landman
@ 2013-04-17 14:02 ` Rafa Griman
  1 sibling, 0 replies; 4+ messages in thread
From: Rafa Griman @ 2013-04-17 14:02 UTC (permalink / raw)
  To: xfs

Hi :)

On Wed, Apr 17, 2013 at 3:18 PM, Robert Bennett
<rbennett@mail.crawford.com> wrote:
>
> We have been running our storage on XFS for over three years now and are
> extremely happy.  We are running each file system on LSI hardware RAID with
> 3 RAID groups of 12+2, with 3 hot spares, and 8 file systems per head node.
> These are running on 2TB SAS HDDs.  The individual file system size is 66TB
> in this configuration.  The time has come to look into moving to 3TB SAS
> HDDs.  With very rudimentary math, this should move us to the neighborhood
> of 99TB per file system.  Our OS is Linux 2.6.32-279.11.1.el6.x86_64.
>
> The question is - does anyone have experience with this type of
> configuration and in particular with 3TB HDDs and a file system size of
> 99TB?  The rebuild time with 2TB drives is ~ 24 hours.  Should I expect the
> rebuild time for the 3TB drives to be ~ 36 hours?
>
> Thanks for all the hard work all of you do on a file system that continues
> to dazzle.


When you say "LSI hardware RAID", I assume it's some sort of
NetApp/Engenio storage array (aka E2600, E2400, E5500). Am I correct?
If so, you should try their new Dynamic Disk Pooling feature:

http://www.netapp.com/us/system/pdf-reader.aspx?m=ds-3309.pdf&cc=us

http://www.netapp.com/us/technology/dynamic-disk-pools.aspx

It lowers your rebuild times quite a lot.

If you mean internal LSI RAID PCIe controllers in a server ... can't
be of much help here :(

HTH

   Rafa


* Re: Practical file system size question
  2013-04-17 13:53 ` Joe Landman
@ 2013-04-17 14:28   ` Robert Bennett
  0 siblings, 0 replies; 4+ messages in thread
From: Robert Bennett @ 2013-04-17 14:28 UTC (permalink / raw)
  To: Joe Landman; +Cc: xfs



Thanks for the quick replies; much appreciated.

We are using LSI MegaRAID 9280-8E cards.  I got bit by the LSI firmware bug
a while back and chased what appeared to be a memory issue before
discovering that a firmware upgrade solved all my problems.  I don't
remember how many grey hairs were a result of that exercise, but it wasn't
just a few.

We build our own storage from Supermicro components.

Why not look at 4TB drives?  Short-sightedness.  Fear.  Baby steps.  Take
your pick.  That said, I will look into it.

Thanks for all the insights.

-bob

Bob Bennett
Director of IT
Crawford Media Services, Inc.
d: 678.536.4906 | e: rbennett@mail.crawford.com | w: www.crawford.com

6 West Druid Hills Drive, NE

Atlanta, GA 30329

view map<http://maps.google.com/maps/ms?ie=UTF8&msa=0&msid=203794978981965716430.00049801625e306a428e2&ll=33.831714,-84.339152&spn=0.001845,0.004289&t=h&z=19&iwloc=00049801625fe86307c32>

Create • Manage • Serve


On Wed, Apr 17, 2013 at 9:53 AM, Joe Landman <joe.landman@gmail.com> wrote:

> On 04/17/2013 09:18 AM, Robert Bennett wrote:
>
> [...]
>
>
>> The question is - does anyone have experience with this type of
>> configuration and in particular with 3TB HDDs and a file system size of
>> 99TB?  The rebuild time with 2TB drives is ~ 24 hours.  Should I expect
>
> Yes, we've built up to 240TB raw (180TB usable) xfs systems for our
> customers on our units.  Very doable, though there are some considerations.
>
> First off, why 3TB when 4TB drives are on the market and fairly stable?  Unless
> there is a budgetary limitation you are working against, I'd advise at
> least looking at those.
>
> Second, which LSI controllers are you using?  Firmware updates for the
> controllers to use 3TB and larger drives are pretty much mandatory.  Our
> experience with various LSI cards has ranged from not so good to pretty
> good, and this is a firmware-revision-dependent (along with driver)
> issue from what we found.
>
>
>> the rebuild time for the 3TB drives to be ~ 36 hours?
>
> Not in our experience.  Typically 5-10 hours depending upon RAID card
> used, RAID configuration, drives chosen, and other factors.  It's pretty
> complex, and hard to predict in advance without trying it.
>
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics, Inc.
> email: landman@scalableinformatics.com
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/siflash
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>


