public inbox for linux-xfs@vger.kernel.org
From: Emmanuel Florac <eflorac@intellique.com>
To: xfs@oss.sgi.com
Subject: Re: creating a new 80 TB XFS
Date: Fri, 24 Feb 2012 15:08:05 +0100	[thread overview]
Message-ID: <20120224150805.243e4906@harpe.intellique.com> (raw)
In-Reply-To: <4F478818.4050803@cape-horn-eng.com>

On Fri, 24 Feb 2012 13:52:40 +0100,
Richard Ems <richard.ems@cape-horn-eng.com> wrote:

> Hi list,
> 
> We are getting now 32 x 3 TB Hitachi SATA HDDs.
> I plan to configure them in a single RAID 6 set with one or two
> hot-standby discs. The raw storage space will then be 28 x 3 TB = 84
> TB. On this one RAID set I will create only one volume.
> Any thoughts on this?

If you'd rather have more safety you could build two 16-drive RAID-6
arrays instead. I'd be somewhat reluctant to make a 30-drive array,
though current drives are apparently quite safe.
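Just to put numbers on the two layouts (a quick sketch using the drive
counts from this thread; RAID-6 reserves two drives' worth of parity
per array):

```shell
#!/bin/sh
# Option A (as proposed): 30-drive RAID-6 plus 2 hot spares.
a_data=$((30 - 2))
echo "Option A: $((a_data * 3)) TB usable, 2 hot spares"

# Option B: two 16-drive RAID-6 arrays using all 32 drives, no spares.
b_data=$(( 2 * (16 - 2) ))
echo "Option B: $((b_data * 3)) TB usable, 0 hot spares"
```

Both come out to the same 84 TB, so the two-array layout effectively
trades the hot spares for smaller failure and rebuild domains.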

> 
> *MKFS*
> We also heavily use ACLs for almost all of our files. Christoph
> Hellwig suggested in a previous mail to use "-i size=512" on XFS
> creation, so my mkfs.xfs would look something like:
> 
> mkfs.xfs -i size=512 -d su=stripe_size,sw=28 -L Backup_2 /dev/sdX1

Looks OK to me.
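For illustration, here's how that command expands with concrete
geometry values. The 64 KiB chunk size is purely hypothetical (use
whatever your RAID controller is actually configured with); sw=28 is
the number of data drives in a 30-drive RAID-6 set. Echoed rather than
run, since mkfs.xfs would destroy the device:

```shell
#!/bin/sh
# Hypothetical geometry: 64 KiB hardware chunk size (su),
# 28 data drives (sw) = 30-drive RAID-6 minus 2 parity drives.
SU=64k
SW=28
DEV=/dev/sdX1   # placeholder device name from the thread
echo mkfs.xfs -i size=512 -d su=$SU,sw=$SW -L Backup_2 $DEV
```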
 
> 
> *MOUNT*
> On mount I will use the options
> 
> mount -o noatime,nobarrier,nofail,logbufs=8,logbsize=256k,inode64
> /dev/sdX1 /mount_point

I think the logbufs/logbsize options match the defaults here. Use
delaylog if applicable; see the XFS FAQ.
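So a trimmed mount line might look like the sketch below (my
assumption: delaylog only needs to be given explicitly on kernels
older than 2.6.39, where it became the default; again echoed rather
than run):

```shell
#!/bin/sh
# Sketch: drop the options believed to match the defaults,
# keep the ones that change behavior, add delaylog for old kernels.
echo mount -o noatime,nobarrier,inode64,delaylog /dev/sdX1 /mount_point
```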
 
> What about the largeio mount option? In which cases would it be
> useful?
> 

If you're mostly writing and reading large files, and I mean really
large (several megabytes and more).
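What largeio does, as I understand it: on a filesystem made with a
stripe width, it makes stat(2) report that stripe width as the
preferred I/O size (st_blksize), so well-behaved tools pick larger
buffers. You can see what a filesystem advertises with GNU stat:

```shell
#!/bin/sh
# %o in GNU stat prints the preferred I/O transfer size hint.
# On an XFS mounted with largeio this would be the stripe width;
# on an ordinary test box it's typically just the block size.
f=$(mktemp)
stat -c %o "$f"
rm -f "$f"
```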

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 21+ messages
2012-02-24 12:52 creating a new 80 TB XFS Richard Ems
2012-02-24 14:08 ` Emmanuel Florac [this message]
2012-02-24 15:43   ` Richard Ems
2012-02-24 16:20     ` Martin Steigerwald
2012-02-24 16:51       ` Stan Hoeppner
2012-02-25 10:59         ` Martin Steigerwald
2012-02-24 16:58     ` Roger Willcocks
2012-02-25 21:57     ` Peter Grandi
2012-02-26  2:57       ` Stan Hoeppner
2012-02-26 16:08         ` Emmanuel Florac
2012-02-26 16:55           ` Joe Landman
2012-02-24 14:52 ` Peter Grandi
2012-02-24 14:57 ` Michael Weissenbacher
2012-02-24 16:05   ` Richard Ems
2012-02-24 15:17 ` Eric Sandeen
2012-10-01 14:28   ` Richard Ems
2012-10-01 14:36     ` Richard Ems
2012-10-01 14:39     ` Eric Sandeen
2012-10-01 14:45       ` Richard Ems
2012-02-27 11:56 ` Michael Monnerie
2012-02-27 12:20   ` Richard Ems
