linux-xfs.vger.kernel.org archive mirror
From: Emmanuel Florac <eflorac@intellique.com>
To: fuser ct1 <fuserct1@gmail.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: Safe XFS limits (100TB+)
Date: Thu, 2 Feb 2017 19:16:27 +0100	[thread overview]
Message-ID: <20170202191627.42c50e67@harpe.intellique.com> (raw)
In-Reply-To: <CAL8yqihyYEaCEzWJLqAKsw1BBOQDm-AYR2dCUeN-X-449jCwzQ@mail.gmail.com>


On Thu, 2 Feb 2017 16:46:09 +0000
fuser ct1 <fuserct1@gmail.com> wrote:

> Hello list.
> 
> Despite searching I couldn't find guidance, or many use cases,
> regarding XFS beyond 100TB.
> 
> Of course the filesystem limits are way beyond this, but I was
> looking for real world experiences...

I manage and support several hosts that I built and set up, some of
them running for many years, with very large XFS volumes.
Recent volumes on XFS v5 promise even more robustness, thanks to
metadata checksums.
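As a sketch, a v5 filesystem with metadata checksums can be created
explicitly (the device path and label below are placeholders, not from
this mail; on recent xfsprogs crc=1 is already the default):

```shell
# Illustrative only: create an XFS v5 filesystem with CRC metadata.
# /dev/md0 and the label "bigvol" are placeholder assumptions.
mkfs.xfs -m crc=1 -L bigvol /dev/md0
```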

Currently in use, under heavy load, are machines with the following
usable volumes, almost all of them on RAID 60 (21 to 28 drives, x2 or x3):

 1 x 490 TB volume
 3 x 390 TB volumes
 1 x 240 TB volume
 2 x 180 TB volumes
 5 x 160 TB volumes
11 x 120 TB volumes
 4 x  90 TB volumes
14 x  77 TB volumes
many, many 50 and 40 TB volumes.

A 2x22-disk RAID 60 is perfectly OK, as long as you're using good
disks. I only use HGST, and the failure rate is so low that I no
longer bother tracking it precisely (maybe 2 or 3 failures a year
among the couple of thousand disks listed above).

Use a recent kernel and recent xfsprogs, and XFS v5 if possible. For
high sequential throughput (video is all about sequential throughput),
don't forget the proper optimisations: use the noop scheduler, and
enlarge nr_requests and read_ahead_kb considerably. Do that and you
should be happy and safe.
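The tuning above boils down to writing a few block-queue sysfs knobs.
A minimal sketch follows; on a real host you would point TARGET at
/sys/block/<dev>/queue, and the exact values here are illustrative
assumptions, not figures from this mail (TARGET defaults to a scratch
directory so the sketch runs anywhere):

```shell
# Queue tuning sketch for high sequential throughput.
# TARGET is a placeholder: set TARGET=/sys/block/<dev>/queue on a real
# host (as root). The values below are illustrative starting points.
TARGET="${TARGET:-$(mktemp -d)}"

echo noop  > "$TARGET/scheduler"      # no kernel-side reordering; the RAID controller handles it
echo 1024  > "$TARGET/nr_requests"    # deeper request queue for large streams
echo 16384 > "$TARGET/read_ahead_kb"  # generous (16 MB) readahead for sequential reads
```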

xfs_repair on a full, fast 100 TB volume took only 15 minutes or so,
and that was after a very, very bad power event (someone connected a
studio light to the UPS and brought everything down, literally in
flames).

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


Thread overview: 7+ messages
2017-02-02 16:46 Safe XFS limits (100TB+) fuser ct1
2017-02-02 17:16 ` Eric Sandeen
2017-02-02 17:52   ` fuser ct1
2017-02-02 17:55     ` Eric Sandeen
2017-02-02 18:16 ` Emmanuel Florac [this message]
2017-02-02 19:14   ` Martin Steigerwald
     [not found]   ` <CAL8yqih36vWy-Z1PESVZOqDEoW8G9=k5LBM0aToe4JhBM755bA@mail.gmail.com>
2017-02-03 17:10     ` Emmanuel Florac
