From: Joe Landman <landman@scalableinformatics.com>
To: xfs@oss.sgi.com
Subject: Re: creating a new 80 TB XFS
Date: Sun, 26 Feb 2012 11:55:23 -0500 [thread overview]
Message-ID: <4F4A63FB.9020408@scalableinformatics.com> (raw)
In-Reply-To: <20120226170820.45730357@galadriel.home>
On 02/26/2012 11:08 AM, Emmanuel Florac wrote:
> Le Sat, 25 Feb 2012 20:57:05 -0600 vous écriviez:
>
>> As others mentioned, an xfs_[check|repair] can take many hours or even
>> days on a multi-terabyte huge metadata filesystem.
>
> Just nitpicking, but I never had such a problem. I've run quite a lot
> of xfs_repair on 40TB+ filesystems, and it rarely took longer than 10 to
> 20 minutes. The important part is having enough RAM (if the system hits
> swap, it makes the check much slower).
We've found that adding the -m X and -P options seems to fix many of the
longer-running issues on large, nearly full, multi-TB file systems. The
biggest one we've repaired was 108TB at ~80% utilization of the
underlying file system, and it took a few hours.
I don't know if the sparse file bug we reported last year (with more
data reported to the list in January this year) has had much attention
(hard to reproduce, I would imagine). But apart from this, repair seems
to work reasonably quickly. I've not seen a repair run for "days" since
we started using the -m X and -P options, even on heavily fragmented
file systems. Possibly Peter has seen this, and he might describe his
observations in this regard.
Repair time is important; there's no doubt of that. To some degree,
repair performance is tied to the speed of accessing the data on the
drives, so if your best-case IO speeds are low, repair performance won't
be terribly good either. Memory size also matters ... we've had some
repairs start swapping (not good) mid-run. Hence the -m X option (for
suitable values of X).
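For what it's worth, a sketch of how one might pick a value for -m: cap
xfs_repair's memory at a fraction of physical RAM so it never pushes the
box into swap, and pass -P to disable inode/dir prefetch. The 3/4 split
and the device name are assumptions for illustration, not a recommendation;
the command is only echoed here, not run.

```shell
#!/bin/sh
# Hypothetical device; substitute your filesystem's block device.
DEV=/dev/sdb1

# Total RAM in MB, read from /proc/meminfo (Linux).
TOTAL_MB=$(awk '/^MemTotal:/ {print int($2/1024)}' /proc/meminfo)

# Rule of thumb (an assumption, not an official figure): give repair
# ~75% of RAM and leave the rest for the OS, so repair never swaps.
REPAIR_MB=$((TOTAL_MB * 3 / 4))

# -m caps xfs_repair's memory use in MB; -P disables prefetching.
# Echoed rather than executed, since repair must run on an unmounted fs.
echo xfs_repair -m "$REPAIR_MB" -P "$DEV"
```

The point is simply that X should be derived from the machine's RAM
rather than guessed, since a repair that swaps can be slower than one
given less memory up front.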
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs