From: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
To: stan@hardwarefreak.com
Cc: Stor?? <289471341@qq.com>, Jeff Liu <jeff.liu@oracle.com>,
xfs@oss.sgi.com
Subject: Re: [xfs_check Out of memory: ]
Date: Sun, 29 Dec 2013 12:23:10 +0100 [thread overview]
Message-ID: <201312291223.10955.arekm@maven.pl> (raw)
In-Reply-To: <52BF72BC.8020002@hardwarefreak.com>
On Sunday 29 of December 2013, Stan Hoeppner wrote:
> On 12/28/2013 5:39 PM, Arkadiusz Miśkiewicz wrote:
> > On Saturday 28 of December 2013, Stan Hoeppner wrote:
> >> On 12/27/2013 5:20 PM, Arkadiusz Miśkiewicz wrote:
> > It's a backup copy that needs to be directly accessible (so you could run
> > production directly from the backup server, for example). That solution
> > won't work.
>
> So it's an rsnapshot server and you have many millions of hardlinks.
Something like that (initially it was just a copy of a few other servers, but
now hardlinks are also in use).
> The obvious solution here is to simply use a greater number of smaller
> XFS filesystems with fewer hardlinks in each. This is by far the best
> way to avoid the xfs_repair memory consumption issue due to massive
> inode count.
> You might even be able to accomplish this using sparse files. This
> would preclude the need to repartition your storage for more
> filesystems, and would allow better utilization of your storage. Dave
> is the sparse filesystem expert so I'll defer to him on whether this is
> possible, or applicable to your workload.
I'll go the SSD way, since making things more complicated just for xfs_repair
isn't sane.
[...]
> > Adding SSD is my only long term option it seems.
>
> It's not a perfect solution by any means, and the SSD you choose matters
> greatly, which is why I recommended the Samsung 840 Pro. More RAM is the
> best option with your current setup, but is not available for your
> system. Using more filesystems with fewer inodes in each is by far the
> best option, WRT xfs_repair and limited memory.
The server is over 30TB, but I used 7TB partitions. Unfortunately it's not
possible to go lower than that, since hardlinks need to stay on the same
filesystem etc.
[...]
> > So now more important question. How to actually estimate these things?
> > Example: 10TB xfs filesystem fully written with files - 10kb each file
> > (html pages, images etc) - web server. How much ram my server would need
> > for repair to succeed?
>
> One method is to simply ask xfs_repair how much memory it needs to
> repair the filesystem. Usage:
Assume I'm planning a new server and need to figure that out without actually
having the hardware or the filesystem. How do I estimate this?
If there is a way, I'll gladly describe it and add it to the XFS FAQ.
The xfs_repair estimate doesn't work either - see below.
> $ umount /mount/point
> $ xfs_repair -n -m 1 -vv /mount/point
> $ mount /mount/point
>
> e.g.
>
> $ umount /dev/sda7
> $ xfs_repair -n -m 1 -vv /dev/sda7
> Phase 1 - find and verify superblock...
> - max_mem = 1024, icount = 85440, imem = 333, dblock =
> 24414775, dmem = 11921
> Required memory for repair is greater that the maximum specified with
> the -m option. Please increase it to at least 60.
> $ mount /dev/sda7
Phase 1 - find and verify superblock...
- max_mem = 1024, icount = 124489792, imem = 486288, dblock =
1953509376, dmem = 953862
Required memory for repair is greater that the maximum specified
with the -m option. Please increase it to at least 1455.
So the minimum is 1.5GB, but real usage was nowhere near that estimate:
xfs_repair needed somewhere around 30-40GB for this fs.
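For rough planning without the hardware, the two outputs above suggest how the
printed minimum is derived: imem ~= icount / 256 KB and dmem ~= dblock / 2048 KB
(both divisors are inferred from the numbers above, not from any xfs_repair
documentation). A back-of-envelope sketch, using the figures from this fs:

```shell
# Rough lower bound for xfs_repair's -m estimate, in MB.
# Divisors 256 and 2048 are inferred from the -m 1 -vv output above;
# as shown, real peak usage can still be 20-30x higher than this.
icount=124489792      # inodes, from the phase 1 output
dblock=1953509376     # data blocks, from the phase 1 output
echo $(( (icount / 256 + dblock / 2048) / 1024 ))
```

This prints 1406, close to the 1455 MB that xfs_repair itself asked for (the
tool presumably adds some fixed overhead) - but, as above, treat it only as a
floor, not as the real repair-time requirement.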
So 2x64GB SSDs (RAID1) for swap should be OK for now, but in the long term
2x128GB seems to be the way to go.
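For reference, the planned setup could be sketched like this - a minimal
example assuming md RAID1 over the two SSDs, with hypothetical device names
(/dev/sdb, /dev/sdc stand in for whatever the SSDs enumerate as):

```shell
# Mirror the two SSDs and use the array as high-priority swap, so
# xfs_repair can spill its tables onto fast storage instead of OOMing.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkswap /dev/md0
swapon --priority 100 /dev/md0    # prefer SSD swap over any disk swap
# persist across reboots:
echo '/dev/md0 none swap sw,pri=100 0 0' >> /etc/fstab
```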
--
Arkadiusz Miśkiewicz, arekm / maven.pl
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs