From: Dave Chinner <david@fromorbit.com>
To: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
Cc: Stor?? <289471341@qq.com>, Jeff Liu <jeff.liu@oracle.com>,
xfs@oss.sgi.com
Subject: Re: [xfs_check Out of memory: ]
Date: Sat, 28 Dec 2013 09:42:12 +1100 [thread overview]
Message-ID: <20131227224212.GK20579@dastard> (raw)
In-Reply-To: <201312270907.22638.arekm@maven.pl>
On Fri, Dec 27, 2013 at 09:07:22AM +0100, Arkadiusz Miśkiewicz wrote:
> On Friday 27 of December 2013, Jeff Liu wrote:
> > On 12/27/2013 14:48, Stor?? wrote:
> > > Hey:
> > >
> > > 20T xfs file system
> > >
> > >
> > >
> > > /usr/sbin/xfs_check: line 28: 14447 Killed
> > > xfs_db$DBOPTS -i -p xfs_check -c "check$OPTS" $1
> >
> > xfs_check is deprecated and please use xfs_repair -n instead.
> >
> > The following back traces show that your system seems to have run out of
> > memory while executing xfs_check; as a result, the snmp daemon and xfs_db were killed.
>
> This reminds me a question...
>
> Could xfs_repair store its temporary data (some of that data, the biggest
> part) on disk instead of in memory?
Where on disk? We can't write to the disk until we've verified all
the free space is really free space, and guess what uses all the
memory? Besides, if the information is not being referenced
regularly (and it usually isn't), then swap space is about as
efficient as any database we might come up with...
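For example, letting the repair run spill to swap instead of hitting the OOM killer could look like this (the file path and size are purely illustrative, not from this thread):

```shell
# Illustrative sketch: create a temporary 8 GiB swap file so xfs_repair's
# in-memory tables can spill to disk rather than triggering the OOM killer.
fallocate -l 8G /var/tmp/repair.swap
chmod 600 /var/tmp/repair.swap
mkswap /var/tmp/repair.swap
swapon /var/tmp/repair.swap

# ... run xfs_repair here ...

# Tear the temporary swap down afterwards.
swapoff /var/tmp/repair.swap
rm /var/tmp/repair.swap
```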
> I don't know if that would make sense, so I'm asking. Not sure if xfs_repair
> needs to access that data frequently (in which case on-disk makes no sense), or
> only for iteration purposes in some later phase (in which case on-disk should work).
>
> Anyway, memory usage of xfs_repair has always been a problem for me (e.g. 16GB
> is not enough for a 7TB fs due to the huge number of files being stored). With
> parallel scan it's obviously even worse.
Yes, your problem is that the filesystem you are checking contains
40+GB of metadata and a large amount of that needs to be kept in
memory from phase 3 through to phase 6. If you really want to add
some kind of database interface to store this information somewhere
else, then I'll review the patches. ;)
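For reference, the non-deprecated check Jeff mentioned, plus the memory-related knobs xfs_repair(8) documents for constrained boxes (the device name is illustrative, and the values are examples, not recommendations):

```shell
# Read-only consistency check (replacement for the deprecated xfs_check);
# /dev/sdb1 is an illustrative device name.
xfs_repair -n /dev/sdb1

# On memory-constrained systems: -m caps xfs_repair's memory use (in MB),
# -P disables inode/directory prefetching, and -o bhash= shrinks the
# buffer cache hash table.
xfs_repair -n -m 4096 -P -o bhash=16384 /dev/sdb1
```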
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 19+ messages
2013-12-27 6:48 [xfs_check Out of memory: ] Stor??
2013-12-27 7:41 ` Jeff Liu
2013-12-27 8:07 ` Arkadiusz Miśkiewicz
2013-12-27 22:42 ` Dave Chinner [this message]
2013-12-27 23:20 ` Arkadiusz Miśkiewicz
2013-12-28 16:55 ` Stan Hoeppner
2013-12-28 17:35 ` Jay Ashworth
2013-12-28 22:01 ` Stan Hoeppner
2013-12-28 23:39 ` Arkadiusz Miśkiewicz
2013-12-29 0:54 ` Stan Hoeppner
2013-12-29 11:23 ` Arkadiusz Miśkiewicz
2013-12-29 9:50 ` Dave Chinner
2013-12-29 11:57 ` Arkadiusz Miśkiewicz
2013-12-29 23:27 ` Dave Chinner
2013-12-30 1:55 ` Stan Hoeppner
2013-12-30 11:27 ` Matthias Schniedermeyer
2013-12-30 13:19 ` Roger Willcocks
2013-12-30 16:25 ` Stan Hoeppner
2013-12-30 17:19 ` Stefan Ring