public inbox for linux-xfs@vger.kernel.org
From: "Daniele P." <daniele@interline.it>
To: xfs@oss.sgi.com
Subject: xfsrepair memory consumption
Date: Tue, 20 Mar 2007 15:32:05 +0100	[thread overview]
Message-ID: <200703201532.06076.daniele@interline.it> (raw)

Hi all,
I'm just asking whether any work has been done, or is in progress, to
improve xfs_repair's memory consumption.
I know the xfs tools use a lot of memory, but IIRC someone wrote on this
mailing list that they would work on this issue.
Now, however, I see that the memory requirements are increasing instead
of decreasing.

I discovered this because running xfs_repair 2.6.20-1 on a 300GB file
system fills up all the memory on a Debian sarge box (256MB RAM/256MB
swap), and eventually the process is killed:

....
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- ensuring existence of lost+found directory
- traversing filesystem starting at / ...
Killed

Next I tried a self-compiled xfsprogs 2.8.18-1 from CVS, but things got
worse.
So I increased the memory to 512MB and made sure not to use the
multi-threaded mode, but still no luck:

....
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
Killed

Finally, using the *old* version 2.6.20-1 with 512MB of memory,
xfs_repair finished successfully, but it used nearly all the available
memory.

More info:

enceladus:~# xfs_info /dev/sdb1
meta-data=/media/iomega300       isize=256    agcount=16, agsize=4578901 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=73262416, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
enceladus:~# df /dev/sdb1
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb1            292918592 164698268 128220324  57% /media/300
enceladus:~# df -i /dev/sdb1
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sdb1            293049664 6511481 286538183    3% /media/300
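[Editorial note for readers hitting the same limit: later xfsprogs
releases grew options to bound xfs_repair's memory use. This is a
hedged sketch, not something from the original thread; check
xfs_repair(8) on your version before relying on these flags, and
adjust the device path and sizes to your setup.]

```shell
# -P disables inode/directory prefetching, trading speed for memory.
# -m caps the approximate memory xfs_repair will use, in megabytes
# (available in newer xfsprogs releases).
xfs_repair -P -m 256 /dev/sdb1

# Alternatively, shrink the buffer cache hash table explicitly:
xfs_repair -o bhash=1024 /dev/sdb1
```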

Thanks in Advance,
Daniele P.

Thread overview: 8+ messages
2007-03-20 14:32 Daniele P. [this message]
2007-03-20 22:13 ` xfsrepair memory consumption David Chinner
2007-03-20 23:48   ` Barry Naujok
2007-03-21  8:35     ` Daniele P.
2007-03-21  8:34   ` Daniele P.
     [not found] <200703210843.TAA08491@larry.melbourne.sgi.com>
2007-03-21 11:08 ` Daniele P.
2007-03-21 21:36   ` Chris Wedgwood
2007-03-23  8:36     ` Daniele P.
