public inbox for linux-xfs@vger.kernel.org
From: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
To: stan@hardwarefreak.com
Cc: Stor?? <289471341@qq.com>, Jeff Liu <jeff.liu@oracle.com>,
	xfs@oss.sgi.com
Subject: Re: [xfs_check Out of memory: ]
Date: Sun, 29 Dec 2013 00:39:16 +0100	[thread overview]
Message-ID: <201312290039.17125.arekm@maven.pl> (raw)
In-Reply-To: <52BF0295.9040301@hardwarefreak.com>

On Saturday 28 of December 2013, Stan Hoeppner wrote:
> On 12/27/2013 5:20 PM, Arkadiusz Miśkiewicz wrote:
> ...
> 
> > - can't add more RAM easily: the machine is at a remote location, uses
> > obsolete DDR2, has no free RAM slots, and so on
> 
> ...
> 
> > So it looks like my future backup servers will need 64GB, 128GB or maybe
> > even more RAM that will be there only for xfs_repair usage. That's a
> > gigantic waste of resources. And there are modern processors that don't
> > work with more than 32GB of RAM - like the "Intel Xeon E3-1220v2" (
> > http://tnij.org/tkqas9e ). So adding RAM means replacing the CPU and
> > likely the mainboard. Fun :)
> 
> ...
> 
> > IMO RAM usage is a real problem for xfs_repair and there has to be some
> > upstream solution other than the "buy more" (and waste more) approach.
> 
> The problem isn't xfs_repair.  

This problem is fully solvable on the xfs_repair side (if disk space outside
of the broken XFS filesystem is available).

> The problem is that you expect this tool
> to handle an infinite number of inodes while using a finite amount of
> memory, or at least somewhat less memory than you have installed.  We
> don't see your problem reported very often which seems to indicate your
> situation is a corner case, or that others simply

It's not common, but it happens from time to time, judging by the questions
asked on #xfs.

> size their systems
> properly without complaint.

I guess having millions of tiny files (a few kB each) is simply not that
common, rather than everyone else "properly sizing their systems".

> If you'd actually like advice on how to solve this, today, with
> realistic solutions, in lieu of the devs recoding xfs_repair for the
> single goal of using less memory, then here are your options:
> 
> 1.  Rewrite or redo your workload to not create so many small files,
>     so many inodes, i.e. use a database

It's a backup copy that needs to be directly accessible (so that you could,
for example, run production directly from the backup server). That solution
won't work.

> 2.  Add more RAM to the system

> 3.  Add an SSD of sufficient size/speed for swap duty to handle
>     xfs_repair requirements for filesystems with arbitrarily high
>     inode counts

That would work... if the server were locally accessible.

Right now my working "solution" (sketched below) is:
- add 40GB of swap space
- stop all other services
- run xfs_repair and leave it for 1-2 days

Adding an SSD for swap duty seems to be my only long-term option.
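
For reference, the swap-file variant looks roughly like this. It is a
minimal sketch: the paths, sizes and the /dev/sdb1 device are placeholders,
and the xfs_repair flags should be double-checked against your xfsprogs
version in xfs_repair(8).

  # Put the swap file on a disk *outside* the broken filesystem
  # (ideally an SSD). Sizes and paths here are examples only.
  dd if=/dev/zero of=/mnt/scratch/repair.swap bs=1M count=40960
  chmod 600 /mnt/scratch/repair.swap
  mkswap /mnt/scratch/repair.swap
  swapon /mnt/scratch/repair.swap

  # The filesystem must be unmounted. Dry-run first: -n makes no
  # changes, -P disables prefetching of inode and directory blocks.
  xfs_repair -n -P /dev/sdb1
  xfs_repair -P /dev/sdb1

  swapoff /mnt/scratch/repair.swap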

> The fact that the systems are remote and that you have no more DIMM
> slots is not a good argument to make in this context.  Every system
> will require some type of hardware addition/replacement/maintenance.
> And this is not the first software "problem" that requires more hardware
> to solve.  If your application that creates these millions of files
> needed twice as much RAM, forcing an upgrade, would you be complaining
> this way on their mailing list?

If that application could do its job without requiring twice the RAM, then
surely I would write to their mailing list about it.

> If so I'd suggest the problem lay
> somewhere other than xfs_repair and that application.

IMO this problem could be solved on the xfs_repair side, but well... someone
would have to write the patches, and that's unlikely to happen.

Now a more important question: how do you actually estimate these
requirements? Example: a 10TB XFS filesystem fully populated with 10kB files
(HTML pages, images, etc.) on a web server. How much RAM would my server
need for the repair to succeed?
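
For illustration, this is the kind of back-of-envelope math I mean, as a
shell sketch. The bytes-per-inode constant is purely an assumed placeholder,
not a figure from the xfs_repair sources:

  # ~1 billion inodes for 10TB of 10kB files; repair memory grows with
  # inode count, but BYTES_PER_INODE below is an assumption, not a
  # measured number.
  FS_BYTES=$((10 * 1000**4))         # 10 TB filesystem
  FILE_BYTES=$((10 * 1000))          # 10 kB per file
  BYTES_PER_INODE=200                # assumed per-inode overhead
  INODES=$((FS_BYTES / FILE_BYTES))
  echo "inodes: $INODES"
  echo "repair memory: about $((INODES * BYTES_PER_INODE / 1024**3)) GiB"

If the xfsprogs version at hand supports the -m (maximum memory, in MB)
option, there is also the trick of asking xfs_repair itself, on the
unmounted filesystem (/dev/sdb1 is again a placeholder):

  xfs_repair -n -m 1 /dev/sdb1

With an absurdly low limit it should refuse and report how much memory it
actually wants - someone would have to confirm that behaviour, though.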

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl


Thread overview: 19+ messages
2013-12-27  6:48 [xfs_check Out of memory: ] Stor??
2013-12-27  7:41 ` Jeff Liu
2013-12-27  8:07   ` Arkadiusz Miśkiewicz
2013-12-27 22:42     ` Dave Chinner
2013-12-27 23:20       ` Arkadiusz Miśkiewicz
2013-12-28 16:55         ` Stan Hoeppner
2013-12-28 17:35           ` Jay Ashworth
2013-12-28 22:01             ` Stan Hoeppner
2013-12-28 23:39           ` Arkadiusz Miśkiewicz [this message]
2013-12-29  0:54             ` Stan Hoeppner
2013-12-29 11:23               ` Arkadiusz Miśkiewicz
2013-12-29  9:50         ` Dave Chinner
2013-12-29 11:57           ` Arkadiusz Miśkiewicz
2013-12-29 23:27             ` Dave Chinner
2013-12-30  1:55           ` Stan Hoeppner
2013-12-30 11:27             ` Matthias Schniedermeyer
2013-12-30 13:19             ` Roger Willcocks
2013-12-30 16:25               ` Stan Hoeppner
2013-12-30 17:19             ` Stefan Ring
