public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: blafoo <mail@blafoo.org>
Cc: xfs@oss.sgi.com
Subject: Re: OOM on quotacheck (again?)
Date: Thu, 20 Sep 2012 06:59:24 +1000	[thread overview]
Message-ID: <20120919205924.GC31501@dastard> (raw)
In-Reply-To: <5059D2B4.8010300@blafoo.org>

On Wed, Sep 19, 2012 at 04:12:04PM +0200, blafoo wrote:
> Hi all,
> 
> for the last couple of days I've been trying to compile a new kernel for
> our webserver platform, which is based on Debian Squeeze.
> 
> Hardware: a mix of Dell PE2850, 2950, R710
> - raid-10 with 4 disks (old setup, PE2850)
> - raid-1 system, raid-10 content (current setup)
> - currently running linux-2.6.37 custom built, vmalloc set to default
> (128MB)

Which implies you are running a 32 bit kernel even on 64 bit CPUs
(e.g. R710).
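A quick way to confirm this, as a sketch (note that `getconf` reports
the userspace ABI, which can differ from the kernel on mixed
32-bit-userspace/64-bit-kernel installs):

```shell
# Word size of the running kernel: i686 -> 32 bit, x86_64 -> 64 bit.
uname -m
# Word size of the userspace ABI: prints 32 or 64.
getconf LONG_BIT
```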

> 
> All systems have an XFS filesystem as their content partition and have
> group quota enabled (no other XFS settings active). The content
> partition varies in size between 250GB and 1TB and contains between
> 3 and 10 million files.
> 
> Every time I try to mount the XFS filesystem and a quotacheck is
> needed, the server goes out of memory (OOM). I can easily reproduce this
> by rebooting the server, resetting the quota flags with

No surprise if you are running an i686 kernel (32 bit). You've got
way more inodes than can fit in the kernel memory segment.
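To put a rough number on that (the per-inode cost below is an assumed
ballpark, not a measured figure): at on the order of a kilobyte of
kernel memory per cached inode, 10 million inodes is roughly 10GiB of
metadata, far beyond the lowmem an i686 kernel can address:

```shell
# Back-of-envelope only: ~1 KiB of kernel memory per cached inode
# is an assumption for illustration, not a measured value.
inodes=10000000
per_inode=1024                                   # bytes, assumed
echo "$(( inodes * per_inode / 1048576 )) MiB"   # -> 9765 MiB
```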

> xfs_db -x -c 'sb 0' -c 'write qflags 0'
> 
> and rerun the quota-check.
> 
> This is true for various kernels, but not all. What I've tried so far:
> 
> 2.6.37.x - fails with OOM
> 2.6.39.4 - surprisingly works (see below why)
> 3.2.29 - fails with OOM
> 3.4.10 - fails with OOM

8a00ebe xfs: Ensure inode reclaim can run during quotacheck

$ git describe --contains 8a00ebe
v3.5-rc1~91^2~54

So the OOM problem was fixed in 3.5.
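For anyone unfamiliar with the command above: `git describe
--contains` names a commit relative to the first tag that contains it,
which is how a fix gets mapped to a release. A self-contained
demonstration on a throwaway repo (the commits and the tag name here
are made up for illustration):

```shell
# Scratch repo: commit a "fix", commit more work, tag a release,
# then ask which tag first contains the fix.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git -c user.name=x -c user.email=x@y commit -q --allow-empty -m "fix"
fix=$(git rev-parse HEAD)
git -c user.name=x -c user.email=x@y commit -q --allow-empty -m "later work"
git -c user.name=x -c user.email=x@y tag -a -m "release" v3.5-rc1
git describe --contains "$fix"    # -> v3.5-rc1~1 (one commit before the tag)
```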

> 3.6.0-rc5 - fails with a vmalloc error (XFS (sda7): xfs_buf_get_map:
> failed to map pages); with vmalloc=256 the system hangs on mount indefinitely.

Running an x86-64 kernel will make the vmalloc problem go away.
There's very little we can do about the limited vmalloc address
space on i686 kernels. As it is, the known recent regression in this
space:

bcf62ab xfs: Fix overallocation in xfs_buf_allocate_memory()

$ git describe --contains bcf62ab
v3.6-rc1~42^2~35

was fixed in 3.6-rc1, so I'm not sure why you'd be running out of
vmalloc space; there shouldn't be any metadata that is vmalloc'd in
your filesystem configuration...
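For what it's worth, the size of the vmalloc arena can be inspected
directly on any Linux box; a sketch:

```shell
# VmallocTotal is the size of the kernel's vmalloc address space.
# On i686 it defaults to ~128 MiB (or whatever vmalloc= sets);
# on x86-64 it is effectively unlimited.
grep '^Vmalloc' /proc/meminfo
```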

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 10+ messages
2012-09-19 14:12 OOM on quotacheck (again?) blafoo
2012-09-19 20:59 ` Dave Chinner [this message]
2012-09-20  9:32   ` Volker
2012-09-24 13:21     ` Dave Chinner
2012-09-24 14:47       ` Volker
2012-10-02 16:29         ` Volker
2012-10-02 20:09           ` Dave Chinner
2012-10-02 20:49             ` Volker
2012-10-02 22:15               ` Dave Chinner
2012-10-04 14:19                 ` Volker
