public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Christian Kujau <lists@nerdbynature.de>
Cc: LKML <linux-kernel@vger.kernel.org>, xfs@oss.sgi.com
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks
Date: Mon, 25 Apr 2011 09:46:55 +1000	[thread overview]
Message-ID: <20110424234655.GC12436@dastard> (raw)
In-Reply-To: <alpine.DEB.2.01.1104211841510.18728@trent.utfs.org>

On Thu, Apr 21, 2011 at 06:57:16PM -0700, Christian Kujau wrote:
> Hi,
> 
> after the block layer regression[0] seemed to be fixed, the machine 
> appeared to be running fine. But after putting some disk I/O load on 
> the system (a PowerBook G4) it became unresponsive, I/O wait went up 
> high, and I could see that the OOM killer was killing processes. 
> Logging in via SSH was sometimes possible, but each session was killed 
> shortly after, so I could not do much.
> 
> The box finally rebooted itself, the logfile recorded something xfs 
> related in the first backtrace, hence I'm cc'ing the xfs list too:
> 
> du invoked oom-killer: gfp_mask=0x842d0, order=0, oom_adj=0, oom_score_adj=0
> Call Trace:
> [c0009ce4] show_stack+0x70/0x1bc (unreliable)
> [c008f508] T.528+0x74/0x1cc
> [c008f734] T.526+0xd4/0x2a0
> [c008fb7c] out_of_memory+0x27c/0x360
> [c0093b3c] __alloc_pages_nodemask+0x6f8/0x708
> [c00c00b4] new_slab+0x244/0x27c
> [c00c0620] T.879+0x1cc/0x37c
> [c00c08d0] kmem_cache_alloc+0x100/0x108
> [c01cb2b8] kmem_zone_alloc+0xa4/0x114
> [c01a7d58] xfs_inode_alloc+0x40/0x13c
> [c01a8218] xfs_iget+0x258/0x5a0
> [c01c922c] xfs_lookup+0xf8/0x114
> [c01d70b0] xfs_vn_lookup+0x5c/0xb0
> [c00d14c8] d_alloc_and_lookup+0x54/0x90
> [c00d1d4c] do_lookup+0x248/0x2bc
> [c00d33cc] path_lookupat+0xfc/0x8f4
> [c00d3bf8] do_path_lookup+0x34/0xac
> [c00d53e0] user_path_at+0x64/0xb4
> [c00ca638] vfs_fstatat+0x58/0xbc
> [c00ca6c0] sys_fstatat64+0x24/0x50
> [c00124f4] ret_from_syscall+0x0/0x38
>  --- Exception: c01 at 0xff4b050
>    LR = 0x10008cf8
> 
> 
> This is with today's git (91e8549bde...); full log & .config on: 
> 
>   http://nerdbynature.de/bits/2.6.39-rc4/oom/

Your memory is full of XFS inodes, and it doesn't appear that memory
reclaim has kicked in at all to free any - the numbers just keep
growing at 1-2000 inodes/s.
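[As an aside, one way to sample that growth is to pull the xfs_inode
line out of /proc/slabinfo once a second; a minimal sketch, with the
helper name invented here, assuming the standard slabinfo layout where
field 1 is the cache name and field 2 is <active_objs>:]

```shell
#!/bin/sh
# Sketch (helper name invented here): print the active-object count for
# the xfs_inode slab from a slabinfo-format file. In /proc/slabinfo,
# field 1 is the cache name and field 2 is <active_objs>.
xfs_inode_count() {
    awk '$1 == "xfs_inode" { print $2 }' "$1"
}

# Against a live kernel (reading /proc/slabinfo usually needs root):
#   while sleep 1; do xfs_inode_count /proc/slabinfo; done
```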

I'd say they are not being reclaimed because the VFS hasn't let go
of them yet. Can you also dump /proc/sys/fs/{dentry,inode}-state so
we can see if the VFS has released the inodes such that they can be
reclaimed by XFS?
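[For anyone following along, dumping those two files is just a cat;
a small sketch, with the function name invented here. dentry-state
begins with nr_dentry and nr_unused; inode-state begins with nr_inodes
and nr_unused:]

```shell
#!/bin/sh
# Sketch (function name invented here): dump the VFS dentry/inode state
# files requested above. On non-Linux or restricted systems the files
# may be absent, so fall back to a note instead of failing.
dump_vfs_state() {
    for f in /proc/sys/fs/dentry-state /proc/sys/fs/inode-state; do
        echo "== $f =="
        if [ -r "$f" ]; then
            cat "$f"
        else
            echo "(not readable on this system)"
        fi
    done
}

dump_vfs_state
```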

BTW, what are your mount options? If it is the problem I suspect it
is, then using noatime will stop it from occurring...
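[For reference, enabling noatime looks like the following; the device
and mountpoint are placeholders, not taken from this thread:]

```shell
# Placeholders, not from this thread: remount a mounted XFS filesystem
# with noatime so plain reads and lookups stop dirtying inode atimes:
#   mount -o remount,noatime /dev/sda3 /mnt/data
#
# Or make it persistent via an /etc/fstab entry:
#   /dev/sda3  /mnt/data  xfs  noatime  0  0
```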

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 42+ messages
2011-04-22  1:57 2.6.39-rc4+: oom-killer busy killing tasks Christian Kujau
2011-04-22  2:58 ` Minchan Kim
2011-04-22  3:03   ` Christian Kujau
2011-04-22 17:41   ` Christian Kujau
2011-04-22 18:46     ` Christian Kujau
2011-04-22 22:47       ` Minchan Kim
2011-04-24 23:46 ` Dave Chinner [this message]
2011-04-25  5:51   ` Christian Kujau
2011-04-25  7:19     ` Christian Kujau
2011-04-26 15:14       ` Christian Kujau
2011-04-27  2:26       ` Dave Chinner
2011-04-27  7:46         ` Christian Kujau
2011-04-27 10:28           ` Dave Chinner
2011-04-27 23:16             ` Minchan Kim
2011-04-27 23:56               ` Dave Chinner
2011-04-28 17:30             ` Christian Kujau
2011-04-28 23:37               ` Dave Chinner
2011-04-29 17:32                 ` Christian Kujau
2011-04-29 19:58                 ` Christian Kujau
2011-04-29 20:17                   ` Markus Trippelsdorf
2011-04-29 20:20                     ` Christian Kujau
2011-04-29 20:21                       ` Markus Trippelsdorf
2011-04-30  0:17                     ` Christian Kujau
2011-05-01  8:01                       ` Dave Chinner
2011-05-02  4:59                         ` Christian Kujau
2011-05-02 12:19                           ` Dave Chinner
2011-05-02 19:59                             ` Christian Kujau
2011-05-03  0:51                               ` Dave Chinner
2011-05-03  4:04                                 ` Christian Kujau
2011-05-03  6:36                                   ` Dave Chinner
2011-05-03 20:53                                 ` Christian Kujau
2011-05-04  0:46                                   ` Christian Kujau
2011-05-04  1:51                                     ` Christian Kujau
2011-05-04  7:36                                     ` Dave Chinner
2011-05-04 11:12                                       ` Dave Chinner
2011-05-04 19:10                                         ` Christian Kujau
2011-05-04 23:15                                           ` Dave Chinner
2011-05-05  2:07                                             ` Christian Kujau
2011-05-02  9:26                         ` Christian Kujau
2011-05-02 12:38                           ` Dave Chinner
2011-04-25  8:02   ` Christian Kujau
2011-04-25  9:50     ` Christian Kujau
