From: Dave Chinner <david@fromorbit.com>
To: Marc Lehmann <schmorp@schmorp.de>
Cc: xfs@oss.sgi.com
Subject: Re: frequent kernel BUG and lockups - 2.6.39 + xfs_fsr
Date: Sun, 7 Aug 2011 00:20:05 +1000
Message-ID: <20110806142005.GG3162@dastard>
In-Reply-To: <20110806122556.GB20341@schmorp.de>

On Sat, Aug 06, 2011 at 02:25:56PM +0200, Marc Lehmann wrote:
> I get frequent (for servers) lockups and crashes when using 2.6.39. I saw the
> same problems using 3.0.0rc5, 5 and 6, and I think also 2.6.38. I don't see
> these lockups on 2.6.30 or 2.6.26 (all the respective latest debian kernels).
> 
> The symptom differs slightly - sometimes I get thousands of backtraces
> before the machine locks up, usually I get only one, and either the
> machine locks up completely or only the processes using the filesystem in
> question (presumably) lock up - all unkillable.
> 
> The backtraces look all very similar:
> 
>    http://ue.tst.eu/85b9c9f66e36dda81be46892661c5bd0.txt

Tainted kernel. Please reproduce without the NVidia binary drivers.
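
As a quick check, the taint state can be read from /proc (a sketch;
a non-zero value means the kernel is tainted, and bit 0 set indicates
a proprietary module such as the NVidia driver is loaded):

    # non-zero output means the kernel is tainted; the value-1 bit
    # corresponds to a proprietary module being loaded
    cat /proc/sys/kernel/tainted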

> this is from a desktop system - it tends to be harder to get these from
> servers.
> 
> all the backtraces crash with a null pointer dereference in xfs_iget, or
> in xfs_trans_log_inode, and always for process xfs_fsr.

And when you do, please record an event trace of the
xfs_swap_extent* trace points while xfs_fsr is running and triggers
a crash. That will tell me whether xfs_fsr is corrupting inodes.
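
For concreteness, one way to capture those trace points is via the
ftrace debugfs interface (a sketch only - it assumes debugfs is
mounted at /sys/kernel/debug and the kernel was built with tracepoint
support; the mount point and log file name are placeholders):

    # enable every xfs_swap_extent* trace point (set_event takes globs)
    echo 'xfs:xfs_swap_extent*' > /sys/kernel/debug/tracing/set_event
    # stream the trace to a file while reproducing the crash
    cat /sys/kernel/debug/tracing/trace_pipe > swap_extent.log &
    # run the defragmenter on the affected filesystem until it trips
    xfs_fsr -v /mnt/scratch

If trace-cmd is installed, "trace-cmd record -e 'xfs:xfs_swap_extent*'"
wraps the same interface.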

> I haven't seen a crash without xfs_fsr.

Then don't use xfs_fsr until we know whether it is the cause of the
problem (except when trying to reproduce it).

And as I always ask - why do you need to run xfs_fsr so often? Do
you really have filesystems that fragment quickly, or are you just
running it from a cron job because having on-line defragmentation
is what all the cool kids do? ;) If you are getting fragmentation,
what is the workload that is causing it?
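
As an aside, a quick way to see whether a filesystem is actually
fragmented is xfs_db's frag command (a sketch - the device name is a
placeholder, and note that it reports file fragmentation, not free
space fragmentation):

    # read-only fragmentation report against the block device
    xfs_db -r -c frag /dev/sdX1
    # prints something like:
    #   actual 1234, ideal 1100, fragmentation factor 10.86%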

Cheers,

Dave.

-- 
Dave Chinner
david@fromorbit.com



Thread overview: 18+ messages
2011-08-06 12:25 frequent kernel BUG and lockups - 2.6.39 + xfs_fsr Marc Lehmann
2011-08-06 14:20 ` Dave Chinner [this message]
2011-08-07  1:42   ` Marc Lehmann
2011-08-07 10:26     ` Dave Chinner
2011-08-08 19:02       ` Marc Lehmann
2011-08-09 10:10         ` Michael Monnerie
2011-08-09 11:15           ` Marc Lehmann
2011-08-10  6:59             ` Michael Monnerie
2011-08-11 22:04               ` Marc Lehmann
2011-08-12  4:05                 ` Dave Chinner
2011-08-26  8:08                   ` Marc Lehmann
2011-08-31 12:45                     ` Dave Chinner
2011-08-10 14:16             ` Dave Chinner
2011-08-11 22:07               ` Marc Lehmann
2011-08-09  9:16       ` Marc Lehmann
2011-08-09 11:35         ` Dave Chinner
2011-08-09 16:35           ` Marc Lehmann
2011-08-09 22:31             ` Dave Chinner
