From: Lachlan McIlroy <lmcilroy@redhat.com>
To: Patrick Schreurs <patrick@news-service.com>
Cc: linux-xfs@oss.sgi.com, Tommy van Leeuwen <tommy@news-service.com>,
Eric Sandeen <sandeen@sandeen.net>
Subject: Re: 2.6.30 panic - xfs_fs_destroy_inode
Date: Tue, 23 Jun 2009 04:17:13 -0400 (EDT)
Message-ID: <1587994907.388291245745033392.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
In-Reply-To: <4A408316.2070903@news-service.com>
[-- Attachment #1: Type: text/plain, Size: 4585 bytes --]
It looks to me like xfs_reclaim_inode() has returned EAGAIN because the
XFS_RECLAIM flag was set on the xfs inode. This implies we are trying
to reclaim an inode that is already in the process of being reclaimed.
I'm not sure how this happened, but the fix may be as simple as ignoring
this error, since the reclaim is already in progress.
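
For illustration, a minimal sketch of that idea, assuming the 2.6.30-era
xfs_fs_destroy_inode() in fs/xfs/linux-2.6/xfs_super.c, which panics on any
non-zero return from xfs_reclaim(). This is untested and the exact function
body may differ in your tree:

static void
xfs_fs_destroy_inode(
	struct inode	*inode)
{
	struct xfs_inode	*ip = XFS_I(inode);
	int			error;

	XFS_STATS_INC(vn_reclaim);

	error = xfs_reclaim(ip);
	if (error == EAGAIN) {
		/*
		 * XFS_RECLAIM was already set: another thread is
		 * reclaiming this inode, so there is nothing more
		 * for us to do here.
		 */
		return;
	}
	if (error)
		panic("%s: cannot reclaim 0x%p\n", __func__, inode);
}

The open question is still how two reclaims of the same inode race in the
first place; the above only papers over the symptom.
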
----- "Patrick Schreurs" <patrick@news-service.com> wrote:
> Another one (see attachment). This time on a server with SAS drives and
> without the lazy-count option:
>
> meta-data=/dev/sdb               isize=256    agcount=4, agsize=27471812 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=109887246, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=32768, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> We really don't want to roll back to 2.6.28.x, as this doesn't solve the
> issue.
>
> Any hint would be appreciated.
>
> -Patrick
>
> Patrick Schreurs wrote:
> > Just had another one. It's likely we'll have to downgrade to 2.6.28.x.
> >
> > These servers have 28 SCSI disks mounted separately (JBOD). The workload
> > is basically i/o load (90% read, 10% write) from these disks. The
> > servers are not extremely busy (not overloaded).
> >
> > xfs_info from a random disk:
> >
> > sb02:~# xfs_info /dev/sdb
> > meta-data=/dev/sdb               isize=256    agcount=4, agsize=18310547 blks
> >          =                       sectsz=512   attr=2
> > data     =                       bsize=4096   blocks=73242187, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0
> > log      =internal               bsize=4096   blocks=32768, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> >
> > As you can see, we use lazy-count=1. Mount options aren't very exotic:
> > rw,noatime,nodiratime
> >
> > We are seeing these panics on at least 3 different servers.
> >
> > If you have any hints on how to investigate, we would greatly
> > appreciate it.
> >
> > -Patrick
> >
> > Eric Sandeen wrote:
> >> Others aren't hitting this; what sort of workload are you running when
> >> you hit it?
> >>
> >> I have not had time to look at it yet, but some sort of test case may
> >> greatly help.
> >>
> >> -Eric
> >>
> >> On Jun 20, 2009, at 5:18 AM, Patrick Schreurs
> >> <patrick@news-service.com> wrote:
> >>
> >>> Unfortunately another panic. See attachment.
> >>>
> >>> Would love to receive some advice on this issue.
> >>>
> >>> Thanks in advance.
> >>>
> >>> -Patrick
> >>>
> >>> Patrick Schreurs wrote:
> >>>> Eric Sandeen wrote:
> >>>>> Patrick Schreurs wrote:
> >>>>>> Hi all,
> >>>>>>
> >>>>>> We are experiencing kernel panics on servers running 2.6.29(.1)
> >>>>>> and 2.6.30. I've included two attachments to demonstrate.
> >>>>>>
> >>>>>> The error is:
> >>>>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
> >>>>>>
> >>>>>> OS is 64-bit Debian lenny.
> >>>>>>
> >>>>>> Is this a known issue? Any comments on this?
> >>>>>
> >>>>> It's not known to me. Was this a recent upgrade? (IOW, did it start
> >>>>> with .29(.1)?)
> >>>> We've seen this on 2 separate servers. It probably happened more
> >>>> often, but we didn't capture the panic message. One server was
> >>>> running 2.6.29.1, the other server was running 2.6.30. Currently
> >>>> we've updated all similar servers to 2.6.30.
> >>>> If we can provide you with more details to help fix this issue,
> >>>> please let us know.
> >>>> -Patrick
> >>> <sb06-20090619.png>
> >>
> >
>
>
> [image/jpeg:sb08-20090623.jpg]
>
[-- Attachment #2: sb08-20090623.jpg --]
[-- Type: image/jpeg, Size: 70970 bytes --]
[-- Attachment #3: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs