public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Zorro Lang <zlang@redhat.com>
Cc: linux-xfs@vger.kernel.org, fstests@vger.kernel.org,
	"Darrick J. Wong" <djwong@kernel.org>,
	Carlos Maiolino <carlos@maiolino.me>
Subject: Re: [Bug report][fstests generic/047] Internal error !(flags & XFS_DABUF_MAP_HOLE_OK) at line 2572 of file fs/xfs/libxfs/xfs_da_btree.c. Caller xfs_dabuf_map.constprop.0+0x26c/0x368 [xfs]
Date: Tue, 7 Nov 2023 07:33:50 +1100	[thread overview]
Message-ID: <ZUlNroz8l5h1s1oF@dread.disaster.area> (raw)
In-Reply-To: <20231106192627.ilvijcbpmp3l3wcz@dell-per750-06-vm-08.rhts.eng.pek2.redhat.com>

On Tue, Nov 07, 2023 at 03:26:27AM +0800, Zorro Lang wrote:
> On Mon, Nov 06, 2023 at 05:13:30PM +1100, Dave Chinner wrote:
> > On Sun, Oct 29, 2023 at 12:11:22PM +0800, Zorro Lang wrote:
> > > Hi xfs list,
> > > 
> > > Recently I have been consistently hitting XFS corruption when running
> > > fstests generic/047 [1], and it shows more failures in dmesg [2], e.g.:
> > 
> > OK, g/047 is an fsync test.
> > 
> > > 
> > >   XFS (loop1): Internal error !(flags & XFS_DABUF_MAP_HOLE_OK) at line 2572 of file fs/xfs/libxfs/xfs_da_btree.c.  Caller xfs_dabuf_map.constprop.0+0x26c/0x368 [xfs]
> > 
> > OK, a directory block index translated to a hole in the file
> > mapping. That's bad...
....
> > > _check_xfs_filesystem: filesystem on /dev/loop1 is inconsistent (r)
> > > *** xfs_repair -n output ***
> > > Phase 1 - find and verify superblock...
> > > Phase 2 - using internal log
> > >         - zero log...
> > >         - scan filesystem freespace and inode maps...
> > >         - found root inode chunk
> > > Phase 3 - for each AG...
> > >         - scan (but don't clear) agi unlinked lists...
> > >         - process known inodes and perform inode discovery...
> > >         - agno = 0
> > > bad nblocks 9 for inode 128, would reset to 0
> > > no . entry for directory 128
> > > no .. entry for root directory 128
> > > problem with directory contents in inode 128
> > > would clear root inode 128
> > > bad nblocks 8 for inode 131, would reset to 0
> > > bad nblocks 8 for inode 132, would reset to 0
> > > bad nblocks 8 for inode 133, would reset to 0
> > > ...
> > > bad nblocks 8 for inode 62438, would reset to 0
> > > bad nblocks 8 for inode 62439, would reset to 0
> > > bad nblocks 8 for inode 62440, would reset to 0
> > > bad nblocks 8 for inode 62441, would reset to 0
> > 
> > Yet all the files, including the data files that were fsync'd,
> > are bad.
> > 
> > Apparently the journal has been recovered, but lots of metadata
> > updates that should have been in the journal are missing after
> > recovery has completed? That doesn't make a whole lot of sense.
> > When did these tests start failing? Can you run a bisect?
> 
> Hi Dave,
> 
> Thanks for your reply :) I spent a long time on a kernel bisect, but
> found nothing ... Then suddenly I discovered it started failing after
> an xfsprogs change [1].
> 
> Although that change is not the root cause of this bug (on s390x), it
> just enabled "nrext64" by default, which I had never tested on s390x
> before. For now, we know this is an issue with that feature, and it
> shows up only on s390x.

That's not good. Can you please determine if this is a zero-day bug
with the nrext64 feature? I think it was merged in 5.19, so if you
could try to reproduce it on 5.18 and 5.19 kernels first, that
would be handy.
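Since the xfsprogs change only flipped the mkfs default, it should also be
possible to isolate the feature on a single kernel by toggling it
explicitly at mkfs time. A rough sketch (/dev/loop1 is the device from the
original report; substitute your own scratch device):

```shell
# Toggle large extent counters explicitly rather than relying on the
# mkfs.xfs default, then rerun generic/047 against each filesystem.
mkfs.xfs -f -i nrext64=0 /dev/loop1   # classic 32-bit extent counters
mkfs.xfs -f -i nrext64=1 /dev/loop1   # large extent counters enabled
```

If only the nrext64=1 filesystem corrupts, that confirms the feature (and
not some other part of the xfsprogs update) is the trigger.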

Also, from your s390 kernel build, can you get the pahole output
for struct xfs_dinode from both a good kernel and a bad kernel?
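Something along these lines should do; the paths are only examples, and
the kernel (or module) needs to be built with debug info for pahole to
find the type:

```shell
# Print the compiled layout of struct xfs_dinode, including field
# offsets, sizes, and padding holes. Requires CONFIG_DEBUG_INFO.
pahole -C xfs_dinode fs/xfs/xfs.ko    # if xfs is built as a module
pahole -C xfs_dinode vmlinux          # if xfs is built in
```

Comparing the two layouts would show whether the on-disk inode structure
is being laid out differently on s390x between the good and bad builds.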

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 19+ messages
2023-10-29  4:11 [Bug report][fstests generic/047] Internal error !(flags & XFS_DABUF_MAP_HOLE_OK) at line 2572 of file fs/xfs/libxfs/xfs_da_btree.c. Caller xfs_dabuf_map.constprop.0+0x26c/0x368 [xfs] Zorro Lang
2023-11-06  6:13 ` Dave Chinner
2023-11-06 19:26   ` Zorro Lang
2023-11-06 20:33     ` Dave Chinner [this message]
2023-11-06 22:20       ` Darrick J. Wong
2023-11-07  8:05       ` Zorro Lang
2023-11-07  8:13         ` Dave Chinner
2023-11-07 15:13           ` Zorro Lang
2023-11-08  6:38             ` Dave Chinner
     [not found]               ` <CAN=2_H+CdEK_rEUmYbmkCjSRqhX2cwi5yRHQcKAmKDPF16vqOw@mail.gmail.com>
2023-11-09  6:14                 ` Dave Chinner
2023-11-09 14:09                   ` Zorro Lang
2023-11-09 23:13                     ` Dave Chinner
2023-11-10  1:36                       ` Zorro Lang
2023-11-10  2:03                         ` Dave Chinner
2023-11-10  4:32                           ` Darrick J. Wong
2023-11-10  7:34                           ` Christoph Hellwig
2023-11-10 13:56                           ` Zorro Lang
2023-11-14 11:17                           ` edward6
2023-11-07  8:29       ` Christoph Hellwig
