* [Bug 217769] XFS crash on mount on kernels >= 6.1
2023-08-07 16:35 [Bug 217769] New: XFS crash on mount on kernels >= 6.1 bugzilla-daemon
From: bugzilla-daemon @ 2023-08-07 17:17 UTC (permalink / raw)
To: linux-xfs
https://bugzilla.kernel.org/show_bug.cgi?id=217769
Eric Sandeen (sandeen@sandeen.net) changed:
  Added CC: sandeen@sandeen.net
--- Comment #1 from Eric Sandeen (sandeen@sandeen.net) ---
Please try running xfs_repair on the filesystems in question and capture the
output. (You can use xfs_repair -n to do a dry run if you prefer; it will yield
the same basic information.)
My guess is that you will find complaints about unlinked inodes - please let us
know.
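A dry run might look like the following sketch (the device name is a
placeholder, not taken from this report; the filesystem must be unmounted
first):

```shell
# Unmount the filesystem first; xfs_repair will not run on a mounted fs.
umount /dev/sdXN

# -n: no-modify mode - report what would be fixed without writing anything.
xfs_repair -n /dev/sdXN

# If the dry run reports problems (e.g. unlinked inodes), run it for real:
xfs_repair /dev/sdXN
```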
--
You may reply to this email to add a comment.
You are receiving this mail because:
You are watching the assignee of the bug.
From: bugzilla-daemon @ 2023-08-07 18:06 UTC
--- Comment #2 from Mariusz Gronczewski (xani666@gmail.com) ---
It did, thanks for the help! Why is that reported as "corruption of in-memory
data"?
The filesystem on the other machine also had exactly 3 disconnected inodes:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
agi unlinked bucket 0 is 559936 in ag 2 (inode=34114368)
agi unlinked bucket 42 is 175466 in ag 2 (inode=33729898)
agi unlinked bucket 53 is 198581 in ag 2 (inode=33753013)
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected inode 33729898, would move to lost+found
disconnected inode 33753013, would move to lost+found
disconnected inode 34114368, would move to lost+found
Phase 7 - verify link counts...
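As a side note on the numbers in that output: repair prints both the per-AG
inode number (the "bucket ... is N" value) and the absolute inode number. The
relationship can be sketched in a few lines of Python; the 24-bit per-AG shift
below is inferred from this particular filesystem's numbers (on a real
filesystem it comes from the superblock geometry), so treat it as an
illustration rather than a general constant.

```python
# Hypothetical helper: combine an allocation group (AG) number and a
# per-AG inode number (agino) into an absolute XFS inode number.
# The shift is filesystem-specific; 24 bits matches the values in the
# xfs_repair output above.
AGINO_BITS = 24

def absolute_inode(agno: int, agino: int, agino_bits: int = AGINO_BITS) -> int:
    return (agno << agino_bits) | agino

# The three unlinked inodes reported by xfs_repair, all in AG 2:
for agino, expected in [(559936, 34114368), (175466, 33729898), (198581, 33753013)]:
    assert absolute_inode(2, agino) == expected
```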
From: bugzilla-daemon @ 2023-08-07 19:01 UTC
--- Comment #3 from Eric Sandeen (sandeen@sandeen.net) ---
It's essentially an unexpected/inconsistent in-memory state, as opposed to an
on-disk structure that was found to be corrupt.
I presume that it boots ok now post-repair?
Do you know if this was the root or /boot filesystem, or something else? It's
still a mystery how filesystems get into this state; we should never have a
clean filesystem that requires no log recovery but still has unlinked inodes ...
recovery is supposed to clear that.
It may have persisted on this filesystem for a very long time and it's just
recent code changes that have started tripping over it, but I've always had a
hunch that /boot seems to show the problem more often.
From: bugzilla-daemon @ 2023-08-07 19:40 UTC
--- Comment #4 from Mariusz Gronczewski (xani666@gmail.com) ---
> It's essentially an unexpected/inconsistent in-memory state, as opposed to an
> on-disk structure that was found to be corrupt.
Shouldn't that also mark the filesystem as "dirty"? The problem is that this
basically makes the system unbootable without intervention: the OS thinks the
filesystem is clean so it doesn't run xfs_repair, the driver then crashes
without marking it unclean, the machine reboots, and the process repeats. The
crash also takes down every mounted XFS filesystem, which means that even if
the log partition doesn't have the problem, none of the logs will persist. I
had to reformat /var/log as ext4 just to gather them on my laptop.
> I presume that it boots ok now post-repair?
Yes
> Do you know if this was the root or /boot filesystem or something else? It's
> still a mystery about how filesystems get into this state; we should never
> have a clean filesystem that requires no log recovery, but with unlinked
> inodes ... recovery is supposed to clear that.
It was root in both cases; we keep /boot on ext4.
So far (well, we have a few hundred more machines to upgrade) I've only seen
this on old ones; might it be some bug that was since fixed but left its mark
on the filesystem?
> It may have persisted on this filesystem for a very long time and it's just
> recent code changes that have started tripping over it, but I've always had a
> hunch that /boot seems to show the problem more often.
That would track; I've only seen this on old machines (I think they were
formatted around the 4.9 kernel release, some even earlier). I just had another
case on a machine, but this time reading certain files triggered it.
From: bugzilla-daemon @ 2023-08-07 20:15 UTC
--- Comment #5 from Eric Sandeen (sandeen@sandeen.net) ---
Ok, thanks. The other instance of this problem I saw recently likely also
started on a rather old kernel.
From: bugzilla-daemon @ 2023-08-07 22:34 UTC
--- Comment #6 from Mariusz Gronczewski (xani666@gmail.com) ---
I still have an image of that VM with the problem, if you want me to check
something on it.
From: bugzilla-daemon @ 2023-08-08 13:39 UTC
--- Comment #7 from Eric Sandeen (sandeen@sandeen.net) ---
What might be most useful is to create an xfs_metadump image of the problematic
filesystem (with -o if you are ok with showing filenames in the clear) and from
there we can examine things.
A metadump image is metadata only, no file data, and compresses well. This can
then be turned back into a filesystem image for analysis.
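That workflow might look like the sketch below (device and file names are
placeholders; as noted above, -o leaves filenames in the clear, so omit it if
you want them obfuscated):

```shell
# Capture metadata only (no file data) from the unmounted device.
# -o: do not obfuscate filenames.
xfs_metadump -o /dev/sdXN fs.metadump

# Metadump images compress well before uploading.
gzip fs.metadump

# On the analysis side, restore the image into a sparse filesystem image...
xfs_mdrestore fs.metadump fs.img

# ...which can then be examined, e.g. with xfs_repair -n or xfs_db.
xfs_repair -n fs.img
```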
From: bugzilla-daemon @ 2023-08-08 14:51 UTC
--- Comment #8 from Mariusz Gronczewski (xani666@gmail.com) ---
Created attachment 304795
--> https://bugzilla.kernel.org/attachment.cgi?id=304795&action=edit
Metadata dump
From: bugzilla-daemon @ 2023-08-08 14:54 UTC
--- Comment #9 from Mariusz Gronczewski (xani666@gmail.com) ---
Created attachment 304796
--> https://bugzilla.kernel.org/attachment.cgi?id=304796&action=edit
Metadata dump after xfs_repair
From: bugzilla-daemon @ 2023-08-08 14:55 UTC
--- Comment #10 from Mariusz Gronczewski (xani666@gmail.com) ---
I've attached metadata dumps from before and after xfs_repair.
From: bugzilla-daemon @ 2023-08-08 21:54 UTC
--- Comment #11 from Eric Sandeen (sandeen@sandeen.net) ---
Another question: did the systems repeatedly fail to boot, or fail once and
then succeed?
If anything at all had happened on the filesystem prior to encountering the
problem, I think that the next boot should have seen a dirty log, and cleaned
up the problem as a result.
But if the first action on the fs was to create a tmpfile or unlink a file, we
might shut down before the log ever gets dirty and requires replay.
From: bugzilla-daemon @ 2023-08-08 22:07 UTC
--- Comment #12 from Mariusz Gronczewski (xani666@gmail.com) ---
The filesystem was never marked as dirty, so fsck (which fixed the problem)
wasn't run on boot. So yes, once I upgraded the kernel, if it failed the first
time it failed on every reboot after that until I manually ran fsck or
downgraded the kernel.
On the one machine where it failed after boot (reading certain files triggered
it), that too was repeatable.
Maybe on "corruption of in-memory data" the driver should also mark the FS as
dirty? After all, if the data is corrupted by an actual memory error (and not,
as here, by loading bad data), there is a nonzero chance some of that data
ended up being written to disk.
From: bugzilla-daemon @ 2023-08-09 16:43 UTC
--- Comment #13 from Richard W.M. Jones (rjones@redhat.com) ---
Was VMware (the hypervisor) ever involved here? E.g. were these VMware guests,
was VMware Tools installed, were they converted from VMware to KVM, or anything
similar?
From: bugzilla-daemon @ 2023-08-09 18:31 UTC
--- Comment #14 from Mariusz Gronczewski (xani666@gmail.com) ---
Nope and I've seen same problem on bare metal
From: bugzilla-daemon @ 2023-08-09 19:10 UTC
--- Comment #15 from Richard W.M. Jones (rjones@redhat.com) ---
No problem. We had a similar bug reported internally that happens on VMware
guests, and I'm just trying to rule out VMware as a factor.
From: bugzilla-daemon @ 2023-08-29 23:42 UTC
--- Comment #16 from Darrick J. Wong (djwong@kernel.org) ---
On Wed, Aug 09, 2023 at 07:10:03PM +0000, bugzilla-daemon@kernel.org wrote:
> No problems. We had a similar bug reported internally that
> happens on VMware guests, and I'm just trying to rule out VMware
> as a factor.
Does this:
https://lore.kernel.org/linux-xfs/20230829232043.GE28186@frogsfrogsfrogs/T/#u
help in any way?
--D
From: bugzilla-daemon @ 2023-11-14 15:57 UTC
Grant Millar (grant@cylo.net) changed:
  Added CC: grant@cylo.net
--- Comment #17 from Grant Millar (grant@cylo.net) ---
We're experiencing the same bug following a data migration to new servers.
The servers are all running a fresh install of Debian 12 with brand new
hardware.
So far in the past 3 days we've had 2 mounts fail with:
[28797.357684] XFS (sdn): Internal error xfs_trans_cancel at line 1097 of file
fs/xfs/xfs_trans.c. Caller xfs_rename+0x61a/0xea0 [xfs]
[28797.488475] XFS (sdn): Corruption of in-memory data (0x8) detected at
xfs_trans_cancel+0x146/0x150 [xfs] (fs/xfs/xfs_trans.c:1098). Shutting down
filesystem.
[28797.488595] XFS (sdn): Please unmount the filesystem and rectify the
problem(s)
Both occurred in the same function on separate servers: xfs_rename+0x61a/0xea0.
Neither mount is the root filesystem.
versionnum [0xbcf5+0x18a] =
V5,NLINK,DIRV2,ATTR,QUOTA,ALIGN,LOGV2,EXTFLG,SECTOR,MOREBITS,ATTR2,LAZYSBCOUNT,PROJID32BIT,CRC,FTYPE,FINOBT,SPARSE_INODES,REFLINK,INOBTCNT,BIGTIME
meta-data=/dev/sdk isize=512 agcount=17, agsize=268435455 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=4394582016, imaxpct=50
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Please let me know if I can provide more information.
From: Darrick J. Wong @ 2023-11-14 23:46 UTC
On Tue, Nov 14, 2023 at 03:57:06PM +0000, bugzilla-daemon@kernel.org wrote:
> --- Comment #17 from Grant Millar (grant@cylo.net) ---
> We're experiencing the same bug following a data migration to new servers.
>
> The servers are all running a fresh install of Debian 12 with brand new
> hardware.
>
> So far in the past 3 days we've had 2 mounts fail with:
>
> [28797.357684] XFS (sdn): Internal error xfs_trans_cancel at line 1097 of file
> fs/xfs/xfs_trans.c. Caller xfs_rename+0x61a/0xea0 [xfs]
> [28797.488475] XFS (sdn): Corruption of in-memory data (0x8) detected at
> xfs_trans_cancel+0x146/0x150 [xfs] (fs/xfs/xfs_trans.c:1098). Shutting down
> filesystem.
> [28797.488595] XFS (sdn): Please unmount the filesystem and rectify the
> problem(s)
>
> Both occurred in the same function on separate servers: xfs_rename+0x61a/0xea0
>
> Neither mounts are the root filesystem.
This should be fixed in 6.6, could you try that and report back?
(See "xfs: reload entire unlinked bucket lists")
--D