From: Dave Chinner <david@fromorbit.com>
To: Wengang Wang <wen.gang.wang@oracle.com>
Cc: "linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
"chandanrlinux@gmail.com" <chandanrlinux@gmail.com>
Subject: Re: [PATCH 1/3] xfs: pass alloc flags through to xfs_extent_busy_flush()
Date: Fri, 16 Jun 2023 10:17:46 +1000 [thread overview]
Message-ID: <ZIuqKv58eTQL/Iij@dread.disaster.area> (raw)
In-Reply-To: <396ACF78-518E-432A-9016-B2EAFD800B7C@oracle.com>

On Thu, Jun 15, 2023 at 11:51:09PM +0000, Wengang Wang wrote:
>
>
> > On Jun 15, 2023, at 4:33 PM, Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Thu, Jun 15, 2023 at 11:09:41PM +0000, Wengang Wang wrote:
> >> When mounting the problematic metadump with the patches, I see the following reported.
> >>
> >> For more information about troubleshooting your instance using a console connection, see the documentation: https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/serialconsole.htm#four
> >> =================================================
> >> [ 67.212496] loop: module loaded
> >> [ 67.214732] loop0: detected capacity change from 0 to 629137408
> >> [ 67.247542] XFS (loop0): Deprecated V4 format (crc=0) will not be supported after September 2030.
> >> [ 67.249257] XFS (loop0): Mounting V4 Filesystem af755a98-5f62-421d-aa81-2db7bffd2c40
> >> [ 72.241546] XFS (loop0): Starting recovery (logdev: internal)
> >> [ 92.218256] XFS (loop0): Internal error ltbno + ltlen > bno at line 1957 of file fs/xfs/libxfs/xfs_alloc.c. Caller xfs_free_ag_extent+0x3f6/0x870 [xfs]
> >> [ 92.249802] CPU: 1 PID: 4201 Comm: mount Not tainted 6.4.0-rc6 #8
> >
> > What is the test you are running? Please describe how you reproduced
> > this failure - a reproducer script would be the best thing here.
>
> I was mounting a (copy of) V4 metadump from a customer.

Is the metadump obfuscated? Can I get a copy of it via a private,
secure channel?

> > Does the test fail on a v5 filesystem?
>
> N/A.
>
> >
> >> I think that’s because the same EFI record was going to be freed again
> >> by xfs_extent_free_finish_item() after it already got freed by xfs_efi_item_recover().

How is this happening? Where (and why) are we deferring an extent we
have successfully freed into a new xefi that we create a new intent
for and then defer?

Can you post the debug output and analysis that led you to this
observation? I certainly can't see how this can happen from looking
at the code.

> >> I was trying to fix above issue in my previous patch by checking the intent
> >> log item’s lsn and avoid running iop_recover() in xlog_recover_process_intents().
> >>
> >> Now I am thinking if we can pass a flag, say XFS_EFI_PROCESSED, from
> >> xfs_efi_item_recover() after it has processed that record to the in-memory
> >> xfs_efi_log_item structure somehow. In xfs_extent_free_finish_item(), we skip
> >> processing that xfs_efi_log_item on seeing XFS_EFI_PROCESSED and return OK.
> >> By that we can avoid the double free.
> >
> > I'm not really interested in speculation of the cause or the fix at
> > this point. I want to know how the problem is triggered so I can
> > work out exactly what caused it, along with why we don't have
> > coverage of this specific failure case in fstests already.
> >
>
> I got to know the cause by adding additional debug logging along with
> my previous patch.

Can you please post that debug output and analysis, rather than just a
stack trace that is completely lacking in context? Nothing can be
inferred from a stack trace, and what you are saying is occurring
does not match what the code should actually be doing. So I need to
actually look at what is happening in detail to work out where this
mismatch is coming from....

-Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 29+ messages (latest ~2023-06-16 0:19 UTC)
2023-06-15 1:41 [PATCH 0/3] xfs: fix xfs_extent_busy_flush() deadlock in EFI processing Dave Chinner
2023-06-15 1:41 ` [PATCH 1/3] xfs: pass alloc flags through to xfs_extent_busy_flush() Dave Chinner
2023-06-15 3:32 ` Darrick J. Wong
2023-06-15 3:48 ` Dave Chinner
2023-06-15 21:57 ` Wengang Wang
2023-06-15 22:14 ` Dave Chinner
2023-06-15 22:31 ` Wengang Wang
2023-06-15 23:09 ` Wengang Wang
2023-06-15 23:33 ` Dave Chinner
2023-06-15 23:51 ` Wengang Wang
2023-06-16 0:17 ` Dave Chinner [this message]
2023-06-16 0:42 ` Wengang Wang
2023-06-16 4:27 ` Wengang Wang
2023-06-16 5:04 ` Wengang Wang
2023-06-16 7:36 ` Dave Chinner
2023-06-16 17:43 ` Wengang Wang
2023-06-16 22:29 ` Dave Chinner
2023-06-16 22:53 ` Wengang Wang
2023-06-16 23:14 ` Wengang Wang
2023-06-17 0:47 ` Dave Chinner
2023-06-20 16:56 ` Wengang Wang
2023-06-22 1:15 ` Wengang Wang
2023-06-15 1:42 ` [PATCH 2/3] xfs: allow extent free intents to be retried Dave Chinner
2023-06-15 3:38 ` Darrick J. Wong
2023-06-15 3:57 ` Dave Chinner
2023-06-15 14:41 ` Darrick J. Wong
2023-06-15 22:21 ` Dave Chinner
2023-06-15 1:42 ` [PATCH 3/3] xfs: don't block in busy flushing when freeing extents Dave Chinner
2023-06-15 3:40 ` Darrick J. Wong