From: Dave Chinner <david@fromorbit.com>
To: Pengfei Xu <pengfei.xu@intel.com>
Cc: Helge Deller <deller@gmx.de>,
linux-xfs <linux-xfs@vger.kernel.org>,
asml.silence@gmail.com, geert@linux-m68k.org,
linux-kernel@vger.kernel.org, heng.su@intel.com
Subject: Re: [Syzkaller & bisect] There is "xfs_dquot_alloc" related BUG in v6.2 in guest
Date: Tue, 28 Feb 2023 08:46:07 +1100 [thread overview]
Message-ID: <20230227214607.GB360264@dread.disaster.area> (raw)
In-Reply-To: <Y/xbyABFGZEeKduv@xpf.sh.intel.com>
On Mon, Feb 27, 2023 at 03:29:12PM +0800, Pengfei Xu wrote:
> Hi Dave and Helge Deller,
>
> Thanks to Helge Deller for adding the xfs mailing list!
>
> On 2023-02-27 at 09:34:03 +1100, Dave Chinner wrote:
> > On Sat, Feb 25, 2023 at 08:58:25PM +0100, Helge Deller wrote:
> > > Looping in xfs mailing list as this seems to be a XFS problem...
> > > On 2/24/23 05:39, Pengfei Xu wrote:
> > > > [ 71.225966] XFS (loop1): Quotacheck: Unsuccessful (Error -5): Disabling quotas.
> > > > [ 71.226310] xfs filesystem being mounted at /root/syzkaller.qCVHXV/0/file0 supports timestamps until 2038 (0x7fffffff)
> > > > [ 71.227591] BUG: kernel NULL pointer dereference, address: 00000000000002a8
> > > > [ 71.227873] #PF: supervisor read access in kernel mode
> > > > [ 71.228077] #PF: error_code(0x0000) - not-present page
> > > > [ 71.228280] PGD c313067 P4D c313067 PUD c1fe067 PMD 0
> > > > [ 71.228494] Oops: 0000 [#1] PREEMPT SMP NOPTI
> > > > [ 71.228673] CPU: 0 PID: 161 Comm: kworker/0:4 Not tainted 6.2.0-c9c3395d5e3d #1
> > > > [ 71.228961] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> > > > [ 71.229400] Workqueue: xfs-inodegc/loop1 xfs_inodegc_worker
> > > > [ 71.229626] RIP: 0010:xfs_dquot_alloc+0x95/0x1e0
> > > > [ 71.229820] Code: 80 15 ad 85 48 c7 c6 7c 6b 92 83 e8 75 0f 6b ff 49 8b 8d 60 01 00 00 44 89 e0 31 d2 48 c7 c6 18 ae 8f 83 48 8d bb 18 02 00 00 <f7> b1 a8 02 2
> > > > [ 71.230528] RSP: 0018:ffffc90000babc20 EFLAGS: 00010246
> > > > [ 71.230737] RAX: 0000000000000009 RBX: ffff8880093c98c0 RCX: 0000000000000000
> > > > [ 71.231014] RDX: 0000000000000000 RSI: ffffffff838fae18 RDI: ffff8880093c9ad8
> > > > [ 71.231292] RBP: ffffc90000babc48 R08: 0000000000000002 R09: 0000000000000000
> > > > [ 71.231570] R10: ffffc90000baba80 R11: ffff88800af08d98 R12: 0000000000000009
> > > > [ 71.231850] R13: ffff88800c4bc000 R14: ffff88800c4bc000 R15: 0000000000000004
> > > > [ 71.232129] FS: 0000000000000000(0000) GS:ffff88807dc00000(0000) knlGS:0000000000000000
> > > > [ 71.232441] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > [ 71.232668] CR2: 00000000000002a8 CR3: 000000000a1d2002 CR4: 0000000000770ef0
> > > > [ 71.232949] PKRU: 55555554
> > > > [ 71.233061] Call Trace:
> > > > [ 71.233162] <TASK>
> > > > [ 71.233254] xfs_qm_dqread+0x46/0x440
> > > > [ 71.233410] ? xfs_qm_dqget_inode+0x13e/0x500
> > > > [ 71.233596] xfs_qm_dqget_inode+0x154/0x500
> > > > [ 71.233774] xfs_qm_dqattach_one+0x142/0x3c0
> > > > [ 71.233961] xfs_qm_dqattach_locked+0x14a/0x170
> > > > [ 71.234149] xfs_qm_dqattach+0x52/0x80
> > > > [ 71.234307] xfs_inactive+0x186/0x340
> > > > [ 71.234461] xfs_inodegc_worker+0xd3/0x430
> > > > [ 71.234630] process_one_work+0x3b1/0x960
> > > > [ 71.234802] worker_thread+0x52/0x660
> > > > [ 71.234957] ? __pfx_worker_thread+0x10/0x10
> > > > [ 71.235136] kthread+0x161/0x1a0
> > > > [ 71.235279] ? __pfx_kthread+0x10/0x10
> > > > [ 71.235442] ret_from_fork+0x29/0x50
> > > > [ 71.235602] </TASK>
> > > > [ 71.235696] Modules linked in:
> > > > [ 71.235826] CR2: 00000000000002a8
> > > > [ 71.235964] ---[ end trace 0000000000000000 ]---
> >
> > Looks like a quota disable race with background inode inactivation
> > reading in dquots.
> >
> > Can you test the patch below?
> >
> Thanks for the quick fix patch!
> I applied the patch below on top of the v6.2 kernel.
> There was no BUG in dmesg any more, so the fix works.
>
> There are still some "Internal error xfs_iunlink_remove_inode" related call traces left.
> I'm new to xfs, so could you help check whether this is a separate issue or
> something we can ignore?
I'm guessing this is the filesystem detecting a corruption and shutting
down. That's normal behaviour when tools like syzkaller throw
random crap at the filesystem and expect it to like it.
> I put the dmesg in bugzilla attachment as follow:
> https://bugzilla.kernel.org/show_bug.cgi?id=217078 ->
> https://bugzilla.kernel.org/attachment.cgi?id=303793
I am not authorised to access bug 217078, so I can't read any of
this. Just cut out and attach the relevant dmesg output to the email.
-Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 5+ messages
[not found] <Y/g/femUL7jZ9gF3@xpf.sh.intel.com>
2023-02-25 19:58 ` [Syzkaller & bisect] There is "xfs_dquot_alloc" related BUG in v6.2 in guest Helge Deller
2023-02-26 22:34 ` Dave Chinner
2023-02-27 7:29 ` Pengfei Xu
2023-02-27 21:46 ` Dave Chinner [this message]
2023-02-28 1:12 ` Pengfei Xu