From: Brian Foster <bfoster@redhat.com>
To: Kent Overstreet <kent.overstreet@linux.dev>
Cc: linux-bcachefs@vger.kernel.org
Subject: Re: [BUG] bcachefs fio lockup via generic/703
Date: Thu, 29 Feb 2024 10:55:46 -0500
Message-ID: <ZeCpApARcJvDyAbf@bfoster>
In-Reply-To: <h6cve56ifz6fszequog73uzgyjnalvtbdwz23kxupt4eun2ymh@anxj25u7gaf2>
On Wed, Feb 28, 2024 at 09:59:12PM -0500, Kent Overstreet wrote:
> On Wed, Feb 28, 2024 at 07:02:39PM -0500, Kent Overstreet wrote:
> > On Wed, Feb 28, 2024 at 03:13:04PM -0500, Brian Foster wrote:
> > > On Wed, Feb 28, 2024 at 03:03:06PM -0500, Kent Overstreet wrote:
> > > > On Wed, Feb 28, 2024 at 02:47:26PM -0500, Brian Foster wrote:
> > > > > Hi Kent,
> > > > >
> > > > > Firstly, I confirmed that today's master seems to avoid the splat I sent
> > > > > previously (re: your comment about a reverse journal replay patch or
> > > > > some such).
> > > > >
> > > > > I still reproduce the stall issue on this system. After peeling away at
> > > > > it, I was eventually able to reproduce without the drop writes
> > > > > (dm-flakey) behavior from the test, and with fio using either the libaio
> > > > > or sync I/O engine options. The sync I/O mode fortunately provides a
> > > > > more useful stack trace:
> > > > >
> > > > > # cat /proc/177747/stack
> > > > > [<0>] bch2_dio_write_flush+0x122/0x160 [bcachefs]
> > > > > [<0>] bch2_direct_write+0xb53/0xce0 [bcachefs]
> > > > > [<0>] bch2_write_iter+0x142/0xc70 [bcachefs]
> > > > > [<0>] vfs_write+0x29b/0x470
> > > > > [<0>] ksys_write+0x6f/0xf0
> > > > > [<0>] do_syscall_64+0x86/0x170
> > > > > [<0>] entry_SYSCALL_64_after_hwframe+0x6e/0x76
> > > > >
> > > > > ... which resolves down to the closure_sync() call in
> > > > > bch2_dio_write_flush(). The problem seems to go away if I remove the
> > > > > preceding journal flush from that function. This seems to rule out
> > > > > io_uring/aio and instead suggest that we're getting stuck somehow
> > > > > waiting on a journal flush.
> > > > >
> > > > > Based on that I went back to the first commit before 746a33c96b7a0
> > > > > ("bcachefs: better journal pipelining"). With that, I can run hundreds
> > > > > of iterations of generic/703 without a problem, so this appears to be a
> > > > > regression associated with the journal pipeline improvements. I'm
> > > > > currently re-running on the last known good commit with my test tweaks
> > > > > backed out (i.e. so back to io_uring and drop writes) just to
> > > > > corroborate that it's the same problem, but so far it's running as
> > > > > expected...
> > > >
> > > > So I suppose the journal must be getting stuck, and a journal write
> > > > isn't completing - what does sysfs internal/journal_debug say when it happens?
> > > >
> > >
...
> >
> > I might see it - journal_last_unwritten_seq() is now wrong, it's
> > assuming that j->seq_ondisk + 1 hasn't been submitted yet.
> >
> > bch2_journal_flush_seq_async() then sets that journal buf to "must be a
> > flush write", but if it's already been submitted - whoops
>
> On further reading, journal_flush_seq_async() looks correct, but
> bch2_journal_noflush_seq() was wrong - the following patch is in
> -testing:
>
...
So I can still reproduce the issue on master, which includes commit
3c8f22258ab ("bcachefs: Fix bch2_journal_noflush_seq()"). Just for
reference, the related debug info is as follows:
# cat /proc/13918/stack
[<0>] bch2_dio_write_flush+0x15b/0x190 [bcachefs]
[<0>] bch2_direct_write+0xb75/0xd30 [bcachefs]
[<0>] bch2_write_iter+0x4c/0xf10 [bcachefs]
[<0>] vfs_write+0x29b/0x470
[<0>] ksys_write+0x6f/0xf0
[<0>] do_syscall_64+0x86/0x170
[<0>] entry_SYSCALL_64_after_hwframe+0x6e/0x76
# cat journal_debug
dirty journal entries: 0/32768
seq: 72
seq_ondisk: 72
last_seq: 73
last_seq_ondisk: 72
flushed_seq_ondisk: 72
watermark: stripe
each entry reserved: 361
nr flush writes: 65
nr noflush writes: 0
average write size: 2.14 KiB
nr direct reclaim: 0
nr background reclaim: 66
reclaim kicked: 0
reclaim runs in: 0 ms
blocked: 0
current entry sectors: 512
current entry error: ok
current entry: closed
unwritten entries:
last buf closed
replay done: 1
space:
discarded 512:244736
clean ondisk 512:244736
clean 512:244736
total 512:245760
dev 0:
nr 480
bucket size 512
available 478:188
discard_idx 1
dirty_ondisk 1 (seq 72)
dirty_idx 1 (seq 72)
cur_idx 1 (seq 72)
# cat closures
0000000044781eb9: __closure_sync+0x49/0x180 -> closure_sync_fn+0x0/0x30 p 0000000000000000 r 1
W bch2_journal_flush_seq_async.part.0+0xed/0x590 [bcachefs]
000000001b9a5601: bch2_fs_open+0x538/0x15e0 [bcachefs] -> 0x0 p 0000000000000000 r 1 R
Brian
Thread overview: 8+ messages
2024-02-28 19:47 [BUG] bcachefs fio lockup via generic/703 Brian Foster
2024-02-28 20:03 ` Kent Overstreet
2024-02-28 20:13 ` Brian Foster
2024-02-28 23:43 ` Kent Overstreet
2024-02-29 0:02 ` Kent Overstreet
2024-02-29 2:59 ` Kent Overstreet
2024-02-29 15:55 ` Brian Foster [this message]
2024-02-29 16:24 ` Kent Overstreet