From: "Theodore Ts'o" <tytso@mit.edu>
To: "Lu, Davina" <davinalu@amazon.com>
Cc: "Bhatnagar, Rishabh" <risbhat@amazon.com>,
Jan Kara <jack@suse.cz>, "jack@suse.com" <jack@suse.com>,
"linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
"Park, SeongJae" <sjpark@amazon.com>
Subject: Re: Tasks stuck jbd2 for a long time
Date: Thu, 17 Aug 2023 22:41:44 -0400
Message-ID: <20230818024144.GD3464136@mit.edu>
In-Reply-To: <d82df68eb8514951a7f7acc923132796@amazon.com>

On Fri, Aug 18, 2023 at 01:31:35AM +0000, Lu, Davina wrote:
>
> This looks like a similar issue to one I saw before with an fio test
> (buffered I/O with 100 threads); it also showed the
> "ext4-rsv-conversion" workqueue taking a lot of CPU and making
> journal updates get stuck.

Given the stack traces, it is very much a different problem.

> There is a patch; could you see if this is the same issue? This is
> not the final patch, since Ted raised some issues with it; I will
> forward that email to you in a separate thread. I didn't continue
> with this patch at the time because we thought it might not be the
> real cause in RDS.

The patch which you've included is dangerous and can cause file system
corruption. See my reply at [1], and your corrected patch which
addressed my concern at [2]. If folks want to try a patch, please use
the one at [2], and not the one you quoted in this thread, since it's
missing critically needed locking.

[1] https://lore.kernel.org/r/YzTMZ26AfioIbl27@mit.edu
[2] https://lore.kernel.org/r/53153bdf0cce4675b09bc2ee6483409f@amazon.com

The reason why we never pursued it is because (a) at one of our weekly
ext4 video chats, I was informed by Oleg Kiselev that the performance
issue had been addressed in a different way, and (b) I'd want to
reproduce the issue on a machine under my control so I could understand
what was going on and so we could examine the dynamics of what was
happening with and without the patch. So I would have needed to know
how many CPUs and what kind of storage device (HDD? SSD? md-raid?
etc.) were in use, in addition to the fio recipe.

Finally, I'm a bit nervous about setting the internal __WQ_ORDERED
flag with max_active > 1. What was that all about, anyway?
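
For reference, this is roughly how the allocation looks; the upstream
line is quoted from fs/ext4/super.c as best I recall it, and the
"patched" version below is only my sketch of the kind of change being
discussed -- the exact flags and max_active value in your patch may
differ:

	/*
	 * Upstream: a single ordered worker that converts unwritten
	 * (reserved) extents after writeback completes.
	 */
	EXT4_SB(sb)->rsv_conversion_wq =
		alloc_workqueue("ext4-rsv-conversion",
				WQ_MEM_RECLAIM | WQ_UNBOUND, 1);

	/*
	 * Sketch of the proposed change: more concurrency, but with
	 * the internal __WQ_ORDERED flag also set.  __WQ_ORDERED is
	 * supposed to imply max_active == 1, which is why the
	 * combination looks suspicious to me.
	 */
	EXT4_SB(sb)->rsv_conversion_wq =
		alloc_workqueue("ext4-rsv-conversion",
				WQ_MEM_RECLAIM | WQ_UNBOUND | __WQ_ORDERED,
				num_active_cpus() + 1);

If the goal really is an ordered queue, alloc_ordered_workqueue() is
the intended interface; if the goal is more parallelism, __WQ_ORDERED
shouldn't be set at all.
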
- Ted

Thread overview: 15+ messages
2023-08-15 19:01 Tasks stuck jbd2 for a long time Bhatnagar, Rishabh
2023-08-16 2:28 ` Theodore Ts'o
2023-08-16 3:57 ` Bhatnagar, Rishabh
2023-08-16 14:53 ` Jan Kara
2023-08-16 18:32 ` Bhatnagar, Rishabh
2023-08-16 21:52 ` Jan Kara
2023-08-16 22:53 ` Bhatnagar, Rishabh
2023-08-17 10:49 ` Jan Kara
2023-08-17 18:59 ` Bhatnagar, Rishabh
2023-08-18 1:19 ` Theodore Ts'o
2023-08-18 1:31 ` Lu, Davina
2023-08-18 2:41 ` Theodore Ts'o [this message]
2023-08-21 1:10 ` Lu, Davina
2023-08-21 18:38 ` Theodore Ts'o
2023-08-24 3:52 ` Lu, Davina