* [PATCH] buffer: Associate the meta bio with blkg from buffer page
@ 2024-08-28  3:32 Haifeng Xu
From: Haifeng Xu @ 2024-08-28  3:32 UTC
  To: viro, brauner, jack
  Cc: tytso, yi.zhang, yukuai1, tj, linux-ext4, linux-block,
	linux-fsdevel, linux-kernel, Haifeng Xu

In our production environment, we found that many tasks had been hung
for a long time. Their call traces look like the following:

thread 1:

PID: 189529  TASK: ffff92ab51e5c080  CPU: 34  COMMAND: "mc"
[ffffa638db807800] __schedule at ffffffff83b19898
[ffffa638db807888] schedule at ffffffff83b19e9e
[ffffa638db8078a8] io_schedule at ffffffff83b1a316
[ffffa638db8078c0] bit_wait_io at ffffffff83b1a751
[ffffa638db8078d8] __wait_on_bit at ffffffff83b1a373
[ffffa638db807918] out_of_line_wait_on_bit at ffffffff83b1a46d
[ffffa638db807970] __wait_on_buffer at ffffffff831b9c64
[ffffa638db807988] jbd2_log_do_checkpoint at ffffffff832b556e
[ffffa638db8079e8] __jbd2_log_wait_for_space at ffffffff832b55dc
[ffffa638db807a30] add_transaction_credits at ffffffff832af369
[ffffa638db807a98] start_this_handle at ffffffff832af50f
[ffffa638db807b20] jbd2__journal_start at ffffffff832afe1f
[ffffa638db807b60] __ext4_journal_start_sb at ffffffff83241af3
[ffffa638db807ba8] __ext4_new_inode at ffffffff83253be6
[ffffa638db807c80] ext4_mkdir at ffffffff8327ec9e
[ffffa638db807d10] vfs_mkdir at ffffffff83182a92
[ffffa638db807d50] ovl_mkdir_real at ffffffffc0965c9f [overlay]
[ffffa638db807d80] ovl_create_real at ffffffffc0965e8b [overlay]
[ffffa638db807db8] ovl_create_or_link at ffffffffc09677cc [overlay]
[ffffa638db807e10] ovl_create_object at ffffffffc0967a48 [overlay]
[ffffa638db807e60] ovl_mkdir at ffffffffc0967ad3 [overlay]
[ffffa638db807e70] vfs_mkdir at ffffffff83182a92
[ffffa638db807eb0] do_mkdirat at ffffffff83184305
[ffffa638db807f08] __x64_sys_mkdirat at ffffffff831843df
[ffffa638db807f28] do_syscall_64 at ffffffff83b0bf1c
[ffffa638db807f50] entry_SYSCALL_64_after_hwframe at ffffffff83c0007c

other threads:

PID: 21125  TASK: ffff929f5b9a0000  CPU: 44  COMMAND: "task_server"
[ffffa638aff9b900] __schedule at ffffffff83b19898
[ffffa638aff9b988] schedule at ffffffff83b19e9e
[ffffa638aff9b9a8] schedule_preempt_disabled at ffffffff83b1a24e
[ffffa638aff9b9b8] __mutex_lock at ffffffff83b1af28
[ffffa638aff9ba38] __mutex_lock_slowpath at ffffffff83b1b1a3
[ffffa638aff9ba48] mutex_lock at ffffffff83b1b1e2
[ffffa638aff9ba60] mutex_lock_io at ffffffff83b1b210
[ffffa638aff9ba80] __jbd2_log_wait_for_space at ffffffff832b563b
[ffffa638aff9bac8] add_transaction_credits at ffffffff832af369
[ffffa638aff9bb30] start_this_handle at ffffffff832af50f
[ffffa638aff9bbb8] jbd2__journal_start at ffffffff832afe1f
[ffffa638aff9bbf8] __ext4_journal_start_sb at ffffffff83241af3
[ffffa638aff9bc40] ext4_dirty_inode at ffffffff83266d0a
[ffffa638aff9bc60] __mark_inode_dirty at ffffffff831ab423
[ffffa638aff9bca0] generic_update_time at ffffffff8319169d
[ffffa638aff9bcb0] inode_update_time at ffffffff831916e5
[ffffa638aff9bcc0] file_update_time at ffffffff83191b01
[ffffa638aff9bd08] file_modified at ffffffff83191d47
[ffffa638aff9bd20] ext4_write_checks at ffffffff8324e6e4
[ffffa638aff9bd40] ext4_buffered_write_iter at ffffffff8324edfb
[ffffa638aff9bd78] ext4_file_write_iter at ffffffff8324f553
[ffffa638aff9bdf8] ext4_file_write_iter at ffffffff8324f505
[ffffa638aff9be00] new_sync_write at ffffffff8316dfca
[ffffa638aff9be90] vfs_write at ffffffff8316e975
[ffffa638aff9bec8] ksys_write at ffffffff83170a97
[ffffa638aff9bf08] __x64_sys_write at ffffffff83170b2a
[ffffa638aff9bf18] do_syscall_64 at ffffffff83b0bf1c
[ffffa638aff9bf38] asm_common_interrupt at ffffffff83c00cc8
[ffffa638aff9bf50] entry_SYSCALL_64_after_hwframe at ffffffff83c0007c

The filesystem is ext4 (ordered mode). The metadata can be written out
by writeback, but if there are too many dirty pages, a checkpoint has to
be done in the current thread context to write the metadata out.

In this case, the blkg of thread 1 has io.max set, so its checkpoint I/O
is throttled, the j_checkpoint_mutex can't be released, and many threads
have to wait for it. However, the blkg derived from the buffer page does
not have any io policy set. Therefore, for a meta buffer head, we can
associate the bio with the blkg from the buffer page instead of the one
from the current thread context.
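
The io.max setting involved looks roughly like the following (the device
number, limits and cgroup path here are made-up values for illustration,
not the ones from our environment):

  # throttle writes issued by tasks in this cgroup on device 8:0
  echo "8:0 wbps=1048576 wiops=100" > /sys/fs/cgroup/app/io.max
  # move the writer (e.g. the current shell) into that cgroup
  echo $$ > /sys/fs/cgroup/app/cgroup.procs

Any bio charged to that cgroup, including the checkpoint writes issued
while holding j_checkpoint_mutex, is throttled by these limits, while in
our case bios charged to the buffer page's cgroup are not, since that
cgroup has no io policy configured.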

Signed-off-by: Haifeng Xu <haifeng.xu@shopee.com>
---
 fs/buffer.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index e55ad471c530..a7889f258d0d 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2819,6 +2819,17 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
 	if (wbc) {
 		wbc_init_bio(wbc, bio);
 		wbc_account_cgroup_owner(wbc, bh->b_page, bh->b_size);
+	} else if (buffer_meta(bh)) {
+		struct folio *folio;
+		struct cgroup_subsys_state *memcg_css, *blkcg_css;
+
+		folio = page_folio(bh->b_page);
+		memcg_css = mem_cgroup_css_from_folio(folio);
+		if (cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
+		    cgroup_subsys_on_dfl(io_cgrp_subsys)) {
+			blkcg_css = cgroup_e_css(memcg_css->cgroup, &io_cgrp_subsys);
+			bio_associate_blkg_from_css(bio, blkcg_css);
+		}
 	}
 
 	submit_bio(bio);
-- 
2.25.1


Thread overview: 8+ messages
2024-08-28  3:32 [PATCH] buffer: Associate the meta bio with blkg from buffer page Haifeng Xu
2024-08-28  5:19 ` Christoph Hellwig
2024-08-29 16:20 ` kernel test robot
2024-08-29 17:43 ` kernel test robot
2024-08-30 19:37 ` Tejun Heo
2024-08-31  6:11   ` Yu Kuai
2024-08-31  8:03     ` Tejun Heo
2024-08-31  9:48       ` Yu Kuai
