From: Ming Lei <ming.lei@redhat.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
linux-block@vger.kernel.org
Subject: Re: Blockdev 6.13-rc lockdep splat regressions
Date: Mon, 13 Jan 2025 08:55:22 +0800 [thread overview]
Message-ID: <Z4RkemI9f6N5zoEF@fedora> (raw)
In-Reply-To: <1310ca51dce185c977055fae131f6ff6fd2e2089.camel@linux.intel.com>
On Sun, Jan 12, 2025 at 06:44:53PM +0100, Thomas Hellström wrote:
> On Sun, 2025-01-12 at 23:50 +0800, Ming Lei wrote:
> > On Sun, Jan 12, 2025 at 12:33:13PM +0100, Thomas Hellström wrote:
> > > On Sat, 2025-01-11 at 11:05 +0800, Ming Lei wrote:
> >
> > ...
> >
> > >
> > > Ah, you're right, it's a different warning this time. I posted
> > > the warning below. (Note: this is also with Christoph's series
> > > applied on top.)
> > >
> > > May I also humbly suggest the following lockdep priming, to catch
> > > the reclaim lockdep splats early without reclaim needing to
> > > happen? That will also pick up splat #2 below.
> > >
> > > 8<-------------------------------------------------------------
> > >
> > > diff --git a/block/blk-core.c b/block/blk-core.c
> > > index 32fb28a6372c..2dd8dc9aed7f 100644
> > > --- a/block/blk-core.c
> > > +++ b/block/blk-core.c
> > > @@ -458,6 +458,11 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
> > >
> > >  	q->nr_requests = BLKDEV_DEFAULT_RQ;
> > >
> > > +	fs_reclaim_acquire(GFP_KERNEL);
> > > +	rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
> > > +	rwsem_release(&q->io_lockdep_map, _RET_IP_);
> > > +	fs_reclaim_release(GFP_KERNEL);
> > > +
> > >  	return q;
> >
> > Looks like one nice idea for injecting fs_reclaim; maybe it could
> > be added to the fault-injection framework?
>
> For the intel gpu drivers, we typically prime lockdep like this if we
> *know* that the lock will be grabbed during reclaim, for example if
> it's part of shrinker processing or similar.
>
> Since sooner or later we *know* this sequence will happen, we add it
> near the lock initialization so that it is always executed when the
> lock(map) is initialized.
>
> So I don't really see a need for them to be periodically injected?
What I suggested is to add this verification for every allocation that
can enter direct reclaim, gated behind one kernel config option which
depends on both lockdep and fault injection.
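
Something along these lines, purely as a sketch (the config option and
helper name below are made up, just to show the idea of gating the
priming behind a config that depends on both lockdep and fault
injection):

#ifdef CONFIG_FAIL_RECLAIM_LOCKDEP	/* hypothetical option */
static void blk_prime_reclaim_lockdep(struct request_queue *q)
{
	/* pretend we are in direct reclaim ... */
	fs_reclaim_acquire(GFP_KERNEL);
	/* ... and take the queue freeze lockdep map, as submit_bio() would */
	rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
	rwsem_release(&q->io_lockdep_map, _RET_IP_);
	fs_reclaim_release(GFP_KERNEL);
}
#else
static inline void blk_prime_reclaim_lockdep(struct request_queue *q)
{
}
#endif

The fault-inject framework could then decide when to run it on
allocation paths that may enter direct reclaim.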
>
> >
> > >
> > > fail_stats:
> > >
> > > 8<-------------------------------------------------------------
> > >
> > > #1:
> > > [ 106.921533] ======================================================
> > > [ 106.921716] WARNING: possible circular locking dependency detected
> > > [ 106.921725] 6.13.0-rc6+ #121 Tainted: G     U
> > > [ 106.921734] ------------------------------------------------------
> > > [ 106.921743] kswapd0/117 is trying to acquire lock:
> > > [ 106.921751] ffff8ff4e2da09f0 (&q->q_usage_counter(io)){++++}-{0:0}, at: __submit_bio+0x80/0x220
> > > [ 106.921769]
> > > but task is already holding lock:
> > > [ 106.921778] ffffffff8e65e1c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xe2/0xa10
> > > [ 106.921791]
> > > which lock already depends on the new lock.
> > >
> > > [ 106.921803]
> > > the existing dependency chain (in reverse order) is:
> > > [ 106.921814]
> > > -> #1 (fs_reclaim){+.+.}-{0:0}:
> > > [ 106.921824] fs_reclaim_acquire+0x9d/0xd0
> > > [ 106.921833] __kmalloc_cache_node_noprof+0x5d/0x3f0
> > > [ 106.921842] blk_mq_init_tags+0x3d/0xb0
> > > [ 106.921851] blk_mq_alloc_map_and_rqs+0x4e/0x3d0
> > > [ 106.921860] blk_mq_init_sched+0x100/0x260
> > > [ 106.921868] elevator_switch+0x8d/0x2e0
> > > [ 106.921877] elv_iosched_store+0x174/0x1e0
> > > [ 106.921885] queue_attr_store+0x142/0x180
> > > [ 106.921893] kernfs_fop_write_iter+0x168/0x240
> > > [ 106.921902] vfs_write+0x2b2/0x540
> > > [ 106.921910] ksys_write+0x72/0xf0
> > > [ 106.921916] do_syscall_64+0x95/0x180
> > > [ 106.921925] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> >
> > That is another regression from commit
> >
> > af2814149883 ("block: freeze the queue in queue_attr_store")
> >
> > and queue_wb_lat_store() has the same risk, too.
> >
> > I will cook a patch to fix it.
>
> Thanks. Are these splats going to be silenced for 6.13-rc? Like having
> the new lockdep checks under a special config until they are fixed?
It is too late for v6.13, and Christoph's fix won't be available for
v6.13 either.
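
For reference, the rough direction for such a fix (a sketch only, not
the actual patch, assuming queue_attr_store()'s current shape) is to
make sure allocations done while the queue is frozen cannot recurse
into I/O via direct reclaim, e.g. by entering a noio scope around the
frozen section:

	unsigned int memflags;
	ssize_t res;

	blk_mq_freeze_queue(q);
	/* GFP_KERNEL allocations under freeze must not re-enter I/O */
	memflags = memalloc_noio_save();
	res = entry->store(disk, page, length);
	memalloc_noio_restore(memflags);
	blk_mq_unfreeze_queue(q);
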
Thanks,
Ming