From: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
To: "Cheng-Jui Wang (王正睿)" <Cheng-Jui.Wang@mediatek.com>
Cc: "peterz@infradead.org" <peterz@infradead.org>,
"rostedt@goodmis.org" <rostedt@goodmis.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"surenb@google.com" <surenb@google.com>,
"torvalds@linux-foundation.org" <torvalds@linux-foundation.org>,
wsd_upstream <wsd_upstream@mediatek.com>,
"paulmck@kernel.org" <paulmck@kernel.org>,
"Bobule Chang (張弘義)" <bobule.chang@mediatek.com>,
"fweisbec@gmail.com" <fweisbec@gmail.com>
Subject: Re: [BUG v6.0-rc2] lockdep splat on ct_kernel_enter()
Date: Wed, 1 Mar 2023 13:52:29 +0100 [thread overview]
Message-ID: <Y/9KjY48FoHIHJ44@kroah.com> (raw)
In-Reply-To: <833cecca4460ae3c371455cef75b40a1f3922758.camel@mediatek.com>

On Wed, Mar 01, 2023 at 12:37:29PM +0000, Cheng-Jui Wang (王正睿) wrote:
> On Mon, 2022-08-22 at 16:44 -0400, Steven Rostedt wrote:
> > My tests are failing because of this splat:
> >
> > [ 16.073659] ------------[ cut here ]------------
> > [ 16.074407] bus: 'platform': add driver acpi-ged
> > [ 16.074424] DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
> > [ 16.074424] WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:5506 check_flags+0x114/0x1d0
>
> > [ 16.074424] lock_is_held_type+0x6f/0x130
> > [ 16.186284] rcu_read_lock_sched_held+0x4a/0x90
> > [ 16.186284] trace_rcu_dyntick+0x3a/0xe0
> > [ 16.186284] ct_kernel_enter.constprop.0+0x66/0xa0
> > [ 16.186284] ct_idle_exit+0xd/0x30
> > [ 16.186284] cpuidle_enter_state+0x28a/0x310
> > [ 16.186284] cpuidle_enter+0x2e/0x50
> > [ 16.186284] do_idle+0x1ec/0x280
>
> Our test on the v6.1 stable kernel is also failing because of this
> splat; v6.1 stable still carries it.
>
> This splat can be fixed by Peter's patch
> https://lore.kernel.org/all/20220608144516.808451191@infradead.org/
> but that fix is part of a large patchset
> https://lore.kernel.org/all/20220608142723.103523089@infradead.org/
> that was merged in v6.2.
>
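[As an aside: one way to confirm which release first contains a fix is `git merge-base --is-ancestor` / `git describe --contains`. The sketch below demonstrates this on a throwaway repository; the repository, commits, and tags are all stand-ins, and the real check would run in a linux.git clone with the actual commit id of Peter's patch.]

```shell
# Toy demonstration of checking which release first contains a fix.
# Everything here is a stand-in for the real linux.git history.
set -e
t=$(mktemp -d)
cd "$t"
git init -q -b master r
cd r
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "Linux 6.1"
git tag v6.1
git commit -q --allow-empty -m "context_tracking: fix lockdep splat"
fix=$(git rev-parse HEAD)                    # stand-in for the fix commit
git commit -q --allow-empty -m "Linux 6.2"
git tag v6.2
git merge-base --is-ancestor "$fix" v6.1 || echo "fix is NOT in v6.1"
git describe --contains "$fix"               # names the first tag containing it
```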
> Could the fixes be backported to v6.1 stable?

What "fixes" exactly are you referring to? Can you provide a series of
git commit ids that cleanly apply, or better yet, a series of patches
that you have backported and tested to ensure that they work properly?

thanks,

greg k-h
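[For what it's worth, the workflow Greg is asking for usually looks like the sketch below, demonstrated here on a throwaway repository. The branch name, file, and commit subject are stand-ins; a real run would cherry-pick the actual upstream commit ids onto linux-6.1.y, build and boot-test the result, and then post the patches to stable@vger.kernel.org.]

```shell
# Toy model of the stable-backport workflow: cherry-pick the upstream
# fix onto the stable branch, verify it applies cleanly, and produce
# patches ready to post. All names below are stand-ins.
set -e
t=$(mktemp -d)
cd "$t"
git init -q -b master repo
cd repo
git config user.email you@example.com
git config user.name "You"
printf 'base\n' > ct.c
git add ct.c
git commit -qm "base"
git branch linux-6.1.y                        # stand-in for the stable branch
printf 'fix\n' >> ct.c                        # stand-in for the upstream fix
git commit -qam "context_tracking: fix lockdep splat"
fix=$(git rev-parse HEAD)
git checkout -q linux-6.1.y
git cherry-pick -x "$fix"                     # -x records the upstream id
git format-patch -1 -o out
ls out                                        # patch to test, then post
```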
Thread overview: 13+ messages
2022-08-22 20:44 [BUG v6.0-rc2] lockdep splat on ct_kernel_enter() Steven Rostedt
2022-08-22 22:28 ` Steven Rostedt
2022-08-22 22:38 ` Steven Rostedt
2022-08-23 0:48 ` Steven Rostedt
2022-08-23 1:40 ` Steven Rostedt
2022-08-23 2:01 ` Steven Rostedt
2022-08-23 2:36 ` Paul E. McKenney
2023-03-01 12:37 ` Cheng-Jui Wang (王正睿)
2023-03-01 12:52 ` gregkh [this message]
2023-03-02 3:42 ` Cheng-Jui Wang (王正睿)
2023-03-02 6:42 ` gregkh
2023-03-01 14:14 ` Frederic Weisbecker
2023-03-01 20:15 ` Steven Rostedt