public inbox for linux-kernel@vger.kernel.org
From: Joel Fernandes <joel@joelfernandes.org>
To: Guenter Roeck <linux@roeck-us.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, patches@lists.linux.dev,
	linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	akpm@linux-foundation.org, shuah@kernel.org,
	patches@kernelci.org, lkft-triage@lists.linaro.org,
	pavel@denx.de, jonathanh@nvidia.com, f.fainelli@gmail.com,
	sudipm.mukherjee@gmail.com, srw@sladewatkins.net, rwarsow@gmx.de,
	conor@kernel.org, paulmck@kernel.org
Subject: Re: [PATCH 5.15 00/92] 5.15.126-rc1 review
Date: Wed, 9 Aug 2023 20:14:13 +0000	[thread overview]
Message-ID: <20230809201413.GA3374446@google.com> (raw)
In-Reply-To: <35e4b770-2ead-4a19-ad01-fa75996adef4@roeck-us.net>

On Wed, Aug 09, 2023 at 12:25:48PM -0700, Guenter Roeck wrote:
> On Wed, Aug 09, 2023 at 02:35:59PM -0400, Joel Fernandes wrote:
> > On Wed, Aug 9, 2023 at 12:18 PM Guenter Roeck <linux@roeck-us.net> wrote:
> > >
> > > On 8/9/23 06:53, Joel Fernandes wrote:
> > > > On Wed, Aug 09, 2023 at 12:40:36PM +0200, Greg Kroah-Hartman wrote:
> > > >> This is the start of the stable review cycle for the 5.15.126 release.
> > > >> There are 92 patches in this series, all will be posted as a response
> > > >> to this one.  If anyone has any issues with these being applied, please
> > > >> let me know.
> > > >>
> > > >> Responses should be made by Fri, 11 Aug 2023 10:36:10 +0000.
> > > >> Anything received after that time might be too late.
> > > >>
> > > >> The whole patch series can be found in one patch at:
> > > >>      https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.126-rc1.gz
> > > >> or in the git tree and branch at:
> > > >>      git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y
> > > >> and the diffstat can be found below.
> > > >
> > > > Not necessarily new with 5.15 stable, but 3 of the 19 rcutorture scenarios
> > > > hang with this -rc: TREE04, TREE07, TASKS03.
> > > >
> > > > 5.15 has a known stop-machine issue where it hangs after about 1.5 hours of
> > > > CPU-hotplug rcutorture testing. tglx and I are continuing to debug this. The
> > > > issue shows up only on 5.15 stable kernels, not on mainline or any other
> > > > release.
> > > >
> > >
> > > Do you by any chance have a crash pattern that we could use to search for the
> > > crash in ChromeOS crash logs? No idea if that would help, but it could provide
> > > some additional data points.
> > 
> > The pattern shows up as a hard hang: the system is unresponsive and all CPUs
> > are stuck in stop_machine. Sometimes it recovers on its own from the hang,
> > and then RCU immediately gives stall warnings. It takes 1.5 hours to
> > reproduce, and sometimes it does not happen for several hours.
> > 
> > It appears related to CPU hotplug since gdb showed me most of the CPUs
> > are spinning in multi_cpu_stop() / stop machine after the hang.
> > 
> 
> Hmm, we do see lots of soft lockups with multi_cpu_stop() in the backtrace,
> though with v5.4.y rather than v5.15.y. The actual hang is in stop_machine_yield().

Interesting. It looks similar as far as the gdb stack dumps go; here are
the stacks I dumped during the hang I referred to:
https://paste.debian.net/1288308/

But in dmesg, nothing is printed for about 20-30 minutes before it recovers;
then I get RCU stalls. It looks like this:

[  682.721962] kvm-clock: cpu 7, msr 199981c1, secondary cpu clock
[  682.736830] kvm-guest: stealtime: cpu 7, msr 1f5db140
[  684.445875] smpboot: Booting Node 0 Processor 5 APIC 0x5
[  684.467831] kvm-clock: cpu 5, msr 19998141, secondary cpu clock
[  684.555766] kvm-guest: stealtime: cpu 5, msr 1f55b140
[  687.356637] smpboot: Booting Node 0 Processor 4 APIC 0x4
[  687.377214] kvm-clock: cpu 4, msr 19998101, secondary cpu clock
[ 2885.473742] kvm-guest: stealtime: cpu 4, msr 1f51b140
[ 2886.456408] rcu: INFO: rcu_sched self-detected stall on CPU
[ 2886.457590] rcu_torture_fwd_prog_nr: Duration 15423 cver 170 gps 337
[ 2886.464934] rcu: 0-...!: (2 ticks this GP) idle=7eb/0/0x1 softirq=118271/118271 fqs=0 last_accelerate: e3cd/71c0 dyntick_enabled: 1
[ 2886.490837] (t=2199034 jiffies g=185489 q=4)
[ 2886.497297] rcu: rcu_sched kthread timer wakeup didn't happen for 2199031 jiffies! g185489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[ 2886.514201] rcu: Possible timer handling issue on cpu=0 timer-softirq=441616
[ 2886.524593] rcu: rcu_sched kthread starved for 2199034 jiffies! g185489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[ 2886.540067] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[ 2886.551967] rcu: RCU grace-period kthread stack dump:
[ 2886.558644] task:rcu_sched       state:I stack:14896 pid:   15 ppid:     2 flags:0x00004000
[ 2886.569640] Call Trace:
[ 2886.572940]  <TASK>
[ 2886.575902]  __schedule+0x284/0x6e0
[ 2886.580969]  schedule+0x53/0xa0
[ 2886.585231]  schedule_timeout+0x8f/0x130

During that huge gap, I connected gdb and dumped the stacks at the link above.

On 5.15 stable, you could reproduce it most of the time in about an hour and a half by running something like:
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 48 --duration 60 --configs TREE04

Let me know if you saw anything like this. I am currently trying to panic the
kernel when the hang happens so I can get better traces.
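One standard way to force that (a sketch of what I'm trying; the exact combination of knobs is still in flux) is to make the stall and lockup detectors panic instead of just warning:

```
# sysctl settings to turn the hang into a panic (set via sysctl -w,
# or drop into /etc/sysctl.d/; combine with a crash/kdump setup):
kernel.panic_on_rcu_stall = 1    # panic when RCU reports a stall
kernel.softlockup_panic = 1      # panic on soft-lockup detection
kernel.hardlockup_panic = 1      # panic on hard lockup (NMI watchdog)
```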

thanks,

 - Joel



Thread overview: 27+ messages
2023-08-09 10:40 [PATCH 5.15 00/92] 5.15.126-rc1 review Greg Kroah-Hartman
2023-08-09 13:53 ` Joel Fernandes
2023-08-09 16:18   ` Guenter Roeck
2023-08-09 18:35     ` Joel Fernandes
2023-08-09 18:39       ` Joel Fernandes
2023-08-09 19:25       ` Guenter Roeck
2023-08-09 20:14         ` Joel Fernandes [this message]
2023-08-09 20:38           ` Guenter Roeck
2023-08-09 20:39             ` Joel Fernandes
2023-08-09 21:45               ` Guenter Roeck
2023-08-10 17:55                 ` Paul E. McKenney
2023-08-10 21:54                   ` Joel Fernandes
2023-08-10 22:14                     ` Joel Fernandes
2023-08-10 22:34                       ` Paul E. McKenney
2023-08-10 22:55                     ` Guenter Roeck
2023-08-10 23:13                       ` Joel Fernandes
2023-08-09 17:05 ` SeongJae Park
2023-08-09 19:38 ` Florian Fainelli
2023-08-10 10:16 ` Harshit Mogalapalli
2023-08-10 10:25   ` Guenter Roeck
2023-08-10 10:24 ` Guenter Roeck
2023-08-10 16:25   ` Florian Fainelli
2023-08-11 10:02     ` Greg Kroah-Hartman
2023-08-10 16:06 ` Guenter Roeck
2023-08-11 10:05   ` Greg Kroah-Hartman
2023-08-10 21:11 ` Ron Economos
2023-08-10 21:54 ` Daniel Díaz
