From: Marc Zyngier <maz@kernel.org>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: "moderated list:ARM/STM32 ARCHITECTURE"
<linux-arm-kernel@lists.infradead.org>,
Will Deacon <will@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
rcu <rcu@vger.kernel.org>,
"Paul E. McKenney" <paulmck@kernel.org>
Subject: Re: arm64 torture test hotplug failures (offlining causes -EBUSY)
Date: Mon, 16 Jan 2023 18:03:14 +0000
Message-ID: <864jsqnyql.wl-maz@kernel.org>
In-Reply-To: <CAEXW_YQB1Wm3YhgUP8S9a4drgvDasu3s+zyt3j9Wwpt83W=W=Q@mail.gmail.com>
Hi Joel,
On Mon, 16 Jan 2023 17:03:31 +0000,
Joel Fernandes <joel@joelfernandes.org> wrote:
>
> Hello,
> I am seeing -EBUSY returned a lot during torture_onoff() when running
> rcutorture on arm64. This causes hotplug failure 30% of the time. I am
> also seeing this in 6.1-rc kernels. I believe I see this only for CPU0.
>
> This causes warnings in torture tests:
> [ 217.582290] rcu-torture:torture_onoff task: offline 0 failed: errno -16
> [ 221.866362] rcu-torture:torture_onoff task: offline 0 failed: errno -16
>
> Full kernel log here:
> http://box.joelfernandes.org:9080/job/rcutorture_stable_arm/job/linux-5.15.y/7/artifact/tools/testing/selftests/rcutorture/res/2023.01.15-14.51.11/TREE04/console.log
>
> Any ideas on why this is happening, and only for CPU 0 (presumably the
> boot CPU)? I'd need these warnings to go away, as they keep
> rcutorture's tests from passing cleanly for me. It appears
> remove_cpu() -> device_offline() is what returns the error.
I've taken your kernel for a ride as a KVM guest (probably similar to
what you are doing), and saw the same thing (CPU0 not offlining):
[ 64.555845] Detected VIPT I-cache on CPU4
[ 64.556146] GICv3: CPU4: found redistributor 4 region 0:0x000000003ff70000
[ 64.556689] CPU4: Booted secondary processor 0x0000000004 [0x612f0290]
[ 69.823670] rcu-torture:torture_onoff task: offline 0 failed: errno -16
[ 73.991960] psci: CPU7 killed (polled 0 ms)
[ 74.239626] rcu-torture: rcu_torture_read_exit: Start of episode
[ 74.243863] rcu-torture: rcu_torture_read_exit: End of episode
I then tried v6.2-rc4 with defconfig + RCU_TORTURE and your command
line, and CPU0 does seem to hotplug off correctly:
[ 47.217109] psci: CPU3 killed (polled 0 ms)
[ 52.241009] Detected VIPT I-cache on CPU3
[ 52.241227] cacheinfo: Unable to detect cache hierarchy for CPU 3
[ 52.241481] GICv3: CPU3: found redistributor 3 region 0:0x000000003ff50000
[ 52.241849] CPU3: Booted secondary processor 0x0000000003 [0x612f0290]
[ 56.337011] psci: CPU0 killed (polled 0 ms)
[...]
[ 121.090339] rcu-torture: Free-Block Circulation: 922 920 919 918 917 916 914 913 912 911 0
[ 125.574311] Detected VIPT I-cache on CPU0
[ 125.574557] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 125.574901] GICv3: CPU0: found redistributor 0 region 0:0x000000003fef0000
[ 125.575322] CPU0: Booted secondary processor 0x0000000000 [0x612f0290]
[ 130.176893] rcu-torture: rcu_torture_read_exit: Start of episode
[ 130.317001] psci: CPU0 killed (polled 0 ms)
[...]
[ 225.588999] Detected VIPT I-cache on CPU0
[ 225.589224] cacheinfo: Unable to detect cache hierarchy for CPU 0
[ 225.589535] GICv3: CPU0: found redistributor 0 region 0:0x000000003fef0000
[ 225.589946] CPU0: Booted secondary processor 0x0000000000 [0x612f0290]
No such error is being reported.
Is there anything special in your config that would help trigger
this with the current tip of tree?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 34+ messages
2023-01-16 17:03 arm64 torture test hotplug failures (offlining causes -EBUSY) Joel Fernandes
2023-01-16 18:03 ` Marc Zyngier [this message]
2023-01-16 22:43 ` Joel Fernandes
2023-01-16 18:32 ` Zhouyi Zhou
2023-01-16 22:38 ` Joel Fernandes
2023-01-17 0:15 ` Joel Fernandes
2023-01-17 0:37 ` Zhouyi Zhou
2023-01-17 1:45 ` Joel Fernandes
2023-01-17 3:15 ` Zhouyi Zhou
2023-01-17 4:34 ` Joel Fernandes
2023-01-17 11:42 ` Zhouyi Zhou
2023-01-17 19:50 ` Joel Fernandes
2023-01-18 10:15 ` Zhouyi Zhou
2023-01-18 15:51 ` Joel Fernandes
2023-01-17 4:30 ` Paul E. McKenney
2023-01-17 4:36 ` Joel Fernandes
2023-01-17 4:54 ` Paul E. McKenney
2023-01-17 20:02 ` Joel Fernandes
2023-01-17 20:42 ` Paul E. McKenney
2023-01-18 2:17 ` Joel Fernandes
2023-01-18 4:00 ` Paul E. McKenney
2023-01-18 16:51 ` Will Deacon
2023-01-18 17:56 ` Paul E. McKenney
2023-01-18 22:01 ` Joel Fernandes
2023-01-19 9:12 ` Mark Rutland
2023-01-18 22:37 ` Joel Fernandes
2023-01-18 22:39 ` Joel Fernandes
2023-01-19 0:15 ` Paul E. McKenney
2023-01-19 0:53 ` Joel Fernandes
2023-01-19 3:21 ` Zhouyi Zhou
2023-01-19 8:26 ` Joel Fernandes
2023-01-19 12:17 ` Zhouyi Zhou
2023-01-19 13:57 ` Frederic Weisbecker
2023-01-19 20:25 ` Joel Fernandes