From: Andrew Donnellan <ajd@linux.ibm.com>
To: "Christopher M. Riedl" <cmr@informatik.wtf>, linuxppc-dev@ozlabs.org
Subject: Re: [PATCH 3/3] powerpc/spinlock: Fix oops in shared-processor spinlocks
Date: Thu, 1 Aug 2019 13:33:25 +1000
Message-ID: <750b390e-678d-0b26-d004-f05f1dcd52fb@linux.ibm.com>
In-Reply-To: <20190728125438.1550-4-cmr@informatik.wtf>
On 28/7/19 10:54 pm, Christopher M. Riedl wrote:
> Booting w/ ppc64le_defconfig + CONFIG_PREEMPT results in the kernel
> trace below, caused by calling the shared-processor spinlock code while
> not running in an SPLPAR. Previously, the out-of-line spinlock
> implementations were selected based on CONFIG_PPC_SPLPAR at compile
> time, without a runtime shared-processor LPAR check.
>
> To fix this, call the actual spinlock implementations from a set of
> common functions, spin_yield() and rw_yield(), which check for a
> shared-processor LPAR at runtime and select the appropriate lock
> implementation.
>
> [ 0.430878] BUG: Kernel NULL pointer dereference at 0x00000100
> [ 0.431991] Faulting instruction address: 0xc000000000097f88
> [ 0.432934] Oops: Kernel access of bad area, sig: 7 [#1]
> [ 0.433448] LE PAGE_SIZE=64K MMU=Radix MMU=Hash PREEMPT SMP NR_CPUS=2048 NUMA PowerNV
> [ 0.434479] Modules linked in:
> [ 0.435055] CPU: 0 PID: 2 Comm: kthreadd Not tainted 5.2.0-rc6-00491-g249155c20f9b #28
> [ 0.435730] NIP: c000000000097f88 LR: c000000000c07a88 CTR: c00000000015ca10
> [ 0.436383] REGS: c0000000727079f0 TRAP: 0300 Not tainted (5.2.0-rc6-00491-g249155c20f9b)
> [ 0.437004] MSR: 9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE> CR: 84000424 XER: 20040000
> [ 0.437874] CFAR: c000000000c07a84 DAR: 0000000000000100 DSISR: 00080000 IRQMASK: 1
> [ 0.437874] GPR00: c000000000c07a88 c000000072707c80 c000000001546300 c00000007be38a80
> [ 0.437874] GPR04: c0000000726f0c00 0000000000000002 c00000007279c980 0000000000000100
> [ 0.437874] GPR08: c000000001581b78 0000000080000001 0000000000000008 c00000007279c9b0
> [ 0.437874] GPR12: 0000000000000000 c000000001730000 c000000000142558 0000000000000000
> [ 0.437874] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> [ 0.437874] GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> [ 0.437874] GPR24: c00000007be38a80 c000000000c002f4 0000000000000000 0000000000000000
> [ 0.437874] GPR28: c000000072221a00 c0000000726c2600 c00000007be38a80 c00000007be38a80
> [ 0.443992] NIP [c000000000097f88] __spin_yield+0x48/0xa0
> [ 0.444523] LR [c000000000c07a88] __raw_spin_lock+0xb8/0xc0
> [ 0.445080] Call Trace:
> [ 0.445670] [c000000072707c80] [c000000072221a00] 0xc000000072221a00 (unreliable)
> [ 0.446425] [c000000072707cb0] [c000000000bffb0c] __schedule+0xbc/0x850
> [ 0.447078] [c000000072707d70] [c000000000c002f4] schedule+0x54/0x130
> [ 0.447694] [c000000072707da0] [c0000000001427dc] kthreadd+0x28c/0x2b0
> [ 0.448389] [c000000072707e20] [c00000000000c1cc] ret_from_kernel_thread+0x5c/0x70
> [ 0.449143] Instruction dump:
> [ 0.449821] 4d9e0020 552a043e 210a07ff 79080fe0 0b080000 3d020004 3908b878 794a1f24
> [ 0.450587] e8e80000 7ce7502a e8e70000 38e70100 <7ca03c2c> 70a70001 78a50020 4d820020
> [ 0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
>
> Signed-off-by: Christopher M. Riedl <cmr@informatik.wtf>
This should probably head to stable?
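If it does, something like the following in the tag block would do it
(the Fixes: line is a placeholder - I haven't tracked down the commit
that introduced the breakage):

    Fixes: <first bad commit> ("<subject of that commit>")
    Cc: stable@vger.kernel.org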
> ---
> arch/powerpc/include/asm/spinlock.h | 36 ++++++++++++++++++++---------
> 1 file changed, 25 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 1e7721176f39..8161809c6be1 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -103,11 +103,9 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
> /* We only yield to the hypervisor if we are in shared processor mode */
> void splpar_spin_yield(arch_spinlock_t *lock);
> void splpar_rw_yield(arch_rwlock_t *lock);
> -#define __spin_yield(x) splpar_spin_yield(x)
> -#define __rw_yield(x) splpar_rw_yield(x)
> #else /* SPLPAR */
> -#define __spin_yield(x) barrier()
> -#define __rw_yield(x) barrier()
> +#define splpar_spin_yield(lock)
> +#define splpar_rw_yield(lock)
I'd prefer using #ifdef around the function definition and declaring an
alternative function with an empty body for the !SPLPAR case; seeing an
empty #define just feels a bit weird.
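Something like this (rough, untested sketch) is what I have in mind:

    #ifdef CONFIG_PPC_SPLPAR
    /* We only yield to the hypervisor if we are in shared processor mode */
    void splpar_spin_yield(arch_spinlock_t *lock);
    void splpar_rw_yield(arch_rwlock_t *lock);
    #else
    static inline void splpar_spin_yield(arch_spinlock_t *lock) {}
    static inline void splpar_rw_yield(arch_rwlock_t *lock) {}
    #endif

That way the !CONFIG_PPC_SPLPAR stubs still get type checking and we can
drop the empty #defines entirely.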
> #endif
>
> static inline bool is_shared_processor(void)
> @@ -121,6 +119,22 @@ static inline bool is_shared_processor(void)
> #endif
> }
>
> +static inline void spin_yield(arch_spinlock_t *lock)
> +{
> + if (is_shared_processor())
> + splpar_spin_yield(lock);
> + else
> + barrier();
> +}
> +
> +static inline void rw_yield(arch_rwlock_t *lock)
> +{
> + if (is_shared_processor())
> + splpar_rw_yield(lock);
> + else
> + barrier();
> +}
> +
> static inline void arch_spin_lock(arch_spinlock_t *lock)
> {
> while (1) {
> @@ -129,7 +143,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
> do {
> HMT_low();
> if (is_shared_processor())
> - __spin_yield(lock);
> + spin_yield(lock);
> } while (unlikely(lock->slock != 0));
> HMT_medium();
> }
> @@ -148,7 +162,7 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
> do {
> HMT_low();
> if (is_shared_processor())
> - __spin_yield(lock);
> + spin_yield(lock);
> } while (unlikely(lock->slock != 0));
> HMT_medium();
> local_irq_restore(flags_dis);
> @@ -238,7 +252,7 @@ static inline void arch_read_lock(arch_rwlock_t *rw)
> do {
> HMT_low();
> if (is_shared_processor())
> - __rw_yield(rw);
> + rw_yield(rw);
> } while (unlikely(rw->lock < 0));
> HMT_medium();
> }
> @@ -252,7 +266,7 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
> do {
> HMT_low();
> if (is_shared_processor())
> - __rw_yield(rw);
> + rw_yield(rw);
> } while (unlikely(rw->lock != 0));
> HMT_medium();
> }
> @@ -292,9 +306,9 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
> rw->lock = 0;
> }
>
> -#define arch_spin_relax(lock) __spin_yield(lock)
> -#define arch_read_relax(lock) __rw_yield(lock)
> -#define arch_write_relax(lock) __rw_yield(lock)
> +#define arch_spin_relax(lock) spin_yield(lock)
> +#define arch_read_relax(lock) rw_yield(lock)
> +#define arch_write_relax(lock) rw_yield(lock)
>
> /* See include/linux/spinlock.h */
> #define smp_mb__after_spinlock() smp_mb()
>
--
Andrew Donnellan OzLabs, ADL Canberra
ajd@linux.ibm.com IBM Australia Limited