From mboxrd@z Thu Jan 1 00:00:00 1970
From: Waiman Long
Subject: Re: [PATCH 6/8] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
Date: Thu, 2 Jul 2020 17:01:06 -0400
Message-ID: <6b8ccb02-53ca-35d2-0dc6-2fc8e5523a97@redhat.com>
References: <20200702074839.1057733-1-npiggin@gmail.com>
 <20200702074839.1057733-7-npiggin@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Return-path:
In-Reply-To: <20200702074839.1057733-7-npiggin@gmail.com>
Content-Language: en-US
Sender: linux-arch-owner@vger.kernel.org
To: Nicholas Piggin
Cc: Will Deacon, Peter Zijlstra, Boqun Feng, Ingo Molnar,
 Anton Blanchard, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org
List-Id: virtualization@lists.linuxfoundation.org

On 7/2/20 3:48 AM, Nicholas Piggin wrote:
> Signed-off-by: Nicholas Piggin
> ---
>  arch/powerpc/include/asm/paravirt.h           | 23 ++++++++
>  arch/powerpc/include/asm/qspinlock.h          | 55 +++++++++++++++++++
>  arch/powerpc/include/asm/qspinlock_paravirt.h |  5 ++
>  arch/powerpc/platforms/pseries/Kconfig        |  5 ++
>  arch/powerpc/platforms/pseries/setup.c        |  6 +-
>  include/asm-generic/qspinlock.h               |  2 +
>  6 files changed, 95 insertions(+), 1 deletion(-)
>  create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h
>
> diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
> index 7a8546660a63..5fae9dfa6fe9 100644
> --- a/arch/powerpc/include/asm/paravirt.h
> +++ b/arch/powerpc/include/asm/paravirt.h
> @@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
>  {
> 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
>  }
> +
> +static inline void prod_cpu(int cpu)
> +{
> +	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
> +}
> +
> +static inline void yield_to_any(void)
> +{
> +	plpar_hcall_norets(H_CONFER, -1, 0);
> +}
>  #else
>  static inline bool is_shared_processor(void)
>  {
> @@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
>  {
> 	___bad_yield_to_preempted(); /* This would be a bug */
>  }
> +
> +extern void ___bad_yield_to_any(void);
> +static inline void yield_to_any(void)
> +{
> +	___bad_yield_to_any(); /* This would be a bug */
> +}
> +
> +extern void ___bad_prod_cpu(void);
> +static inline void prod_cpu(int cpu)
> +{
> +	___bad_prod_cpu(); /* This would be a bug */
> +}
> +
>  #endif
>
>  #define vcpu_is_preempted vcpu_is_preempted
> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> index f84da77b6bb7..997a9a32df77 100644
> --- a/arch/powerpc/include/asm/qspinlock.h
> +++ b/arch/powerpc/include/asm/qspinlock.h
> @@ -3,9 +3,36 @@
>  #define _ASM_POWERPC_QSPINLOCK_H
>
>  #include <asm-generic/qspinlock_types.h>
> +#include <asm/paravirt.h>
>
>  #define _Q_PENDING_LOOPS	(1 << 9)	/* not tuned */
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> +extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> +
> +static __always_inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> +{
> +	if (!is_shared_processor())
> +		native_queued_spin_lock_slowpath(lock, val);
> +	else
> +		__pv_queued_spin_lock_slowpath(lock, val);
> +}

You may need to match the use of __pv_queued_spin_lock_slowpath() with
the corresponding __pv_queued_spin_unlock(), e.g.

#define queued_spin_unlock queued_spin_unlock
static inline void queued_spin_unlock(struct qspinlock *lock)
{
	if (!is_shared_processor())
		smp_store_release(&lock->locked, 0);
	else
		__pv_queued_spin_unlock(lock);
}

Otherwise, pv_kick() will never be called.

Cheers,
Longman