From mboxrd@z Thu Jan 1 00:00:00 1970
From: guoren@kernel.org
To: paul.walmsley@sifive.com, palmer@dabbelt.com, guoren@kernel.org,
	bjorn@rivosinc.com, conor@kernel.org, leobras@redhat.com,
	peterz@infradead.org, parri.andrea@gmail.com, will@kernel.org,
	longman@redhat.com, boqun.feng@gmail.com, arnd@arndb.de,
	alexghiti@rivosinc.com, ajones@ventanamicro.com,
	rkrcmar@ventanamicro.com
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	Guo Ren
Subject: [PATCH 2/3] RISC-V: paravirt: Add pvqspinlock frontend
Date: Sat, 21 Dec 2024 22:39:16 -0500
Message-Id: <20241222033917.1754495-3-guoren@kernel.org>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20241222033917.1754495-1-guoren@kernel.org>
References: <20241222033917.1754495-1-guoren@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Guo Ren

Add a virtualization-friendly, unfair qspinlock frontend that halts the
virtual CPU rather than spinning. Use static_call to switch between:

  native_queued_spin_lock_slowpath()
  __pv_queued_spin_lock_slowpath()
  native_queued_spin_unlock()
  __pv_queued_spin_unlock()

Add the pv_wait() and pv_kick() implementations.
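
For reference, the guest-side locking flow after this patch looks roughly
like the sketch below. It is illustrative only: example_lock() is a made-up
helper mirroring the generic queued_spin_lock() fast path and is not part of
this diff; only the slowpath dispatch is what this patch routes through
static_call.

	/* Sketch: fast path is unchanged, only the slowpath is switchable. */
	static void example_lock(struct qspinlock *lock)
	{
		u32 val = 0;

		/* Uncontended: a single acquire cmpxchg takes the lock. */
		if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val,
						      _Q_LOCKED_VAL)))
			return;

		/*
		 * Contended: static_call() lands in either
		 * native_queued_spin_lock_slowpath() or
		 * __pv_queued_spin_lock_slowpath(); the PV variant parks the
		 * vCPU via pv_wait() and is woken by pv_kick() on unlock.
		 */
		queued_spin_lock_slowpath(lock, val);
	}
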
Reviewed-by: Leonardo Bras
Signed-off-by: Guo Ren
Signed-off-by: Guo Ren
---
 arch/riscv/Kconfig                          | 12 ++++
 arch/riscv/include/asm/Kbuild               |  1 -
 arch/riscv/include/asm/qspinlock.h          | 35 +++++++++++
 arch/riscv/include/asm/qspinlock_paravirt.h | 28 +++++++++
 arch/riscv/kernel/Makefile                  |  1 +
 arch/riscv/kernel/qspinlock_paravirt.c      | 67 +++++++++++++++++++++
 arch/riscv/kernel/setup.c                   |  4 ++
 7 files changed, 147 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/qspinlock.h
 create mode 100644 arch/riscv/include/asm/qspinlock_paravirt.h
 create mode 100644 arch/riscv/kernel/qspinlock_paravirt.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d4a7ca0388c0..e241ac39ecd6 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -1071,6 +1071,18 @@ config PARAVIRT_TIME_ACCOUNTING
 
 	  If in doubt, say N here.
 
+config PARAVIRT_SPINLOCKS
+	bool "Paravirtualization layer for spinlocks"
+	depends on QUEUED_SPINLOCKS
+	default y
+	help
+	  Paravirtualized spinlocks allow an unfair qspinlock to replace the
+	  test-and-set kvm-guest virt spinlock implementation with something
+	  virtualization-friendly, for example, halting the virtual CPU rather
+	  than spinning.
+
+	  If you are unsure how to answer this question, answer Y.
+
 config RELOCATABLE
 	bool "Build a relocatable kernel"
 	depends on MMU && 64BIT && !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index de13d5a234f8..c726330d2b9f 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -12,6 +12,5 @@ generic-y += spinlock_types.h
 generic-y += ticket_spinlock.h
 generic-y += qrwlock.h
 generic-y += qrwlock_types.h
-generic-y += qspinlock.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
diff --git a/arch/riscv/include/asm/qspinlock.h b/arch/riscv/include/asm/qspinlock.h
new file mode 100644
index 000000000000..1d9f32334ff1
--- /dev/null
+++ b/arch/riscv/include/asm/qspinlock.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c), 2024 Alibaba
+ * Authors:
+ *	Guo Ren
+ */
+
+#ifndef _ASM_RISCV_QSPINLOCK_H
+#define _ASM_RISCV_QSPINLOCK_H
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/qspinlock_paravirt.h>
+
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD		(1 << 15)
+
+void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+void __pv_init_lock_hash(void);
+void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	static_call(pv_queued_spin_lock_slowpath)(lock, val);
+}
+
+#define queued_spin_unlock	queued_spin_unlock
+static inline void queued_spin_unlock(struct qspinlock *lock)
+{
+	static_call(pv_queued_spin_unlock)(lock);
+}
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_RISCV_QSPINLOCK_H */
diff --git a/arch/riscv/include/asm/qspinlock_paravirt.h b/arch/riscv/include/asm/qspinlock_paravirt.h
new file mode 100644
index 000000000000..a365203dd782
--- /dev/null
+++ b/arch/riscv/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c), 2024 Alibaba Cloud
+ * Authors:
+ *	Guo Ren
+ */
+
+#ifndef _ASM_RISCV_QSPINLOCK_PARAVIRT_H
+#define _ASM_RISCV_QSPINLOCK_PARAVIRT_H
+
+void pv_wait(u8 *ptr, u8 val);
+void pv_kick(int cpu);
+
+void dummy_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+void dummy_queued_spin_unlock(struct qspinlock *lock);
+
+DECLARE_STATIC_CALL(pv_queued_spin_lock_slowpath, dummy_queued_spin_lock_slowpath);
+DECLARE_STATIC_CALL(pv_queued_spin_unlock, dummy_queued_spin_unlock);
+
+void __init pv_qspinlock_init(void);
+
+void __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked);
+
+bool pv_is_native_spin_unlock(void);
+
+void __pv_queued_spin_unlock(struct qspinlock *lock);
+
+#endif /* _ASM_RISCV_QSPINLOCK_PARAVIRT_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 063d1faf5a53..79f823e0e57d 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -123,3 +123,4 @@ obj-$(CONFIG_COMPAT)	+= compat_vdso/
 obj-$(CONFIG_64BIT)	+= pi/
 obj-$(CONFIG_ACPI)	+= acpi.o
 obj-$(CONFIG_ACPI_NUMA)	+= acpi_numa.o
+obj-$(CONFIG_PARAVIRT_SPINLOCKS) += qspinlock_paravirt.o
diff --git a/arch/riscv/kernel/qspinlock_paravirt.c b/arch/riscv/kernel/qspinlock_paravirt.c
new file mode 100644
index 000000000000..4ec4765f57f3
--- /dev/null
+++ b/arch/riscv/kernel/qspinlock_paravirt.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c), 2024 Alibaba Cloud
+ * Authors:
+ *	Guo Ren
+ */
+
+#include <linux/static_call.h>
+#include <asm/qspinlock_paravirt.h>
+#include <asm/sbi.h>
+
+void pv_kick(int cpu)
+{
+	sbi_ecall(SBI_EXT_PVLOCK, SBI_EXT_PVLOCK_KICK_CPU,
+		  cpuid_to_hartid_map(cpu), 0, 0, 0, 0, 0);
+	return;
+}
+
+void pv_wait(u8 *ptr, u8 val)
+{
+	unsigned long flags;
+
+	if (in_nmi())
+		return;
+
+	local_irq_save(flags);
+	if (READ_ONCE(*ptr) != val)
+		goto out;
+
+	wait_for_interrupt();
+out:
+	local_irq_restore(flags);
+}
+
+static void native_queued_spin_unlock(struct qspinlock *lock)
+{
+	smp_store_release(&lock->locked, 0);
+}
+
+DEFINE_STATIC_CALL(pv_queued_spin_lock_slowpath, native_queued_spin_lock_slowpath);
+EXPORT_STATIC_CALL(pv_queued_spin_lock_slowpath);
+
+DEFINE_STATIC_CALL(pv_queued_spin_unlock, native_queued_spin_unlock);
+EXPORT_STATIC_CALL(pv_queued_spin_unlock);
+
+void __init pv_qspinlock_init(void)
+{
+	if (num_possible_cpus() == 1)
+		return;
+
+	if (!sbi_probe_extension(SBI_EXT_PVLOCK))
+		return;
+
+	pr_info("PV qspinlocks enabled\n");
+	__pv_init_lock_hash();
+
+	static_call_update(pv_queued_spin_lock_slowpath, __pv_queued_spin_lock_slowpath);
+	static_call_update(pv_queued_spin_unlock, __pv_queued_spin_unlock);
+}
+
+bool pv_is_native_spin_unlock(void)
+{
+	if (static_call_query(pv_queued_spin_unlock) == native_queued_spin_unlock)
+		return true;
+	else
+		return false;
+}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 45010e71df86..8b51ff5c7300 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -278,6 +278,10 @@ static void __init riscv_spinlock_init(void)
 		pr_err("Queued spinlock without Zabha or Ziccrse");
 	else
 		pr_info("Queued spinlock %s: enabled\n", using_ext);
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+	pv_qspinlock_init();
+#endif
 }
 
 extern void __init init_rt_signal_env(void);
-- 
2.40.1