From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 02 Apr 2026 23:41:56 -0700
To: 
mm-commits@vger.kernel.org,will@kernel.org,tytso@mit.edu,svens@linux.ibm.com,song@kernel.org,richard@nod.at,richard.henderson@linaro.org,palmer@dabbelt.com,npiggin@gmail.com,mpe@ellerman.id.au,mingo@redhat.com,mattst88@gmail.com,maddy@linux.ibm.com,linux@armlinux.org.uk,linmag7@gmail.com,linan122@huawei.com,kernel@xen0n.name,johannes@sipsolutions.net,jason@zx2c4.com,hpa@zytor.com,herbert@gondor.apana.org.au,hca@linux.ibm.com,gor@linux.ibm.com,ebiggers@kernel.org,dsterba@suse.com,davem@davemloft.net,dan.j.williams@intel.com,clm@fb.com,chenhuacai@kernel.org,catalin.marinas@arm.com,bp@alien8.de,borntraeger@linux.ibm.com,arnd@arndb.de,ardb@kernel.org,aou@eecs.berkeley.edu,anton.ivanov@cambridgegreys.com,andreas@gaisler.com,alex@ghiti.fr,agordeev@linux.ibm.com,hch@lst.de,akpm@linux-foundation.org
From: Andrew Morton
Subject: [merged mm-nonmm-stable] riscv-move-the-xor-code-to-lib-raid.patch removed from -mm tree
Message-Id: <20260403064156.AB832C4CEF7@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The quilt patch titled
     Subject: riscv: move the XOR code to lib/raid/
has been removed from the -mm tree.  Its filename was
     riscv-move-the-xor-code-to-lib-raid.patch

This patch was dropped because it was merged into the mm-nonmm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Christoph Hellwig
Subject: riscv: move the XOR code to lib/raid/
Date: Fri, 27 Mar 2026 07:16:48 +0100

Move the optimized XOR code into lib/raid/ and include it in xor.ko
instead of always building it into the main kernel image.
Link: https://lkml.kernel.org/r/20260327061704.3707577-17-hch@lst.de
Signed-off-by: Christoph Hellwig
Reviewed-by: Eric Biggers
Tested-by: Eric Biggers
Cc: Albert Ou
Cc: Alexander Gordeev
Cc: Alexandre Ghiti
Cc: Andreas Larsson
Cc: Anton Ivanov
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: "Borislav Petkov (AMD)"
Cc: Catalin Marinas
Cc: Chris Mason
Cc: Christian Borntraeger
Cc: Dan Williams
Cc: David S. Miller
Cc: David Sterba
Cc: Heiko Carstens
Cc: Herbert Xu
Cc: "H. Peter Anvin"
Cc: Huacai Chen
Cc: Ingo Molnar
Cc: Jason A. Donenfeld
Cc: Johannes Berg
Cc: Li Nan
Cc: Madhavan Srinivasan
Cc: Magnus Lindholm
Cc: Matt Turner
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Palmer Dabbelt
Cc: Richard Henderson
Cc: Richard Weinberger
Cc: Russell King
Cc: Song Liu
Cc: Sven Schnelle
Cc: Ted Ts'o
Cc: Vasily Gorbik
Cc: WANG Xuerui
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 arch/riscv/include/asm/xor.h  |   54 ---------------------
 arch/riscv/lib/Makefile       |    1 
 arch/riscv/lib/xor.S          |   81 --------------------------------
 lib/raid/xor/Makefile         |    1 
 lib/raid/xor/riscv/xor-glue.c |   56 ++++++++++++++++++++++
 lib/raid/xor/riscv/xor.S      |   77 ++++++++++++++++++++++++++++++
 6 files changed, 136 insertions(+), 134 deletions(-)

--- a/arch/riscv/include/asm/xor.h~riscv-move-the-xor-code-to-lib-raid
+++ a/arch/riscv/include/asm/xor.h
@@ -2,60 +2,10 @@
 /*
  * Copyright (C) 2021 SiFive
  */
-
-#include
-#include
-#ifdef CONFIG_RISCV_ISA_V
 #include
-#include
-#include
-
-static void xor_vector_2(unsigned long bytes, unsigned long *__restrict p1,
-			 const unsigned long *__restrict p2)
-{
-	kernel_vector_begin();
-	xor_regs_2_(bytes, p1, p2);
-	kernel_vector_end();
-}
-
-static void xor_vector_3(unsigned long bytes, unsigned long *__restrict p1,
-			 const unsigned long *__restrict p2,
-			 const unsigned long *__restrict p3)
-{
-	kernel_vector_begin();
-	xor_regs_3_(bytes, p1, p2, p3);
-	kernel_vector_end();
-}
-
-static void xor_vector_4(unsigned long bytes, unsigned long *__restrict p1,
-			 const unsigned long *__restrict p2,
-			 const unsigned long *__restrict p3,
-			 const unsigned long *__restrict p4)
-{
-	kernel_vector_begin();
-	xor_regs_4_(bytes, p1, p2, p3, p4);
-	kernel_vector_end();
-}
-
-static void xor_vector_5(unsigned long bytes, unsigned long *__restrict p1,
-			 const unsigned long *__restrict p2,
-			 const unsigned long *__restrict p3,
-			 const unsigned long *__restrict p4,
-			 const unsigned long *__restrict p5)
-{
-	kernel_vector_begin();
-	xor_regs_5_(bytes, p1, p2, p3, p4, p5);
-	kernel_vector_end();
-}
+#include
 
-static struct xor_block_template xor_block_rvv = {
-	.name = "rvv",
-	.do_2 = xor_vector_2,
-	.do_3 = xor_vector_3,
-	.do_4 = xor_vector_4,
-	.do_5 = xor_vector_5
-};
-#endif /* CONFIG_RISCV_ISA_V */
+extern struct xor_block_template xor_block_rvv;
 
 #define arch_xor_init arch_xor_init
 static __always_inline void __init arch_xor_init(void)
--- a/arch/riscv/lib/Makefile~riscv-move-the-xor-code-to-lib-raid
+++ a/arch/riscv/lib/Makefile
@@ -16,5 +16,4 @@ lib-$(CONFIG_MMU) += uaccess.o
 lib-$(CONFIG_64BIT) += tishift.o
 lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
-lib-$(CONFIG_RISCV_ISA_V) += xor.o
 lib-$(CONFIG_RISCV_ISA_V) += riscv_v_helpers.o
diff --git a/arch/riscv/lib/xor.S a/arch/riscv/lib/xor.S
deleted file mode 100644
--- a/arch/riscv/lib/xor.S
+++ /dev/null
@@ -1,81 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Copyright (C) 2021 SiFive
- */
-#include
-#include
-#include
-
-SYM_FUNC_START(xor_regs_2_)
-	vsetvli a3, a0, e8, m8, ta, ma
-	vle8.v v0, (a1)
-	vle8.v v8, (a2)
-	sub a0, a0, a3
-	vxor.vv v16, v0, v8
-	add a2, a2, a3
-	vse8.v v16, (a1)
-	add a1, a1, a3
-	bnez a0, xor_regs_2_
-	ret
-SYM_FUNC_END(xor_regs_2_)
-EXPORT_SYMBOL(xor_regs_2_)
-
-SYM_FUNC_START(xor_regs_3_)
-	vsetvli a4, a0, e8, m8, ta, ma
-	vle8.v v0, (a1)
-	vle8.v v8, (a2)
-	sub a0, a0, a4
-	vxor.vv v0, v0, v8
-	vle8.v v16, (a3)
-	add a2, a2, a4
-	vxor.vv v16, v0, v16
-	add a3, a3, a4
-	vse8.v v16, (a1)
-	add a1, a1, a4
-	bnez a0, xor_regs_3_
-	ret
-SYM_FUNC_END(xor_regs_3_)
-EXPORT_SYMBOL(xor_regs_3_)
-
-SYM_FUNC_START(xor_regs_4_)
-	vsetvli a5, a0, e8, m8, ta, ma
-	vle8.v v0, (a1)
-	vle8.v v8, (a2)
-	sub a0, a0, a5
-	vxor.vv v0, v0, v8
-	vle8.v v16, (a3)
-	add a2, a2, a5
-	vxor.vv v0, v0, v16
-	vle8.v v24, (a4)
-	add a3, a3, a5
-	vxor.vv v16, v0, v24
-	add a4, a4, a5
-	vse8.v v16, (a1)
-	add a1, a1, a5
-	bnez a0, xor_regs_4_
-	ret
-SYM_FUNC_END(xor_regs_4_)
-EXPORT_SYMBOL(xor_regs_4_)
-
-SYM_FUNC_START(xor_regs_5_)
-	vsetvli a6, a0, e8, m8, ta, ma
-	vle8.v v0, (a1)
-	vle8.v v8, (a2)
-	sub a0, a0, a6
-	vxor.vv v0, v0, v8
-	vle8.v v16, (a3)
-	add a2, a2, a6
-	vxor.vv v0, v0, v16
-	vle8.v v24, (a4)
-	add a3, a3, a6
-	vxor.vv v0, v0, v24
-	vle8.v v8, (a5)
-	add a4, a4, a6
-	vxor.vv v16, v0, v8
-	add a5, a5, a6
-	vse8.v v16, (a1)
-	add a1, a1, a6
-	bnez a0, xor_regs_5_
-	ret
-SYM_FUNC_END(xor_regs_5_)
-EXPORT_SYMBOL(xor_regs_5_)
--- a/lib/raid/xor/Makefile~riscv-move-the-xor-code-to-lib-raid
+++ a/lib/raid/xor/Makefile
@@ -17,6 +17,7 @@ xor-$(CONFIG_ARM64) += arm64/xor-neon.o
 xor-$(CONFIG_CPU_HAS_LSX) += loongarch/xor_simd.o
 xor-$(CONFIG_CPU_HAS_LSX) += loongarch/xor_simd_glue.o
 xor-$(CONFIG_ALTIVEC) += powerpc/xor_vmx.o powerpc/xor_vmx_glue.o
+xor-$(CONFIG_RISCV_ISA_V) += riscv/xor.o riscv/xor-glue.o
 
 CFLAGS_arm/xor-neon.o += $(CC_FLAGS_FPU)
diff --git a/lib/raid/xor/riscv/xor-glue.c a/lib/raid/xor/riscv/xor-glue.c
new file mode 100644
--- /dev/null
+++ a/lib/raid/xor/riscv/xor-glue.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2021 SiFive
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+static void xor_vector_2(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2)
+{
+	kernel_vector_begin();
+	xor_regs_2_(bytes, p1, p2);
+	kernel_vector_end();
+}
+
+static void xor_vector_3(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3)
+{
+	kernel_vector_begin();
+	xor_regs_3_(bytes, p1, p2, p3);
+	kernel_vector_end();
+}
+
+static void xor_vector_4(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4)
+{
+	kernel_vector_begin();
+	xor_regs_4_(bytes, p1, p2, p3, p4);
+	kernel_vector_end();
+}
+
+static void xor_vector_5(unsigned long bytes, unsigned long *__restrict p1,
+			 const unsigned long *__restrict p2,
+			 const unsigned long *__restrict p3,
+			 const unsigned long *__restrict p4,
+			 const unsigned long *__restrict p5)
+{
+	kernel_vector_begin();
+	xor_regs_5_(bytes, p1, p2, p3, p4, p5);
+	kernel_vector_end();
+}
+
+struct xor_block_template xor_block_rvv = {
+	.name = "rvv",
+	.do_2 = xor_vector_2,
+	.do_3 = xor_vector_3,
+	.do_4 = xor_vector_4,
+	.do_5 = xor_vector_5
+};
diff --git a/lib/raid/xor/riscv/xor.S a/lib/raid/xor/riscv/xor.S
new file mode 100644
--- /dev/null
+++ a/lib/raid/xor/riscv/xor.S
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Copyright (C) 2021 SiFive
+ */
+#include
+#include
+#include
+
+SYM_FUNC_START(xor_regs_2_)
+	vsetvli a3, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a3
+	vxor.vv v16, v0, v8
+	add a2, a2, a3
+	vse8.v v16, (a1)
+	add a1, a1, a3
+	bnez a0, xor_regs_2_
+	ret
+SYM_FUNC_END(xor_regs_2_)
+
+SYM_FUNC_START(xor_regs_3_)
+	vsetvli a4, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a4
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a4
+	vxor.vv v16, v0, v16
+	add a3, a3, a4
+	vse8.v v16, (a1)
+	add a1, a1, a4
+	bnez a0, xor_regs_3_
+	ret
+SYM_FUNC_END(xor_regs_3_)
+
+SYM_FUNC_START(xor_regs_4_)
+	vsetvli a5, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a5
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a5
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a5
+	vxor.vv v16, v0, v24
+	add a4, a4, a5
+	vse8.v v16, (a1)
+	add a1, a1, a5
+	bnez a0, xor_regs_4_
+	ret
+SYM_FUNC_END(xor_regs_4_)
+
+SYM_FUNC_START(xor_regs_5_)
+	vsetvli a6, a0, e8, m8, ta, ma
+	vle8.v v0, (a1)
+	vle8.v v8, (a2)
+	sub a0, a0, a6
+	vxor.vv v0, v0, v8
+	vle8.v v16, (a3)
+	add a2, a2, a6
+	vxor.vv v0, v0, v16
+	vle8.v v24, (a4)
+	add a3, a3, a6
+	vxor.vv v0, v0, v24
+	vle8.v v8, (a5)
+	add a4, a4, a6
+	vxor.vv v16, v0, v8
+	add a5, a5, a6
+	vse8.v v16, (a1)
+	add a1, a1, a6
+	bnez a0, xor_regs_5_
+	ret
+SYM_FUNC_END(xor_regs_5_)
_

Patches currently in -mm which might be from hch@lst.de are