From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 27 Mar 2026 10:51:49 -0700
To: mm-commits@vger.kernel.org, hch@lst.de, akpm@linux-foundation.org
From: Andrew Morton <akpm@linux-foundation.org>
Subject: + xor-use-static_call-for-xor_gen.patch added to mm-nonmm-unstable branch
Message-Id: <20260327175150.0655DC19423@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: xor: use static_call for xor_gen
has been added to the -mm mm-nonmm-unstable branch.  Its filename is
     xor-use-static_call-for-xor_gen.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/xor-use-static_call-for-xor_gen.patch

This patch will later appear in the mm-nonmm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: Christoph Hellwig <hch@lst.de>
Subject: xor: use static_call for xor_gen
Date: Fri, 27 Mar 2026 07:16:59 +0100

Avoid the indirect call for xor generation by using a static_call.

Link: https://lkml.kernel.org/r/20260327061704.3707577-28-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Albert Ou
Cc: Alexander Gordeev
Cc: Alexandre Ghiti
Cc: Andreas Larsson
Cc: Anton Ivanov
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: "Borislav Petkov (AMD)"
Cc: Catalin Marinas
Cc: Chris Mason
Cc: Christian Borntraeger
Cc: Dan Williams
Cc: David S. Miller
Cc: David Sterba
Cc: Heiko Carstens
Cc: Herbert Xu
Cc: "H. Peter Anvin"
Cc: Huacai Chen
Cc: Ingo Molnar
Cc: Jason A. Donenfeld
Cc: Johannes Berg
Cc: Li Nan
Cc: Madhavan Srinivasan
Cc: Magnus Lindholm
Cc: Matt Turner
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Palmer Dabbelt
Cc: Richard Henderson
Cc: Richard Weinberger
Cc: Russell King
Cc: Song Liu
Cc: Sven Schnelle
Cc: Ted Ts'o
Cc: Vasily Gorbik
Cc: WANG Xuerui
Cc: Will Deacon
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/raid/xor/xor-core.c |   22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

--- a/lib/raid/xor/xor-core.c~xor-use-static_call-for-xor_gen
+++ a/lib/raid/xor/xor-core.c
@@ -11,10 +11,10 @@
 #include
 #include
 #include
+#include <linux/static_call.h>

 #include "xor_impl.h"

-/* The xor routine to use. */
-static struct xor_block_template *active_template;
+DEFINE_STATIC_CALL_NULL(xor_gen_impl, *xor_block_8regs.xor_gen);

 /**
  * xor_gen - generate RAID-style XOR information
@@ -37,13 +37,13 @@ void xor_gen(void *dest, void **srcs, un
 	WARN_ON_ONCE(bytes == 0);
 	WARN_ON_ONCE(bytes & 511);

-	active_template->xor_gen(dest, srcs, src_cnt, bytes);
+	static_call(xor_gen_impl)(dest, srcs, src_cnt, bytes);
 }
 EXPORT_SYMBOL(xor_gen);

 /* Set of all registered templates. */
 static struct xor_block_template *__initdata template_list;
-static bool __initdata xor_forced = false;
+static struct xor_block_template *forced_template;

 /**
  * xor_register - register a XOR template
@@ -69,7 +69,7 @@ void __init xor_register(struct xor_bloc
  */
 void __init xor_force(struct xor_block_template *tmpl)
 {
-	active_template = tmpl;
+	forced_template = tmpl;
 }

 #define BENCH_SIZE 4096
@@ -111,7 +111,7 @@ static int __init calibrate_xor_blocks(v
 	void *b1, *b2;
 	struct xor_block_template *f, *fastest;

-	if (xor_forced)
+	if (forced_template)
 		return 0;

 	b1 = (void *) __get_free_pages(GFP_KERNEL, 2);
@@ -128,7 +128,7 @@ static int __init calibrate_xor_blocks(v
 		if (f->speed > fastest->speed)
 			fastest = f;
 	}
-	active_template = fastest;
+	static_call_update(xor_gen_impl, fastest->xor_gen);

 	pr_info("xor: using function: %s (%d MB/sec)\n",
 		fastest->name, fastest->speed);
@@ -156,10 +156,10 @@ static int __init xor_init(void)
	 * If this arch/cpu has a short-circuited selection, don't loop through
	 * all the possible functions, just use the best one.
	 */
-	if (active_template) {
+	if (forced_template) {
 		pr_info("xor: automatically using best checksumming function %-10s\n",
-			active_template->name);
-		xor_forced = true;
+			forced_template->name);
+		static_call_update(xor_gen_impl, forced_template->xor_gen);
 		return 0;
 	}
@@ -170,7 +170,7 @@ static int __init xor_init(void)
	 * Pick the first template as the temporary default until calibration
	 * happens.
	 */
-	active_template = template_list;
+	static_call_update(xor_gen_impl, template_list->xor_gen);
 	return 0;
 #endif
 }
_

Patches currently in -mm which might be from hch@lst.de are

xor-assert-that-xor_blocks-is-not-call-from-interrupt-context.patch
arm-xor-remove-in_interrupt-handling.patch
arm64-xor-fix-conflicting-attributes-for-xor_block_template.patch
um-xor-cleanup-xorh.patch
xor-move-to-lib-raid.patch
xor-small-cleanups.patch
xor-cleanup-registration-and-probing.patch
xor-split-xorh.patch
xor-remove-macro-abuse-for-xor-implementation-registrations.patch
xor-move-generic-implementations-out-of-asm-generic-xorh.patch
alpha-move-the-xor-code-to-lib-raid.patch
arm-move-the-xor-code-to-lib-raid.patch
arm64-move-the-xor-code-to-lib-raid.patch
loongarch-move-the-xor-code-to-lib-raid.patch
powerpc-move-the-xor-code-to-lib-raid.patch
riscv-move-the-xor-code-to-lib-raid.patch
sparc-move-the-xor-code-to-lib-raid.patch
s390-move-the-xor-code-to-lib-raid.patch
x86-move-the-xor-code-to-lib-raid.patch
xor-avoid-indirect-calls-for-arm64-optimized-ops.patch
xor-make-xorko-self-contained-in-lib-raid.patch
xor-add-a-better-public-api.patch
xor-add-a-better-public-api-2.patch
async_xor-use-xor_gen.patch
btrfs-use-xor_gen.patch
xor-pass-the-entire-operation-to-the-low-level-ops.patch
xor-use-static_call-for-xor_gen.patch
xor-add-a-kunit-test-case.patch