From: Christoph Hellwig
To: Andrew Morton
Cc: Richard Henderson, Matt Turner, Magnus Lindholm, Russell King,
 Catalin Marinas, Will Deacon, Huacai Chen, WANG Xuerui,
 Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
 "Christophe Leroy (CS GROUP)", Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Alexandre Ghiti, Heiko Carstens,
 Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
 "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov,
 Johannes Berg, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Herbert Xu, Dan Williams,
 Chris Mason, David Sterba, Arnd Bergmann, Song Liu, Yu Kuai, Li Nan,
 "Theodore Ts'o", "Jason A. Donenfeld", linux-alpha@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 loongarch@lists.linux.dev, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
 linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 06/27] xor: cleanup registration and probing
Date: Wed, 11 Mar 2026 08:03:38 +0100
Message-ID: <20260311070416.972667-7-hch@lst.de>
In-Reply-To: <20260311070416.972667-1-hch@lst.de>
References: <20260311070416.972667-1-hch@lst.de>

Originally the XOR code benchmarked all algorithms at load time, but it
has since been modified multiple times to allow forcing an algorithm,
and commit 524ccdbdfb52 ("crypto: xor - defer load time benchmark to a
later time") then changed the logic to a two-step process of
registration and benchmarking, but only for the built-in case.
Rework this so that the XOR_TRY_TEMPLATES macro magic now always just
deals with adding the templates to the list, and benchmarking is always
done in a second pass: for modular builds from module_init, and for the
built-in case using a separate initcall level.

Signed-off-by: Christoph Hellwig
---
 lib/raid/xor/xor-core.c | 98 ++++++++++++++++++++---------------------
 1 file changed, 48 insertions(+), 50 deletions(-)

diff --git a/lib/raid/xor/xor-core.c b/lib/raid/xor/xor-core.c
index edb4e498da60..88667a89b75b 100644
--- a/lib/raid/xor/xor-core.c
+++ b/lib/raid/xor/xor-core.c
@@ -52,29 +52,14 @@ EXPORT_SYMBOL(xor_blocks);
 
 /* Set of all registered templates. */
 static struct xor_block_template *__initdata template_list;
+static bool __initdata xor_forced = false;
 
-#ifndef MODULE
 static void __init do_xor_register(struct xor_block_template *tmpl)
 {
 	tmpl->next = template_list;
 	template_list = tmpl;
 }
 
-static int __init register_xor_blocks(void)
-{
-	active_template = XOR_SELECT_TEMPLATE(NULL);
-
-	if (!active_template) {
-#define xor_speed	do_xor_register
-		// register all the templates and pick the first as the default
-		XOR_TRY_TEMPLATES;
-#undef xor_speed
-		active_template = template_list;
-	}
-	return 0;
-}
-#endif
-
 #define BENCH_SIZE	4096
 #define REPS		800U
 
@@ -85,9 +70,6 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
 	unsigned long reps;
 	ktime_t min, start, t0;
 
-	tmpl->next = template_list;
-	template_list = tmpl;
-
 	preempt_disable();
 
 	reps = 0;
@@ -111,63 +93,79 @@ do_xor_speed(struct xor_block_template *tmpl, void *b1, void *b2)
 	pr_info("   %-16s: %5d MB/sec\n", tmpl->name, speed);
 }
 
-static int __init
-calibrate_xor_blocks(void)
+static int __init calibrate_xor_blocks(void)
 {
 	void *b1, *b2;
 	struct xor_block_template *f, *fastest;
 
-	fastest = XOR_SELECT_TEMPLATE(NULL);
-
-	if (fastest) {
-		printk(KERN_INFO "xor: automatically using best "
-				 "checksumming function %-10s\n",
-		       fastest->name);
-		goto out;
-	}
+	if (xor_forced)
+		return 0;
 
 	b1 = (void *) __get_free_pages(GFP_KERNEL, 2);
 	if (!b1) {
-		printk(KERN_WARNING "xor: Yikes! No memory available.\n");
+		pr_warn("xor: Yikes! No memory available.\n");
 		return -ENOMEM;
 	}
 	b2 = b1 + 2*PAGE_SIZE + BENCH_SIZE;
 
-	/*
-	 * If this arch/cpu has a short-circuited selection, don't loop through
-	 * all the possible functions, just test the best one
-	 */
-
-#define xor_speed(templ)	do_xor_speed((templ), b1, b2)
-
-	printk(KERN_INFO "xor: measuring software checksum speed\n");
-	template_list = NULL;
-	XOR_TRY_TEMPLATES;
+	pr_info("xor: measuring software checksum speed\n");
 	fastest = template_list;
-	for (f = fastest; f; f = f->next)
+	for (f = template_list; f; f = f->next) {
+		do_xor_speed(f, b1, b2);
 		if (f->speed > fastest->speed)
 			fastest = f;
-
+	}
+	active_template = fastest;
 	pr_info("xor: using function: %s (%d MB/sec)\n",
 		fastest->name, fastest->speed);
+	free_pages((unsigned long)b1, 2);
+	return 0;
+}
+
+static int __init xor_init(void)
+{
+	/*
+	 * If this arch/cpu has a short-circuited selection, don't loop through
+	 * all the possible functions, just use the best one.
+	 */
+	active_template = XOR_SELECT_TEMPLATE(NULL);
+	if (active_template) {
+		pr_info("xor: automatically using best checksumming function %-10s\n",
+			active_template->name);
+		xor_forced = true;
+		return 0;
+	}
+
+#define xor_speed	do_xor_register
+	XOR_TRY_TEMPLATES;
 #undef xor_speed
-	free_pages((unsigned long)b1, 2);
-out:
-	active_template = fastest;
+#ifdef MODULE
+	return calibrate_xor_blocks();
+#else
+	/*
+	 * Pick the first template as the temporary default until calibration
+	 * happens.
+	 */
+	active_template = template_list;
 	return 0;
+#endif
 }
 
-static __exit void xor_exit(void) { }
+static __exit void xor_exit(void)
+{
+}
 
 MODULE_DESCRIPTION("RAID-5 checksumming functions");
 MODULE_LICENSE("GPL");
 
+/*
+ * When built-in we must register the default template before md, but we don't
+ * want calibration to run that early as that would delay the boot process.
+ */
 #ifndef MODULE
-/* when built-in xor.o must initialize before drivers/md/md.o */
-core_initcall(register_xor_blocks);
+__initcall(calibrate_xor_blocks);
 #endif
-
-module_init(calibrate_xor_blocks);
+core_initcall(xor_init);
 module_exit(xor_exit);
-- 
2.47.3