From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 27 Mar 2026 12:30:51 +0100
In-Reply-To: <20260327113047.4043492-7-ardb+git@google.com>
Mime-Version: 1.0
References: <20260327113047.4043492-7-ardb+git@google.com>
Message-ID: <20260327113047.4043492-10-ardb+git@google.com>
Subject: [PATCH 3/5] xor/arm: Replace vectorized implementation with arm64's intrinsics
From: Ard Biesheuvel <ardb+git@google.com>
To: linux-raid@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
	Ard Biesheuvel, Christoph Hellwig, Russell King, Arnd Bergmann,
	Eric Biggers
Content-Type: text/plain; charset="UTF-8"

From: Ard Biesheuvel <ardb@kernel.org>

Drop the XOR implementation generated by the vectorizer: it has always
been a bit of a hack, and now that arm64 has an intrinsics version that
works on ARM too, use that instead. Copy the part of the arm64 code
that can be shared, i.e., everything except the EOR3 version. The arm64
code will be updated in a subsequent patch to share this
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
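Note for testing (illustration only, not part of the patch): the shared
inner loop can be exercised in userspace. The snippet below open-codes
the two-source kernel with the same 64-bytes-per-iteration unrolling
and compares it against a scalar reference. File name and compiler
invocation are examples; the flags are the ones the old #error message
asked for, and the only requirement inherited from the real code is
that the buffer size be a multiple of 64 bytes.

/* Standalone userspace sanity check for the NEON XOR inner loop.
 * Build with e.g.:
 *   arm-linux-gnueabi-gcc -O2 -march=armv7-a -mfloat-abi=softfp \
 *       -mfpu=neon xor-test.c
 */
#include <arm_neon.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Same loop as __xor_neon_2(): 64 bytes (4 x uint64x2_t) per pass */
static void neon_xor_2(unsigned long bytes, uint64_t *dp1,
                       const uint64_t *dp2)
{
        long lines = bytes / (sizeof(uint64x2_t) * 4);

        do {
                vst1q_u64(dp1 + 0, veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0)));
                vst1q_u64(dp1 + 2, veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2)));
                vst1q_u64(dp1 + 4, veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4)));
                vst1q_u64(dp1 + 6, veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6)));
                dp1 += 8;
                dp2 += 8;
        } while (--lines > 0);
}

int main(void)
{
        uint64_t a[64], b[64], ref[64];
        int i;

        for (i = 0; i < 64; i++) {
                a[i] = 0x0123456789abcdefULL * (i + 1);
                b[i] = ~a[i] ^ (uint64_t)i;
                ref[i] = a[i] ^ b[i];           /* scalar reference */
        }
        neon_xor_2(sizeof(a), a, b);            /* 512 bytes = 8 passes */
        assert(memcmp(a, ref, sizeof(a)) == 0);
        return 0;
}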
 lib/raid/xor/arm/xor-neon.c | 183 ++++++++++++++++++--
 lib/raid/xor/arm/xor-neon.h |   7 +
 lib/raid/xor/arm/xor_arch.h |   7 +-
 lib/raid/xor/xor-8regs.c    |   2 -
 4 files changed, 174 insertions(+), 25 deletions(-)

diff --git a/lib/raid/xor/arm/xor-neon.c b/lib/raid/xor/arm/xor-neon.c
index 23147e3a7904..a3e2b4af8d36 100644
--- a/lib/raid/xor/arm/xor-neon.c
+++ b/lib/raid/xor/arm/xor-neon.c
@@ -1,26 +1,175 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2013 Linaro Ltd
+ * Authors: Jackie Liu <liuyun01@kylinos.cn>
+ * Copyright (C) 2018,Tianjin KYLIN Information Technology Co., Ltd.
  */
 
 #include "xor_impl.h"
-#include "xor_arch.h"
+#include "xor-neon.h"
 
-#ifndef __ARM_NEON__
-#error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
-#endif
+#include <arm_neon.h>
 
-/*
- * Pull in the reference implementations while instructing GCC (through
- * -ftree-vectorize) to attempt to exploit implicit parallelism and emit
- * NEON instructions. Clang does this by default at O2 so no pragma is
- * needed.
- */
-#ifdef CONFIG_CC_IS_GCC
-#pragma GCC optimize "tree-vectorize"
-#endif
+static void __xor_neon_2(unsigned long bytes, unsigned long * __restrict p1,
+                         const unsigned long * __restrict p2)
+{
+        uint64_t *dp1 = (uint64_t *)p1;
+        uint64_t *dp2 = (uint64_t *)p2;
+
+        register uint64x2_t v0, v1, v2, v3;
+        long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+        do {
+                /* p1 ^= p2 */
+                v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0));
+                v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2));
+                v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4));
+                v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6));
+
+                /* store */
+                vst1q_u64(dp1 + 0, v0);
+                vst1q_u64(dp1 + 2, v1);
+                vst1q_u64(dp1 + 4, v2);
+                vst1q_u64(dp1 + 6, v3);
+
+                dp1 += 8;
+                dp2 += 8;
+        } while (--lines > 0);
+}
+
+static void __xor_neon_3(unsigned long bytes, unsigned long * __restrict p1,
+                         const unsigned long * __restrict p2,
+                         const unsigned long * __restrict p3)
+{
+        uint64_t *dp1 = (uint64_t *)p1;
+        uint64_t *dp2 = (uint64_t *)p2;
+        uint64_t *dp3 = (uint64_t *)p3;
+
+        register uint64x2_t v0, v1, v2, v3;
+        long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+        do {
+                /* p1 ^= p2 */
+                v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0));
+                v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2));
+                v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4));
+                v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6));
+
+                /* p1 ^= p3 */
+                v0 = veorq_u64(v0, vld1q_u64(dp3 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp3 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp3 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp3 + 6));
+
+                /* store */
+                vst1q_u64(dp1 + 0, v0);
+                vst1q_u64(dp1 + 2, v1);
+                vst1q_u64(dp1 + 4, v2);
+                vst1q_u64(dp1 + 6, v3);
+
+                dp1 += 8;
+                dp2 += 8;
+                dp3 += 8;
+        } while (--lines > 0);
+}
+
+static void __xor_neon_4(unsigned long bytes, unsigned long * __restrict p1,
+                         const unsigned long * __restrict p2,
+                         const unsigned long * __restrict p3,
+                         const unsigned long * __restrict p4)
+{
+        uint64_t *dp1 = (uint64_t *)p1;
+        uint64_t *dp2 = (uint64_t *)p2;
+        uint64_t *dp3 = (uint64_t *)p3;
+        uint64_t *dp4 = (uint64_t *)p4;
+
+        register uint64x2_t v0, v1, v2, v3;
+        long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+        do {
+                /* p1 ^= p2 */
+                v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0));
+                v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2));
+                v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4));
+                v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6));
+
+                /* p1 ^= p3 */
+                v0 = veorq_u64(v0, vld1q_u64(dp3 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp3 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp3 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp3 + 6));
+
+                /* p1 ^= p4 */
+                v0 = veorq_u64(v0, vld1q_u64(dp4 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp4 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp4 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp4 + 6));
+
+                /* store */
+                vst1q_u64(dp1 + 0, v0);
+                vst1q_u64(dp1 + 2, v1);
+                vst1q_u64(dp1 + 4, v2);
+                vst1q_u64(dp1 + 6, v3);
+
+                dp1 += 8;
+                dp2 += 8;
+                dp3 += 8;
+                dp4 += 8;
+        } while (--lines > 0);
+}
+
+static void __xor_neon_5(unsigned long bytes, unsigned long * __restrict p1,
+                         const unsigned long * __restrict p2,
+                         const unsigned long * __restrict p3,
+                         const unsigned long * __restrict p4,
+                         const unsigned long * __restrict p5)
+{
+        uint64_t *dp1 = (uint64_t *)p1;
+        uint64_t *dp2 = (uint64_t *)p2;
+        uint64_t *dp3 = (uint64_t *)p3;
+        uint64_t *dp4 = (uint64_t *)p4;
+        uint64_t *dp5 = (uint64_t *)p5;
+
+        register uint64x2_t v0, v1, v2, v3;
+        long lines = bytes / (sizeof(uint64x2_t) * 4);
+
+        do {
+                /* p1 ^= p2 */
+                v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0));
+                v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2));
+                v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4));
+                v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6));
+
+                /* p1 ^= p3 */
+                v0 = veorq_u64(v0, vld1q_u64(dp3 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp3 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp3 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp3 + 6));
+
+                /* p1 ^= p4 */
+                v0 = veorq_u64(v0, vld1q_u64(dp4 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp4 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp4 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp4 + 6));
+
+                /* p1 ^= p5 */
+                v0 = veorq_u64(v0, vld1q_u64(dp5 + 0));
+                v1 = veorq_u64(v1, vld1q_u64(dp5 + 2));
+                v2 = veorq_u64(v2, vld1q_u64(dp5 + 4));
+                v3 = veorq_u64(v3, vld1q_u64(dp5 + 6));
+
+                /* store */
+                vst1q_u64(dp1 + 0, v0);
+                vst1q_u64(dp1 + 2, v1);
+                vst1q_u64(dp1 + 4, v2);
+                vst1q_u64(dp1 + 6, v3);
 
-#define NO_TEMPLATE
-#include "../xor-8regs.c"
+                dp1 += 8;
+                dp2 += 8;
+                dp3 += 8;
+                dp4 += 8;
+                dp5 += 8;
+        } while (--lines > 0);
+}
 
-__DO_XOR_BLOCKS(neon_inner, xor_8regs_2, xor_8regs_3, xor_8regs_4, xor_8regs_5);
+__DO_XOR_BLOCKS(neon_inner, __xor_neon_2, __xor_neon_3, __xor_neon_4,
+                __xor_neon_5);
diff --git a/lib/raid/xor/arm/xor-neon.h b/lib/raid/xor/arm/xor-neon.h
new file mode 100644
index 000000000000..406e0356f05b
--- /dev/null
+++ b/lib/raid/xor/arm/xor-neon.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+extern struct xor_block_template xor_block_arm4regs;
+extern struct xor_block_template xor_block_neon;
+
+void xor_gen_neon_inner(void *dest, void **srcs, unsigned int src_cnt,
+                        unsigned int bytes);
diff --git a/lib/raid/xor/arm/xor_arch.h b/lib/raid/xor/arm/xor_arch.h
index 775ff835df65..f1ddb64fe62a 100644
--- a/lib/raid/xor/arm/xor_arch.h
+++ b/lib/raid/xor/arm/xor_arch.h
@@ -3,12 +3,7 @@
  * Copyright (C) 2001 Russell King
  */
 #include
-
-extern struct xor_block_template xor_block_arm4regs;
-extern struct xor_block_template xor_block_neon;
-
-void xor_gen_neon_inner(void *dest, void **srcs, unsigned int src_cnt,
-                        unsigned int bytes);
+#include "xor-neon.h"
 
 static __always_inline void __init arch_xor_init(void)
 {
diff --git a/lib/raid/xor/xor-8regs.c b/lib/raid/xor/xor-8regs.c
index 1edaed8acffe..46b3c8bdc27f 100644
--- a/lib/raid/xor/xor-8regs.c
+++ b/lib/raid/xor/xor-8regs.c
@@ -93,11 +93,9 @@ xor_8regs_5(unsigned long bytes, unsigned long * __restrict p1,
 	} while (--lines > 0);
 }
 
-#ifndef NO_TEMPLATE
 DO_XOR_BLOCKS(8regs, xor_8regs_2, xor_8regs_3, xor_8regs_4, xor_8regs_5);
 
 struct xor_block_template xor_block_8regs = {
 	.name = "8regs",
 	.xor_gen = xor_gen_8regs,
 };
-#endif /* NO_TEMPLATE */
-- 
2.53.0.1018.g2bb0e51243-goog
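P.S. For anyone reviewing without the rest of the series at hand:
__DO_XOR_BLOCKS() is defined in xor_impl.h, which is not touched by
this patch. The sketch below is hand-written guesswork at roughly what
it expands to for the template above, not the actual macro output; in
particular, the assumption that srcs[] holds the source buffers to be
folded into dest and that src_cnt excludes the destination is mine.

/* Illustration only: plausible expansion of
 * __DO_XOR_BLOCKS(neon_inner, __xor_neon_2, ..., __xor_neon_5),
 * dispatching on the number of source buffers.
 */
void xor_gen_neon_inner(void *dest, void **srcs, unsigned int src_cnt,
                        unsigned int bytes)
{
        unsigned long *p1 = dest;

        switch (src_cnt) {
        case 1:
                __xor_neon_2(bytes, p1, srcs[0]);
                break;
        case 2:
                __xor_neon_3(bytes, p1, srcs[0], srcs[1]);
                break;
        case 3:
                __xor_neon_4(bytes, p1, srcs[0], srcs[1], srcs[2]);
                break;
        case 4:
                __xor_neon_5(bytes, p1, srcs[0], srcs[1], srcs[2], srcs[3]);
                break;
        }
}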