* [PATCH v1 0/3] Import Arm Optimized Routines str{n}cmp functions
From: Joey Gouly @ 2022-02-15 17:07 UTC
  To: linux-arm-kernel
  Cc: nd, catalin.marinas, joey.gouly, mark.rutland, robin.murphy, will

Hi all,

The previous str{n}cmp routines were not MTE safe, so were disabled in:
  59a68d413808 ("arm64: Mitigate MTE issues with str{n}cmp()")

The Arm Optimized Routines repository recently merged [1] their strcmp.S and
strcmp-mte.S files into a single file that is MTE safe.

Therefore we can import these new MTE safe functions and remove the workaround.

I did some light boot tests using QEMU.

Thanks,
Joey

[1] https://github.com/ARM-software/optimized-routines/commit/7b91c3cdb12b023004cb4dda30a1aa3424329ce6
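
Background on "MTE safe" (an illustrative note, not part of the
patches themselves): MTE tags memory in 16-byte granules, so a string
routine must never load past the end of an allocation into a granule
with a different tag. The new routines achieve this by only issuing
aligned loads, which cannot straddle a granule. A minimal C sketch of
that alignment property (same_granule is our illustrative name, not
kernel code):

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  /* An aligned 8-byte load always stays within one 16-byte granule. */
  static int same_granule(uintptr_t addr, size_t len)
  {
          return (addr & ~(uintptr_t)15) == ((addr + len - 1) & ~(uintptr_t)15);
  }

  int main(void)
  {
          uintptr_t a;

          for (a = 0; a < 128; a += 8)    /* every 8-byte-aligned address */
                  assert(same_granule(a, 8));
          return 0;
  }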

Joey Gouly (3):
  arm64: lib:  Import latest version of Arm Optimized Routines' strcmp
  arm64: lib:  Import latest version of Arm Optimized Routines' strncmp
  Revert "arm64: Mitigate MTE issues with str{n}cmp()"

 arch/arm64/include/asm/assembler.h |   5 -
 arch/arm64/include/asm/string.h    |   2 -
 arch/arm64/lib/strcmp.S            | 240 +++++++++++++++--------------
 arch/arm64/lib/strncmp.S           | 236 +++++++++++++++++-----------
 4 files changed, 269 insertions(+), 214 deletions(-)

-- 
2.17.1



* [PATCH v1 1/3] arm64: lib: Import latest version of Arm Optimized Routines' strcmp
From: Joey Gouly @ 2022-02-15 17:07 UTC
  To: linux-arm-kernel
  Cc: nd, catalin.marinas, joey.gouly, mark.rutland, robin.murphy, will

Import the latest version of the Arm Optimized Routines strcmp function based
on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.
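
The key to doing that with word-sized loads is the zero-byte test the
file's comments describe: (X - 0x01..01) & ~X & 0x80..80 is non-zero
iff some byte of X is zero. On big-endian the data is byte-reversed
first, since borrow propagation could otherwise mark 0x01 bytes that
precede the NUL. A minimal C model of the test (illustrative only;
has_zero_byte is our name, not from the imported code):

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  /* Non-zero iff some byte of x is zero (word-at-a-time NUL check). */
  static uint64_t has_zero_byte(uint64_t x)
  {
          return (x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL;
  }

  int main(void)
  {
          uint64_t w;

          memcpy(&w, "abcdefgh", 8);      /* no NUL among these 8 bytes */
          assert(has_zero_byte(w) == 0);

          memcpy(&w, "abc\0efgh", 8);     /* NUL at byte 3 */
          assert(has_zero_byte(w) != 0);
          return 0;
  }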

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/lib/strcmp.S | 238 +++++++++++++++++++++-------------------
 1 file changed, 126 insertions(+), 112 deletions(-)

diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
index 83bcad72ec97..758de77afd2f 100644
--- a/arch/arm64/lib/strcmp.S
+++ b/arch/arm64/lib/strcmp.S
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2012-2021, Arm Limited.
+ * Copyright (c) 2012-2022, Arm Limited.
  *
  * Adapted from the original at:
- * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/strcmp.S
+ * https://github.com/ARM-software/optimized-routines/blob/189dfefe37d54c5b/string/aarch64/strcmp.S
  */
 
 #include <linux/linkage.h>
@@ -11,161 +11,175 @@
 
 /* Assumptions:
  *
- * ARMv8-a, AArch64
+ * ARMv8-a, AArch64.
+ * MTE compatible.
  */
 
 #define L(label) .L ## label
 
 #define REP8_01 0x0101010101010101
 #define REP8_7f 0x7f7f7f7f7f7f7f7f
-#define REP8_80 0x8080808080808080
 
-/* Parameters and result.  */
 #define src1		x0
 #define src2		x1
 #define result		x0
 
-/* Internal variables.  */
 #define data1		x2
 #define data1w		w2
 #define data2		x3
 #define data2w		w3
 #define has_nul		x4
 #define diff		x5
+#define off1		x5
 #define syndrome	x6
-#define tmp1		x7
-#define tmp2		x8
-#define tmp3		x9
-#define zeroones	x10
-#define pos		x11
-
-	/* Start of performance-critical section  -- one 64B cache line.  */
-	.align 6
+#define tmp		x6
+#define data3		x7
+#define zeroones	x8
+#define shift		x9
+#define off2		x10
+
+/* On big-endian early bytes are at MSB and on little-endian LSB.
+   LS_FW means shifting towards early bytes.  */
+#ifdef __AARCH64EB__
+# define LS_FW lsl
+#else
+# define LS_FW lsr
+#endif
+
+/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
+   (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and
+   can be done in parallel across the entire word.
+   Since carry propagation makes 0x1 bytes before a NUL byte appear
+   NUL too in big-endian, byte-reverse the data before the NUL check.  */
+
+
 SYM_FUNC_START_WEAK_PI(strcmp)
-	eor	tmp1, src1, src2
-	mov	zeroones, #REP8_01
-	tst	tmp1, #7
+	sub	off2, src2, src1
+	mov	zeroones, REP8_01
+	and	tmp, src1, 7
+	tst	off2, 7
 	b.ne	L(misaligned8)
-	ands	tmp1, src1, #7
-	b.ne	L(mutual_align)
-	/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
-	   (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and
-	   can be done in parallel across the entire word.  */
+	cbnz	tmp, L(mutual_align)
+
+	.p2align 4
+
 L(loop_aligned):
-	ldr	data1, [src1], #8
-	ldr	data2, [src2], #8
+	ldr	data2, [src1, off2]
+	ldr	data1, [src1], 8
 L(start_realigned):
-	sub	tmp1, data1, zeroones
-	orr	tmp2, data1, #REP8_7f
-	eor	diff, data1, data2	/* Non-zero if differences found.  */
-	bic	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
+#ifdef __AARCH64EB__
+	rev	tmp, data1
+	sub	has_nul, tmp, zeroones
+	orr	tmp, tmp, REP8_7f
+#else
+	sub	has_nul, data1, zeroones
+	orr	tmp, data1, REP8_7f
+#endif
+	bics	has_nul, has_nul, tmp	/* Non-zero if NUL terminator.  */
+	ccmp	data1, data2, 0, eq
+	b.eq	L(loop_aligned)
+#ifdef __AARCH64EB__
+	rev	has_nul, has_nul
+#endif
+	eor	diff, data1, data2
 	orr	syndrome, diff, has_nul
-	cbz	syndrome, L(loop_aligned)
-	/* End of performance-critical section  -- one 64B cache line.  */
-
 L(end):
-#ifndef	__AARCH64EB__
+#ifndef __AARCH64EB__
 	rev	syndrome, syndrome
 	rev	data1, data1
-	/* The MS-non-zero bit of the syndrome marks either the first bit
-	   that is different, or the top bit of the first zero byte.
-	   Shifting left now will bring the critical information into the
-	   top bits.  */
-	clz	pos, syndrome
 	rev	data2, data2
-	lsl	data1, data1, pos
-	lsl	data2, data2, pos
-	/* But we need to zero-extend (char is unsigned) the value and then
-	   perform a signed 32-bit subtraction.  */
-	lsr	data1, data1, #56
-	sub	result, data1, data2, lsr #56
-	ret
-#else
-	/* For big-endian we cannot use the trick with the syndrome value
-	   as carry-propagation can corrupt the upper bits if the trailing
-	   bytes in the string contain 0x01.  */
-	/* However, if there is no NUL byte in the dword, we can generate
-	   the result directly.  We can't just subtract the bytes as the
-	   MSB might be significant.  */
-	cbnz	has_nul, 1f
-	cmp	data1, data2
-	cset	result, ne
-	cneg	result, result, lo
-	ret
-1:
-	/* Re-compute the NUL-byte detection, using a byte-reversed value.  */
-	rev	tmp3, data1
-	sub	tmp1, tmp3, zeroones
-	orr	tmp2, tmp3, #REP8_7f
-	bic	has_nul, tmp1, tmp2
-	rev	has_nul, has_nul
-	orr	syndrome, diff, has_nul
-	clz	pos, syndrome
-	/* The MS-non-zero bit of the syndrome marks either the first bit
-	   that is different, or the top bit of the first zero byte.
+#endif
+	clz	shift, syndrome
+	/* The most-significant-non-zero bit of the syndrome marks either the
+	   first bit that is different, or the top bit of the first zero byte.
 	   Shifting left now will bring the critical information into the
 	   top bits.  */
-	lsl	data1, data1, pos
-	lsl	data2, data2, pos
+	lsl	data1, data1, shift
+	lsl	data2, data2, shift
 	/* But we need to zero-extend (char is unsigned) the value and then
 	   perform a signed 32-bit subtraction.  */
-	lsr	data1, data1, #56
-	sub	result, data1, data2, lsr #56
+	lsr	data1, data1, 56
+	sub	result, data1, data2, lsr 56
 	ret
-#endif
+
+	.p2align 4
 
 L(mutual_align):
 	/* Sources are mutually aligned, but are not currently at an
 	   alignment boundary.  Round down the addresses and then mask off
-	   the bytes that preceed the start point.  */
-	bic	src1, src1, #7
-	bic	src2, src2, #7
-	lsl	tmp1, tmp1, #3		/* Bytes beyond alignment -> bits.  */
-	ldr	data1, [src1], #8
-	neg	tmp1, tmp1		/* Bits to alignment -64.  */
-	ldr	data2, [src2], #8
-	mov	tmp2, #~0
-#ifdef __AARCH64EB__
-	/* Big-endian.  Early bytes are at MSB.  */
-	lsl	tmp2, tmp2, tmp1	/* Shift (tmp1 & 63).  */
-#else
-	/* Little-endian.  Early bytes are at LSB.  */
-	lsr	tmp2, tmp2, tmp1	/* Shift (tmp1 & 63).  */
-#endif
-	orr	data1, data1, tmp2
-	orr	data2, data2, tmp2
+	   the bytes that precede the start point.  */
+	bic	src1, src1, 7
+	ldr	data2, [src1, off2]
+	ldr	data1, [src1], 8
+	neg	shift, src2, lsl 3	/* Bits to alignment -64.  */
+	mov	tmp, -1
+	LS_FW	tmp, tmp, shift
+	orr	data1, data1, tmp
+	orr	data2, data2, tmp
 	b	L(start_realigned)
 
 L(misaligned8):
 	/* Align SRC1 to 8 bytes and then compare 8 bytes at a time, always
-	   checking to make sure that we don't access beyond page boundary in
-	   SRC2.  */
-	tst	src1, #7
-	b.eq	L(loop_misaligned)
+	   checking to make sure that we don't access beyond the end of SRC2.  */
+	cbz	tmp, L(src1_aligned)
 L(do_misaligned):
-	ldrb	data1w, [src1], #1
-	ldrb	data2w, [src2], #1
-	cmp	data1w, #1
-	ccmp	data1w, data2w, #0, cs	/* NZCV = 0b0000.  */
+	ldrb	data1w, [src1], 1
+	ldrb	data2w, [src2], 1
+	cmp	data1w, 0
+	ccmp	data1w, data2w, 0, ne	/* NZCV = 0b0000.  */
 	b.ne	L(done)
-	tst	src1, #7
+	tst	src1, 7
 	b.ne	L(do_misaligned)
 
-L(loop_misaligned):
-	/* Test if we are within the last dword of the end of a 4K page.  If
-	   yes then jump back to the misaligned loop to copy a byte at a time.  */
-	and	tmp1, src2, #0xff8
-	eor	tmp1, tmp1, #0xff8
-	cbz	tmp1, L(do_misaligned)
-	ldr	data1, [src1], #8
-	ldr	data2, [src2], #8
-
-	sub	tmp1, data1, zeroones
-	orr	tmp2, data1, #REP8_7f
-	eor	diff, data1, data2	/* Non-zero if differences found.  */
-	bic	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
+L(src1_aligned):
+	neg	shift, src2, lsl 3
+	bic	src2, src2, 7
+	ldr	data3, [src2], 8
+#ifdef __AARCH64EB__
+	rev	data3, data3
+#endif
+	lsr	tmp, zeroones, shift
+	orr	data3, data3, tmp
+	sub	has_nul, data3, zeroones
+	orr	tmp, data3, REP8_7f
+	bics	has_nul, has_nul, tmp
+	b.ne	L(tail)
+
+	sub	off1, src2, src1
+
+	.p2align 4
+
+L(loop_unaligned):
+	ldr	data3, [src1, off1]
+	ldr	data2, [src1, off2]
+#ifdef __AARCH64EB__
+	rev	data3, data3
+#endif
+	sub	has_nul, data3, zeroones
+	orr	tmp, data3, REP8_7f
+	ldr	data1, [src1], 8
+	bics	has_nul, has_nul, tmp
+	ccmp	data1, data2, 0, eq
+	b.eq	L(loop_unaligned)
+
+	lsl	tmp, has_nul, shift
+#ifdef __AARCH64EB__
+	rev	tmp, tmp
+#endif
+	eor	diff, data1, data2
+	orr	syndrome, diff, tmp
+	cbnz	syndrome, L(end)
+L(tail):
+	ldr	data1, [src1]
+	neg	shift, shift
+	lsr	data2, data3, shift
+	lsr	has_nul, has_nul, shift
+#ifdef __AARCH64EB__
+	rev     data2, data2
+	rev	has_nul, has_nul
+#endif
+	eor	diff, data1, data2
 	orr	syndrome, diff, has_nul
-	cbz	syndrome, L(loop_misaligned)
 	b	L(end)
 
 L(done):
-- 
2.17.1



* [PATCH v1 2/3] arm64: lib: Import latest version of Arm Optimized Routines' strncmp
From: Joey Gouly @ 2022-02-15 17:07 UTC
  To: linux-arm-kernel
  Cc: nd, catalin.marinas, joey.gouly, mark.rutland, robin.murphy, will

Import the latest version of the Arm Optimized Routines strncmp function based
on the upstream code of string/aarch64/strncmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.
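
For reference, the behaviour the optimized code must preserve is plain
byte-wise strncmp; a C model of those semantics (reference only, not
the imported code; strncmp_ref is our name):

  #include <stddef.h>

  /* Compare at most n bytes as unsigned char, stopping at a NUL. */
  static int strncmp_ref(const char *s1, const char *s2, size_t n)
  {
          for (; n != 0; n--, s1++, s2++) {
                  unsigned char c1 = *s1, c2 = *s2;

                  if (c1 != c2)
                          return c1 - c2;
                  if (c1 == '\0')
                          break;
          }
          return 0;
  }

The cases this model makes trivial are exactly where the assembly gets
subtle: the limit expiring mid-word, and a NUL or a difference landing
before the limit within the same 8-byte block.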

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/lib/strncmp.S | 234 +++++++++++++++++++++++----------------
 1 file changed, 141 insertions(+), 93 deletions(-)

diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
index e42bcfcd37e6..a4884b97e9a8 100644
--- a/arch/arm64/lib/strncmp.S
+++ b/arch/arm64/lib/strncmp.S
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2013-2021, Arm Limited.
+ * Copyright (c) 2013-2022, Arm Limited.
  *
  * Adapted from the original at:
- * https://github.com/ARM-software/optimized-routines/blob/e823e3abf5f89ecb/string/aarch64/strncmp.S
+ * https://github.com/ARM-software/optimized-routines/blob/189dfefe37d54c5b/string/aarch64/strncmp.S
  */
 
 #include <linux/linkage.h>
@@ -11,14 +11,14 @@
 
 /* Assumptions:
  *
- * ARMv8-a, AArch64
+ * ARMv8-a, AArch64.
+ * MTE compatible.
  */
 
 #define L(label) .L ## label
 
 #define REP8_01 0x0101010101010101
 #define REP8_7f 0x7f7f7f7f7f7f7f7f
-#define REP8_80 0x8080808080808080
 
 /* Parameters and result.  */
 #define src1		x0
@@ -39,10 +39,24 @@
 #define tmp3		x10
 #define zeroones	x11
 #define pos		x12
-#define limit_wd	x13
-#define mask		x14
-#define endloop		x15
+#define mask		x13
+#define endloop		x14
 #define count		mask
+#define offset		pos
+#define neg_offset	x15
+
+/* Define endian dependent shift operations.
+   On big-endian early bytes are at MSB and on little-endian LSB.
+   LS_FW means shifting towards early bytes.
+   LS_BK means shifting towards later bytes.
+   */
+#ifdef __AARCH64EB__
+#define LS_FW lsl
+#define LS_BK lsr
+#else
+#define LS_FW lsr
+#define LS_BK lsl
+#endif
 
 SYM_FUNC_START_WEAK_PI(strncmp)
 	cbz	limit, L(ret0)
@@ -52,9 +66,6 @@ SYM_FUNC_START_WEAK_PI(strncmp)
 	and	count, src1, #7
 	b.ne	L(misaligned8)
 	cbnz	count, L(mutual_align)
-	/* Calculate the number of full and partial words -1.  */
-	sub	limit_wd, limit, #1	/* limit != 0, so no underflow.  */
-	lsr	limit_wd, limit_wd, #3	/* Convert to Dwords.  */
 
 	/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
 	   (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and
@@ -64,56 +75,52 @@ L(loop_aligned):
 	ldr	data1, [src1], #8
 	ldr	data2, [src2], #8
 L(start_realigned):
-	subs	limit_wd, limit_wd, #1
+	subs	limit, limit, #8
 	sub	tmp1, data1, zeroones
 	orr	tmp2, data1, #REP8_7f
 	eor	diff, data1, data2	/* Non-zero if differences found.  */
-	csinv	endloop, diff, xzr, pl	/* Last Dword or differences.  */
+	csinv	endloop, diff, xzr, hi	/* Last Dword or differences.  */
 	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
 	ccmp	endloop, #0, #0, eq
 	b.eq	L(loop_aligned)
 	/* End of main loop */
 
-	/* Not reached the limit, must have found the end or a diff.  */
-	tbz	limit_wd, #63, L(not_limit)
-
-	/* Limit % 8 == 0 => all bytes significant.  */
-	ands	limit, limit, #7
-	b.eq	L(not_limit)
-
-	lsl	limit, limit, #3	/* Bits -> bytes.  */
-	mov	mask, #~0
-#ifdef __AARCH64EB__
-	lsr	mask, mask, limit
-#else
-	lsl	mask, mask, limit
-#endif
-	bic	data1, data1, mask
-	bic	data2, data2, mask
-
-	/* Make sure that the NUL byte is marked in the syndrome.  */
-	orr	has_nul, has_nul, mask
-
-L(not_limit):
+L(full_check):
+#ifndef __AARCH64EB__
 	orr	syndrome, diff, has_nul
-
-#ifndef	__AARCH64EB__
+	add	limit, limit, 8	/* Rewind limit to before last subs. */
+L(syndrome_check):
+	/* Limit was reached. Check if the NUL byte or the difference
+	   is before the limit. */
 	rev	syndrome, syndrome
 	rev	data1, data1
-	/* The MS-non-zero bit of the syndrome marks either the first bit
-	   that is different, or the top bit of the first zero byte.
-	   Shifting left now will bring the critical information into the
-	   top bits.  */
 	clz	pos, syndrome
 	rev	data2, data2
 	lsl	data1, data1, pos
+	cmp	limit, pos, lsr #3
 	lsl	data2, data2, pos
 	/* But we need to zero-extend (char is unsigned) the value and then
 	   perform a signed 32-bit subtraction.  */
 	lsr	data1, data1, #56
 	sub	result, data1, data2, lsr #56
+	csel result, result, xzr, hi
 	ret
 #else
+	/* Not reached the limit, must have found the end or a diff.  */
+	tbz	limit, #63, L(not_limit)
+	add	tmp1, limit, 8
+	cbz	limit, L(not_limit)
+
+	lsl	limit, tmp1, #3	/* Bits -> bytes.  */
+	mov	mask, #~0
+	lsr	mask, mask, limit
+	bic	data1, data1, mask
+	bic	data2, data2, mask
+
+	/* Make sure that the NUL byte is marked in the syndrome.  */
+	orr	has_nul, has_nul, mask
+
+L(not_limit):
 	/* For big-endian we cannot use the trick with the syndrome value
 	   as carry-propagation can corrupt the upper bits if the trailing
 	   bytes in the string contain 0x01.  */
@@ -134,10 +141,11 @@ L(not_limit):
 	rev	has_nul, has_nul
 	orr	syndrome, diff, has_nul
 	clz	pos, syndrome
-	/* The MS-non-zero bit of the syndrome marks either the first bit
-	   that is different, or the top bit of the first zero byte.
+	/* The most-significant-non-zero bit of the syndrome marks either the
+	   first bit that is different, or the top bit of the first zero byte.
 	   Shifting left now will bring the critical information into the
 	   top bits.  */
+L(end_quick):
 	lsl	data1, data1, pos
 	lsl	data2, data2, pos
 	/* But we need to zero-extend (char is unsigned) the value and then
@@ -159,22 +167,12 @@ L(mutual_align):
 	neg	tmp3, count, lsl #3	/* 64 - bits(bytes beyond align). */
 	ldr	data2, [src2], #8
 	mov	tmp2, #~0
-	sub	limit_wd, limit, #1	/* limit != 0, so no underflow.  */
-#ifdef __AARCH64EB__
-	/* Big-endian.  Early bytes are at MSB.  */
-	lsl	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
-#else
-	/* Little-endian.  Early bytes are at LSB.  */
-	lsr	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
-#endif
-	and	tmp3, limit_wd, #7
-	lsr	limit_wd, limit_wd, #3
-	/* Adjust the limit. Only low 3 bits used, so overflow irrelevant.  */
-	add	limit, limit, count
-	add	tmp3, tmp3, count
+	LS_FW	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
+	/* Adjust the limit and ensure it doesn't overflow.  */
+	adds	limit, limit, count
+	csinv	limit, limit, xzr, lo
 	orr	data1, data1, tmp2
 	orr	data2, data2, tmp2
-	add	limit_wd, limit_wd, tmp3, lsr #3
 	b	L(start_realigned)
 
 	.p2align 4
@@ -197,13 +195,11 @@ L(done):
 	/* Align the SRC1 to a dword by doing a bytewise compare and then do
 	   the dword loop.  */
 L(try_misaligned_words):
-	lsr	limit_wd, limit, #3
-	cbz	count, L(do_misaligned)
+	cbz	count, L(src1_aligned)
 
 	neg	count, count
 	and	count, count, #7
 	sub	limit, limit, count
-	lsr	limit_wd, limit, #3
 
 L(page_end_loop):
 	ldrb	data1w, [src1], #1
@@ -214,48 +210,100 @@ L(page_end_loop):
 	subs	count, count, #1
 	b.hi	L(page_end_loop)
 
-L(do_misaligned):
-	/* Prepare ourselves for the next page crossing.  Unlike the aligned
-	   loop, we fetch 1 less dword because we risk crossing bounds on
-	   SRC2.  */
-	mov	count, #8
-	subs	limit_wd, limit_wd, #1
-	b.lo	L(done_loop)
-L(loop_misaligned):
-	and	tmp2, src2, #0xff8
-	eor	tmp2, tmp2, #0xff8
-	cbz	tmp2, L(page_end_loop)
+	/* The following diagram explains the comparison of misaligned strings.
+	   The bytes are shown in natural order. For little-endian, it is
+	   reversed in the registers. The "x" bytes are before the string.
+	   The "|" separates data that is loaded at one time.
+	   src1     | a a a a a a a a | b b b c c c c c | . . .
+	   src2     | x x x x x a a a   a a a a a b b b | c c c c c . . .
+
+	   After shifting in each step, the data looks like this:
+	                STEP_A              STEP_B              STEP_C
+	   data1    a a a a a a a a     b b b c c c c c     b b b c c c c c
+	   data2    a a a a a a a a     b b b 0 0 0 0 0     0 0 0 c c c c c
 
+	   The bytes with "0" are eliminated from the syndrome via mask.
+
+	   Align SRC2 down to 16 bytes. This way we can read 16 bytes at a
+	   time from SRC2. The comparison happens in 3 steps. After each step
+	   the loop can exit, or read from SRC1 or SRC2. */
+L(src1_aligned):
+	/* Calculate offset from 8 byte alignment to string start in bits. No
+	   need to mask offset since shifts are ignoring upper bits. */
+	lsl	offset, src2, #3
+	bic	src2, src2, #0xf
+	mov	mask, -1
+	neg	neg_offset, offset
 	ldr	data1, [src1], #8
-	ldr	data2, [src2], #8
-	sub	tmp1, data1, zeroones
-	orr	tmp2, data1, #REP8_7f
-	eor	diff, data1, data2	/* Non-zero if differences found.  */
-	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
-	ccmp	diff, #0, #0, eq
-	b.ne	L(not_limit)
-	subs	limit_wd, limit_wd, #1
-	b.pl	L(loop_misaligned)
+	ldp	tmp1, tmp2, [src2], #16
+	LS_BK	mask, mask, neg_offset
+	and	neg_offset, neg_offset, #63	/* Need actual value for cmp later. */
+	/* Skip the first compare if data in tmp1 is irrelevant. */
+	tbnz	offset, 6, L(misaligned_mid_loop)
 
-L(done_loop):
-	/* We found a difference or a NULL before the limit was reached.  */
-	and	limit, limit, #7
-	cbz	limit, L(not_limit)
-	/* Read the last word.  */
-	sub	src1, src1, 8
-	sub	src2, src2, 8
-	ldr	data1, [src1, limit]
-	ldr	data2, [src2, limit]
-	sub	tmp1, data1, zeroones
-	orr	tmp2, data1, #REP8_7f
+L(loop_misaligned):
+	/* STEP_A: Compare full 8 bytes when there is enough data from SRC2.*/
+	LS_FW	data2, tmp1, offset
+	LS_BK	tmp1, tmp2, neg_offset
+	subs	limit, limit, #8
+	orr	data2, data2, tmp1	/* 8 bytes from SRC2 combined from two regs.*/
+	sub	has_nul, data1, zeroones
 	eor	diff, data1, data2	/* Non-zero if differences found.  */
-	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
-	ccmp	diff, #0, #0, eq
-	b.ne	L(not_limit)
+	orr	tmp3, data1, #REP8_7f
+	csinv	endloop, diff, xzr, hi	/* If limit, set to all ones. */
+	bic	has_nul, has_nul, tmp3	/* Non-zero if NUL byte found in SRC1. */
+	orr	tmp3, endloop, has_nul
+	cbnz	tmp3, L(full_check)
+
+	ldr	data1, [src1], #8
+L(misaligned_mid_loop):
+	/* STEP_B: Compare first part of data1 to second part of tmp2. */
+	LS_FW	data2, tmp2, offset
+#ifdef __AARCH64EB__
+	/* For big-endian we do a byte reverse to avoid carry-propagation
+	problem described above. This way we can reuse the has_nul in the
+	next step and also use syndrome value trick at the end. */
+	rev	tmp3, data1
+	#define data1_fixed tmp3
+#else
+	#define data1_fixed data1
+#endif
+	sub	has_nul, data1_fixed, zeroones
+	orr	tmp3, data1_fixed, #REP8_7f
+	eor	diff, data2, data1	/* Non-zero if differences found.  */
+	bic	has_nul, has_nul, tmp3	/* Non-zero if NUL terminator.  */
+#ifdef __AARCH64EB__
+	rev	has_nul, has_nul
+#endif
+	cmp	limit, neg_offset, lsr #3
+	orr	syndrome, diff, has_nul
+	bic	syndrome, syndrome, mask	/* Ignore later bytes. */
+	csinv	tmp3, syndrome, xzr, hi	/* If limit, set to all ones. */
+	cbnz	tmp3, L(syndrome_check)
+
+	/* STEP_C: Compare second part of data1 to first part of tmp1. */
+	ldp	tmp1, tmp2, [src2], #16
+	cmp	limit, #8
+	LS_BK	data2, tmp1, neg_offset
+	eor	diff, data2, data1	/* Non-zero if differences found.  */
+	orr	syndrome, diff, has_nul
+	and	syndrome, syndrome, mask	/* Ignore earlier bytes. */
+	csinv	tmp3, syndrome, xzr, hi	/* If limit, set to all ones. */
+	cbnz	tmp3, L(syndrome_check)
+
+	ldr	data1, [src1], #8
+	sub	limit, limit, #8
+	b	L(loop_misaligned)
+
+#ifdef	__AARCH64EB__
+L(syndrome_check):
+	clz	pos, syndrome
+	cmp	pos, limit, lsl #3
+	b.lo	L(end_quick)
+#endif
 
 L(ret0):
 	mov	result, #0
 	ret
-
 SYM_FUNC_END_PI(strncmp)
 EXPORT_SYMBOL_NOHWKASAN(strncmp)
-- 
2.17.1



* [PATCH v1 3/3] Revert "arm64: Mitigate MTE issues with str{n}cmp()"
From: Joey Gouly @ 2022-02-15 17:07 UTC
  To: linux-arm-kernel
  Cc: nd, catalin.marinas, joey.gouly, mark.rutland, robin.murphy, will

This reverts commit 59a68d4138086c015ab8241c3267eec5550fbd44.

Now that the str{n}cmp functions have been updated to handle MTE
properly, the workaround to use the generic functions is no longer
needed.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/assembler.h | 5 -----
 arch/arm64/include/asm/string.h    | 2 --
 arch/arm64/lib/strcmp.S            | 2 +-
 arch/arm64/lib/strncmp.S           | 2 +-
 4 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e8bd0af0141c..8df412178efb 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -535,11 +535,6 @@ alternative_endif
 #define EXPORT_SYMBOL_NOKASAN(name)	EXPORT_SYMBOL(name)
 #endif
 
-#ifdef CONFIG_KASAN_HW_TAGS
-#define EXPORT_SYMBOL_NOHWKASAN(name)
-#else
-#define EXPORT_SYMBOL_NOHWKASAN(name)	EXPORT_SYMBOL_NOKASAN(name)
-#endif
 	/*
 	 * Emit a 64-bit absolute little endian symbol reference in a way that
 	 * ensures that it will be resolved at build time, even when building a
diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 95f7686b728d..3a3264ff47b9 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -12,13 +12,11 @@ extern char *strrchr(const char *, int c);
 #define __HAVE_ARCH_STRCHR
 extern char *strchr(const char *, int c);
 
-#ifndef CONFIG_KASAN_HW_TAGS
 #define __HAVE_ARCH_STRCMP
 extern int strcmp(const char *, const char *);
 
 #define __HAVE_ARCH_STRNCMP
 extern int strncmp(const char *, const char *, __kernel_size_t);
-#endif
 
 #define __HAVE_ARCH_STRLEN
 extern __kernel_size_t strlen(const char *);
diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
index 758de77afd2f..e6815a3dd265 100644
--- a/arch/arm64/lib/strcmp.S
+++ b/arch/arm64/lib/strcmp.S
@@ -187,4 +187,4 @@ L(done):
 	ret
 
 SYM_FUNC_END_PI(strcmp)
-EXPORT_SYMBOL_NOHWKASAN(strcmp)
+EXPORT_SYMBOL_NOKASAN(strcmp)
diff --git a/arch/arm64/lib/strncmp.S b/arch/arm64/lib/strncmp.S
index a4884b97e9a8..bc195cb86693 100644
--- a/arch/arm64/lib/strncmp.S
+++ b/arch/arm64/lib/strncmp.S
@@ -306,4 +306,4 @@ L(ret0):
 	mov	result, #0
 	ret
 SYM_FUNC_END_PI(strncmp)
-EXPORT_SYMBOL_NOHWKASAN(strncmp)
+EXPORT_SYMBOL_NOKASAN(strncmp)
-- 
2.17.1



* Re: [PATCH v1 0/3] Import Arm Optimized Routines str{n}cmp functions
From: Mark Rutland @ 2022-02-16 16:30 UTC
  To: Joey Gouly; +Cc: linux-arm-kernel, nd, catalin.marinas, robin.murphy, will

On Tue, Feb 15, 2022 at 05:07:20PM +0000, Joey Gouly wrote:
> Hi all,

Hi,

> The previous str{n}cmp routines were not MTE safe, so were disabled in:
>   59a68d413808 ("arm64: Mitigate MTE issues with str{n}cmp()")
> 
> The Arm Optimized Routines repository recently merged [1] their strcmp.S and
> strcmp-mte.S files into a single file that is MTE safe.
> 
> Therefore we can import these new MTE safe functions and remove the workaround.
> 
> I did some light boot tests using QEMU.

Nice!

As a minor thing, on the two import patches, I think we should be more
explicit about what's going on with licensing, so that it's clear the
license change relative to upstream is intended and legitimate.

For example, in commit:

  758602c04409d8c5 ("arm64: Import latest version of Cortex Strings' strcmp")

We had a note in the commit message:

| Note that for simplicity Arm have chosen to contribute this code
| to Linux under GPLv2 rather than the original MIT license.

... and I reckon it's worth being slightly more explicit, e.g.

| Note that for simplicity Arm have chosen to contribute this code
| to Linux under GPLv2 rather than the original MIT license. Arm is the
| sole copyright holder for this code.

That was previously confirmed at: 

  https://lore.kernel.org/linux-arm-kernel/20210526101723.GA3806@C02TD0UTHF1T.local/

So with that latter wording added to the import patches:

Acked-by: Mark Rutland <mark.rutland@arm.com>

As a heads-up, I believe this will conflict with changes I'm making to
the way symbol aliasing works:

  https://lore.kernel.org/lkml/20220216162229.1076788-1-mark.rutland@arm.com/

... and I reckon we need to fix that conflict in the arm64 tree, either
as a merge resolution or rebasing this atop my series.

Thanks,
Mark.

> 
> Thanks,
> Joey
> 
> [1] https://github.com/ARM-software/optimized-routines/commit/7b91c3cdb12b023004cb4dda30a1aa3424329ce6
> 
> Joey Gouly (3):
>   arm64: lib:  Import latest version of Arm Optimized Routines' strcmp
>   arm64: lib:  Import latest version of Arm Optimized Routines' strncmp
>   Revert "arm64: Mitigate MTE issues with str{n}cmp()"
> 
>  arch/arm64/include/asm/assembler.h |   5 -
>  arch/arm64/include/asm/string.h    |   2 -
>  arch/arm64/lib/strcmp.S            | 240 +++++++++++++++--------------
>  arch/arm64/lib/strncmp.S           | 236 +++++++++++++++++-----------
>  4 files changed, 269 insertions(+), 214 deletions(-)
> 
> -- 
> 2.17.1
> 


* Re: [PATCH v1 1/3] arm64: lib: Import latest version of Arm Optimized Routines' strcmp
From: Russell King (Oracle) @ 2022-02-16 16:44 UTC
  To: Joey Gouly
  Cc: linux-arm-kernel, nd, catalin.marinas, mark.rutland, robin.murphy,
	will

On Tue, Feb 15, 2022 at 05:07:21PM +0000, Joey Gouly wrote:
> Import the latest version of the Arm Optimized Routines strcmp function based
> on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
>   https://github.com/ARM-software/optimized-routines
> 
> This latest version includes MTE support.
> 
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/lib/strcmp.S | 238 +++++++++++++++++++++-------------------
>  1 file changed, 126 insertions(+), 112 deletions(-)
> 
> diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
> index 83bcad72ec97..758de77afd2f 100644
> --- a/arch/arm64/lib/strcmp.S
> +++ b/arch/arm64/lib/strcmp.S
> @@ -1,9 +1,9 @@
>  /* SPDX-License-Identifier: GPL-2.0-only */

Looking at the LICENSE file in the above repository, it appears that
this code is licensed as "MIT OR Apache-2.0 WITH LLVM-exception".
Shouldn't the SPDX line be updated to reflect the origin license of
this code?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: [PATCH v1 0/3] Import Arm Optimized Routines str{n}cmp functions
From: Russell King (Oracle) @ 2022-02-16 16:52 UTC
  To: Mark Rutland
  Cc: Joey Gouly, linux-arm-kernel, nd, catalin.marinas, robin.murphy,
	will

On Wed, Feb 16, 2022 at 04:30:26PM +0000, Mark Rutland wrote:
> On Tue, Feb 15, 2022 at 05:07:20PM +0000, Joey Gouly wrote:
> > Hi all,
> 
> Hi,
> 
> > The previous str{n}cmp routines were not MTE safe, so were disabled in:
> >   59a68d413808 ("arm64: Mitigate MTE issues with str{n}cmp()")
> > 
> > The Arm Optimized Routines repository recently merged [1] their strcmp.S and
> > strcmp-mte.S files into a single file that is MTE safe.
> > 
> > Therefore we can import these new MTE safe functions and remove the workaround.
> > 
> > I did some light boot tests using QEMU.
> 
> Nice!
> 
> As a minor thing, on the two import patches, I think we should be more
> explicit about what's going on with licensing, so that it's clear the
> license change relative to upstream is intended and legitimate.
> 
> For example, in commit:
> 
>   758602c04409d8c5 ("arm64: Import latest version of Cortex Strings' strcmp")
> 
> We had a note in the commit message:
> 
> | Note that for simplicity Arm have chosen to contribute this code
> | to Linux under GPLv2 rather than the original MIT license.
> 
> ... and I reckon it's worth being slightly more explicit, e.g.
> 
> | Note that for simplicity Arm have chosen to contribute this code
> | to Linux under GPLv2 rather than the original MIT license. Arm is the
> | sole copyright holder for this code.
> 
> That was previously confirmed at: 
> 
>   https://lore.kernel.org/linux-arm-kernel/20210526101723.GA3806@C02TD0UTHF1T.local/
> 
> So with that latter wording added to the import patches:
> 
> Acked-by: Mark Rutland <mark.rutland@arm.com>

As MIT is regarded as compatible with GPL v2 (MIT code can be integrated
into GPL v2 projects), it seems rather strange.

In any case, it's confusing to see the SPDX license identifiers saying
it's GPLv2 only code, but when you look at the LICENSE file in the
source repository, it's quite different. This probably needs some
explanation in the files, or the SPDX saying that it's GPLv2 only or
MIT or ...

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


* Re: [PATCH v1 1/3] arm64: lib: Import latest version of Arm Optimized Routines' strcmp
From: Robin Murphy @ 2022-02-16 18:36 UTC
  To: Russell King (Oracle), Joey Gouly
  Cc: linux-arm-kernel, nd, catalin.marinas, mark.rutland, will

On 2022-02-16 16:44, Russell King (Oracle) wrote:
> On Tue, Feb 15, 2022 at 05:07:21PM +0000, Joey Gouly wrote:
>> Import the latest version of the Arm Optimized Routines strcmp function based
>> on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
>>    https://github.com/ARM-software/optimized-routines
>>
>> This latest version includes MTE support.
>>
>> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> ---
>>   arch/arm64/lib/strcmp.S | 238 +++++++++++++++++++++-------------------
>>   1 file changed, 126 insertions(+), 112 deletions(-)
>>
>> diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
>> index 83bcad72ec97..758de77afd2f 100644
>> --- a/arch/arm64/lib/strcmp.S
>> +++ b/arch/arm64/lib/strcmp.S
>> @@ -1,9 +1,9 @@
>>   /* SPDX-License-Identifier: GPL-2.0-only */
> 
> Looking at the LICENSE file in the above repository, it appears that
> this code is licensed as "MIT OR Apache-2.0 WITH LLVM-exception".
> Shouldn't the SPDX line be updated to reflect the origin license of
> this code?

This is noted in the commits which first imported implementations from 
Arm Optimized Routines (020b199bc70d and earlier):

  "Note that for simplicity Arm have chosen to contribute this code
   to Linux under GPLv2 rather than the original MIT license."

Apologies for the confusion - I should have mentioned that to Joey 
beforehand, if I hadn't completely forgotten about it. I think it's just 
been implicit that we'd continue to follow the same approach going forward.

Thanks,
Robin.


* Re: [PATCH v1 1/3] arm64: lib: Import latest version of Arm Optimized Routines' strcmp
From: Joey Gouly @ 2022-02-17 10:23 UTC
  To: Robin Murphy
  Cc: Russell King (Oracle), linux-arm-kernel, nd, catalin.marinas,
	mark.rutland, will

On Wed, Feb 16, 2022 at 06:36:12PM +0000, Robin Murphy wrote:
> On 2022-02-16 16:44, Russell King (Oracle) wrote:
> > On Tue, Feb 15, 2022 at 05:07:21PM +0000, Joey Gouly wrote:
> > > Import the latest version of the Arm Optimized Routines strcmp function based
> > > on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
> > >    https://github.com/ARM-software/optimized-routines
> > > 
> > > This latest version includes MTE support.
> > > 
> > > Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> > > Cc: Robin Murphy <robin.murphy@arm.com>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Will Deacon <will@kernel.org>
> > > ---
> > >   arch/arm64/lib/strcmp.S | 238 +++++++++++++++++++++-------------------
> > >   1 file changed, 126 insertions(+), 112 deletions(-)
> > > 
> > > diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
> > > index 83bcad72ec97..758de77afd2f 100644
> > > --- a/arch/arm64/lib/strcmp.S
> > > +++ b/arch/arm64/lib/strcmp.S
> > > @@ -1,9 +1,9 @@
> > >   /* SPDX-License-Identifier: GPL-2.0-only */
> > 
> > Looking at the LICENSE file in the above repository, it appears that
> > this code is licensed as "MIT OR Apache-2.0 WITH LLVM-exception".
> > Shouldn't the SPDX line be updated to reflect the origin license of
> > this code?
> 
> This is noted in the commits which first imported implementations from Arm
> Optimized Routines (020b199bc70d and earlier):
> 
>  "Note that for simplicity Arm have chosen to contribute this code
>   to Linux under GPLv2 rather than the original MIT license."
> 
> Apologies for the confusion - I should have mentioned that to Joey
> beforehand, if I hadn't completely forgotten about it. I think it's just
> been implicit that we'd continue to follow the same approach going forward.
> 

Yes, I didn't mention it because I was just being implicit.

I've added a note about it in the commit message, and will send out a v2 after
deciding what we should do about the conflict with
https://lore.kernel.org/linux-arm-kernel/20220216162229.1076788-1-mark.rutland@arm.com/
(since I may have to rebase the patches onto a different base).

Thanks,
Joey


* Re: [PATCH v1 1/3] arm64: lib: Import latest version of Arm Optimized Routines' strcmp
From: Will Deacon @ 2022-02-25 14:21 UTC
  To: Joey Gouly
  Cc: Robin Murphy, Russell King (Oracle), linux-arm-kernel, nd,
	catalin.marinas, mark.rutland

On Thu, Feb 17, 2022 at 10:23:09AM +0000, Joey Gouly wrote:
> On Wed, Feb 16, 2022 at 06:36:12PM +0000, Robin Murphy wrote:
> > On 2022-02-16 16:44, Russell King (Oracle) wrote:
> > > On Tue, Feb 15, 2022 at 05:07:21PM +0000, Joey Gouly wrote:
> > > > Import the latest version of the Arm Optimized Routines strcmp function based
> > > > on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
> > > >    https://github.com/ARM-software/optimized-routines
> > > > 
> > > > This latest version includes MTE support.
> > > > 
> > > > Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> > > > Cc: Robin Murphy <robin.murphy@arm.com>
> > > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > > Cc: Will Deacon <will@kernel.org>
> > > > ---
> > > >   arch/arm64/lib/strcmp.S | 238 +++++++++++++++++++++-------------------
> > > >   1 file changed, 126 insertions(+), 112 deletions(-)
> > > > 
> > > > diff --git a/arch/arm64/lib/strcmp.S b/arch/arm64/lib/strcmp.S
> > > > index 83bcad72ec97..758de77afd2f 100644
> > > > --- a/arch/arm64/lib/strcmp.S
> > > > +++ b/arch/arm64/lib/strcmp.S
> > > > @@ -1,9 +1,9 @@
> > > >   /* SPDX-License-Identifier: GPL-2.0-only */
> > > 
> > > Looking at the LICENSE file in the above repository, it appears that
> > > this code is licensed as "MIT OR Apache-2.0 WITH LLVM-exception".
> > > Shouldn't the SPDX line be updated to reflect the origin license of
> > > this code?
> > 
> > This is noted in the commits which first imported implementations from Arm
> > Optimized Routines (020b199bc70d and earlier):
> > 
> >  "Note that for simplicity Arm have chosen to contribute this code
> >   to Linux under GPLv2 rather than the original MIT license."
> > 
> > Apologies for the confusion - I should have mentioned that to Joey
> > beforehand, if I hadn't completely forgotten about it. I think it's just
> > been implicit that we'd continue to follow the same approach going forward.
> > 
> 
> Yes, I didn't mention it because I was just being implicit.
> 
> I've added a note about it in the commit message, and will send out a v2 after
> deciding what we should do about the conflict with
> https://lore.kernel.org/linux-arm-kernel/20220216162229.1076788-1-mark.rutland@arm.com/
> (since I may have to rebase the patches onto a different base).

Please just send a new version based on -rc3 and we can figure out the
conflicts when I merge it together.

Will
