* [PATCH 1/4] powerpc/64: Make exception table clearer in __copy_tofrom_user_base
@ 2016-11-03 5:19 Paul Mackerras
2016-11-03 5:20 ` [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines Paul Mackerras
From: Paul Mackerras @ 2016-11-03 5:19 UTC (permalink / raw)
To: linuxppc-dev
This aims to make the generation of exception table entries for the
loads and stores in __copy_tofrom_user_base clearer and easier to
verify. Instead of having a series of local labels on the loads
and stores, with a series of corresponding labels later for the
exception handlers, we now use macros to generate exception table
entries at the point of each load and store that could potentially
trap. We do this with the macros extable, lex (load exception),
and stex (store exception). These macros are used right before
the load or store to which they apply.
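For illustration, given the macro definitions added below, a line such as

	lex;	ld	r7,0(r4)

expands to roughly

100:
	.section __ex_table,"a"
	.align 3
	.llong 100b, .Lld_exc - r3_offset
	.previous
	ld	r7,0(r4)

i.e. the local label lands on the load itself and the exception table
entry records, for that load, a fixup address of .Lld_exc - r3_offset.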
Some complexity is introduced by the fact that we have some more work
to do after hitting an exception. After an exception on a load, we
need to clear the rest of the destination buffer (if possible), and
then return the number of bytes not copied. After an exception on a
store, we only need to return the number of bytes not copied. In
either case, the fixup code uses r3 as the current pointer into the
destination buffer, that is, the address of the first byte of the
destination that has not been modified. However, at various points
in the copy loops, r3 can be 4, 8, 16 or 24 bytes behind that point.
To express this offset in an understandable way, we define a symbol
r3_offset which is updated at various points so that it is equal to the
difference between the address of the first unmodified byte of the
destination and the value in r3. (In fact it only needs to be
accurate at the point of each lex or stex macro invocation.)
The rules for updating r3_offset are as follows (a short worked
example follows the list):
* It starts out at 0
* An addi r3,r3,N instruction decreases r3_offset by N
* A store instruction (stb, sth, stw, std) to N(r3)
increases r3_offset by the width of the store (1, 2, 4, 8)
* A store with update instruction (stbu, sthu, stwu, stdu) to N(r3)
sets r3_offset to the width of the store.
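As a short worked example of these rules (a hand-made sketch that
follows the same pattern as the main copy loop below, not an excerpt
from it):

	addi	r3,r3,-16
r3_offset = 16			/* r3 moved back 16, nothing stored yet */
	...
	addi	r3,r3,16
r3_offset = 0
stex;	std	r9,0(r3)	/* a fault here needs nothing added to r3 */
r3_offset = 8			/* the 8-byte store advances the first unmodified byte */
stex;	std	r8,8(r3)	/* a fault here needs 8 added to r3 */
r3_offset = 16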
There is some trickiness to the way that the lex and stex macros and
the associated exception handlers work. I would have liked to use
the current value of r3_offset in the name of the symbol used as
the exception handler, as in "extable .Lld_exc_$(r3_offset)" and then
have symbols .Lld_exc_0, .Lld_exc_8, .Lld_exc_16 etc. corresponding
to the offsets that needed to be added to r3. However, I couldn't
see a way to do that with gas.
Instead, the exception handler address is .Lld_exc - r3_offset or
.Lst_exc - r3_offset; that is, the distance ahead of .Lld_exc/.Lst_exc
at which we resume execution is equal to the amount that we need to
add to r3. This works because r3_offset is always a small multiple of
4 and our instructions are 4 bytes long. It means that immediately
before .Lld_exc and .Lst_exc there is a sequence of instructions which
adds 4, 8, 16 or 24 to r3, depending on where in the sequence we
enter; on average the sequence adjusts r3 by 4 per instruction executed.
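Concretely, the load fixup sequence below lays out like this (byte
offsets relative to .Lld_exc shown as comments):

	addi	r3,r3,8		/* .Lld_exc - 24 */
	nop			/* .Lld_exc - 20 */
	addi	r3,r3,8		/* .Lld_exc - 16 */
	nop			/* .Lld_exc - 12 */
	addi	r3,r3,8		/* .Lld_exc - 8 */
	nop			/* .Lld_exc - 4 */
.Lld_exc:

so a load whose extable entry was generated with r3_offset = 16
resumes at .Lld_exc - 16 and executes two addi r3,r3,8 instructions
(plus two nops) before falling into .Lld_exc, adding exactly the 16
bytes by which r3 was behind.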
We also replace the exception table for the 4k copy loop with a
macro invocation per load or store. These loads and stores all use
exactly the same exception handler, which simply resets the argument
registers r3, r4 and r5 to their original values and redoes the whole
copy using the slower loop.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
arch/powerpc/lib/copyuser_64.S | 579 +++++++++++++++++------------------------
1 file changed, 239 insertions(+), 340 deletions(-)
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 60386b2..1c5247c 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -18,6 +18,36 @@
#define sHd sld /* Shift towards high-numbered address. */
#endif
+/*
+ * These macros are used to generate exception table entries.
+ * The exception handlers below use the original arguments
+ * (stored on the stack) and the point where we're up to in
+ * the destination buffer, i.e. the address of the first
+ * unmodified byte. Generally r3 points into the destination
+ * buffer, but the first unmodified byte is at a variable
+ * offset from r3. In the code below, the symbol r3_offset
+ * is set to indicate the current offset at each point in
+ * the code. This offset is then used as a negative offset
+ * from the exception handler code, and those instructions
+ * before the exception handlers are addi instructions that
+ * adjust r3 to point to the correct place.
+ */
+ .macro extable handler
+100:
+ .section __ex_table,"a"
+ .align 3
+ .llong 100b,\handler
+ .previous
+ .endm
+
+ .macro lex /* exception handler for load */
+ extable .Lld_exc - r3_offset
+ .endm
+
+ .macro stex /* exception handler for store */
+ extable .Lst_exc - r3_offset
+ .endm
+
.align 7
_GLOBAL_TOC(__copy_tofrom_user)
BEGIN_FTR_SECTION
@@ -26,7 +56,7 @@ FTR_SECTION_ELSE
b __copy_tofrom_user_power7
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
_GLOBAL(__copy_tofrom_user_base)
- /* first check for a whole page copy on a page boundary */
+ /* first check for a 4kB copy on a 4kB boundary */
cmpldi cr1,r5,16
cmpdi cr6,r5,4096
or r0,r3,r4
@@ -55,6 +85,7 @@ ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
CPU_FTR_UNALIGNED_LD_STD)
.Ldst_aligned:
addi r3,r3,-16
+r3_offset = 16
BEGIN_FTR_SECTION
andi. r0,r4,7
bne .Lsrc_unaligned
@@ -62,57 +93,69 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
blt cr1,.Ldo_tail /* if < 16 bytes to copy */
srdi r0,r5,5
cmpdi cr1,r0,0
-20: ld r7,0(r4)
-220: ld r6,8(r4)
+lex; ld r7,0(r4)
+lex; ld r6,8(r4)
addi r4,r4,16
mtctr r0
andi. r0,r5,0x10
beq 22f
addi r3,r3,16
+r3_offset = 0
addi r4,r4,-16
mr r9,r7
mr r8,r6
beq cr1,72f
-21: ld r7,16(r4)
-221: ld r6,24(r4)
+21:
+lex; ld r7,16(r4)
+lex; ld r6,24(r4)
addi r4,r4,32
-70: std r9,0(r3)
-270: std r8,8(r3)
-22: ld r9,0(r4)
-222: ld r8,8(r4)
-71: std r7,16(r3)
-271: std r6,24(r3)
+stex; std r9,0(r3)
+r3_offset = 8
+stex; std r8,8(r3)
+r3_offset = 16
+22:
+lex; ld r9,0(r4)
+lex; ld r8,8(r4)
+stex; std r7,16(r3)
+r3_offset = 24
+stex; std r6,24(r3)
addi r3,r3,32
+r3_offset = 0
bdnz 21b
-72: std r9,0(r3)
-272: std r8,8(r3)
+72:
+stex; std r9,0(r3)
+r3_offset = 8
+stex; std r8,8(r3)
+r3_offset = 16
andi. r5,r5,0xf
beq+ 3f
addi r4,r4,16
.Ldo_tail:
addi r3,r3,16
+r3_offset = 0
bf cr7*4+0,246f
-244: ld r9,0(r4)
+lex; ld r9,0(r4)
addi r4,r4,8
-245: std r9,0(r3)
+stex; std r9,0(r3)
addi r3,r3,8
246: bf cr7*4+1,1f
-23: lwz r9,0(r4)
+lex; lwz r9,0(r4)
addi r4,r4,4
-73: stw r9,0(r3)
+stex; stw r9,0(r3)
addi r3,r3,4
1: bf cr7*4+2,2f
-44: lhz r9,0(r4)
+lex; lhz r9,0(r4)
addi r4,r4,2
-74: sth r9,0(r3)
+stex; sth r9,0(r3)
addi r3,r3,2
2: bf cr7*4+3,3f
-45: lbz r9,0(r4)
-75: stb r9,0(r3)
+lex; lbz r9,0(r4)
+stex; stb r9,0(r3)
3: li r3,0
blr
.Lsrc_unaligned:
+r3_offset = 16
srdi r6,r5,3
addi r5,r5,-16
subf r4,r0,r4
@@ -125,58 +168,69 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
add r5,r5,r0
bt cr7*4+0,28f
-24: ld r9,0(r4) /* 3+2n loads, 2+2n stores */
-25: ld r0,8(r4)
+lex; ld r9,0(r4) /* 3+2n loads, 2+2n stores */
+lex; ld r0,8(r4)
sLd r6,r9,r10
-26: ldu r9,16(r4)
+lex; ldu r9,16(r4)
sHd r7,r0,r11
sLd r8,r0,r10
or r7,r7,r6
blt cr6,79f
-27: ld r0,8(r4)
+lex; ld r0,8(r4)
b 2f
-28: ld r0,0(r4) /* 4+2n loads, 3+2n stores */
-29: ldu r9,8(r4)
+28:
+lex; ld r0,0(r4) /* 4+2n loads, 3+2n stores */
+lex; ldu r9,8(r4)
sLd r8,r0,r10
addi r3,r3,-8
+r3_offset = 24
blt cr6,5f
-30: ld r0,8(r4)
+lex; ld r0,8(r4)
sHd r12,r9,r11
sLd r6,r9,r10
-31: ldu r9,16(r4)
+lex; ldu r9,16(r4)
or r12,r8,r12
sHd r7,r0,r11
sLd r8,r0,r10
addi r3,r3,16
+r3_offset = 8
beq cr6,78f
1: or r7,r7,r6
-32: ld r0,8(r4)
-76: std r12,8(r3)
+lex; ld r0,8(r4)
+stex; std r12,8(r3)
+r3_offset = 16
2: sHd r12,r9,r11
sLd r6,r9,r10
-33: ldu r9,16(r4)
+lex; ldu r9,16(r4)
or r12,r8,r12
-77: stdu r7,16(r3)
+stex; stdu r7,16(r3)
+r3_offset = 8
sHd r7,r0,r11
sLd r8,r0,r10
bdnz 1b
-78: std r12,8(r3)
+78:
+stex; std r12,8(r3)
+r3_offset = 16
or r7,r7,r6
-79: std r7,16(r3)
+79:
+stex; std r7,16(r3)
+r3_offset = 24
5: sHd r12,r9,r11
or r12,r8,r12
-80: std r12,24(r3)
+stex; std r12,24(r3)
+r3_offset = 32
bne 6f
li r3,0
blr
6: cmpwi cr1,r5,8
addi r3,r3,32
+r3_offset = 0
sLd r9,r9,r10
ble cr1,7f
-34: ld r0,8(r4)
+lex; ld r0,8(r4)
sHd r7,r0,r11
or r9,r7,r9
7:
@@ -184,7 +238,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
#ifdef __BIG_ENDIAN__
rotldi r9,r9,32
#endif
-94: stw r9,0(r3)
+stex; stw r9,0(r3)
#ifdef __LITTLE_ENDIAN__
rotrdi r9,r9,32
#endif
@@ -193,7 +247,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
#ifdef __BIG_ENDIAN__
rotldi r9,r9,16
#endif
-95: sth r9,0(r3)
+stex; sth r9,0(r3)
#ifdef __LITTLE_ENDIAN__
rotrdi r9,r9,16
#endif
@@ -202,7 +256,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
#ifdef __BIG_ENDIAN__
rotldi r9,r9,8
#endif
-96: stb r9,0(r3)
+stex; stb r9,0(r3)
#ifdef __LITTLE_ENDIAN__
rotrdi r9,r9,8
#endif
@@ -210,47 +264,55 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
blr
.Ldst_unaligned:
+r3_offset = 0
PPC_MTOCRF(0x01,r6) /* put #bytes to 8B bdry into cr7 */
subf r5,r6,r5
li r7,0
cmpldi cr1,r5,16
bf cr7*4+3,1f
-35: lbz r0,0(r4)
-81: stb r0,0(r3)
+ extable .Lld_exc_r7
+ lbz r0,0(r4)
+ extable .Lst_exc_r7
+ stb r0,0(r3)
addi r7,r7,1
1: bf cr7*4+2,2f
-36: lhzx r0,r7,r4
-82: sthx r0,r7,r3
+ extable .Lld_exc_r7
+ lhzx r0,r7,r4
+ extable .Lst_exc_r7
+ sthx r0,r7,r3
addi r7,r7,2
2: bf cr7*4+1,3f
-37: lwzx r0,r7,r4
-83: stwx r0,r7,r3
+ extable .Lld_exc_r7
+ lwzx r0,r7,r4
+ extable .Lst_exc_r7
+ stwx r0,r7,r3
3: PPC_MTOCRF(0x01,r5)
add r4,r6,r4
add r3,r6,r3
b .Ldst_aligned
.Lshort_copy:
+r3_offset = 0
bf cr7*4+0,1f
-38: lwz r0,0(r4)
-39: lwz r9,4(r4)
+lex; lwz r0,0(r4)
+lex; lwz r9,4(r4)
addi r4,r4,8
-84: stw r0,0(r3)
-85: stw r9,4(r3)
+stex; stw r0,0(r3)
+stex; stw r9,4(r3)
addi r3,r3,8
1: bf cr7*4+1,2f
-40: lwz r0,0(r4)
+lex; lwz r0,0(r4)
addi r4,r4,4
-86: stw r0,0(r3)
+stex; stw r0,0(r3)
addi r3,r3,4
2: bf cr7*4+2,3f
-41: lhz r0,0(r4)
+lex; lhz r0,0(r4)
addi r4,r4,2
-87: sth r0,0(r3)
+stex; sth r0,0(r3)
addi r3,r3,2
3: bf cr7*4+3,4f
-42: lbz r0,0(r4)
-88: stb r0,0(r3)
+lex; lbz r0,0(r4)
+stex; stb r0,0(r3)
4: li r3,0
blr
@@ -258,48 +320,34 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
* exception handlers follow
* we have to return the number of bytes not copied
* for an exception on a load, we set the rest of the destination to 0
+ * Note that the number of bytes of instructions for adjusting r3 needs
+ * to equal the amount of the adjustment, due to the trick of using
+ * .Lld_exc - r3_offset as the handler address.
*/
-136:
-137:
+.Lld_exc_r7:
add r3,r3,r7
- b 1f
-130:
-131:
+ b .Lld_exc
+
+ /* adjust by 24 */
addi r3,r3,8
-120:
-320:
-122:
-322:
-124:
-125:
-126:
-127:
-128:
-129:
-133:
+ nop
+ /* adjust by 16 */
addi r3,r3,8
-132:
+ nop
+ /* adjust by 8 */
addi r3,r3,8
-121:
-321:
-344:
-134:
-135:
-138:
-139:
-140:
-141:
-142:
-123:
-144:
-145:
+ nop
/*
- * here we have had a fault on a load and r3 points to the first
- * unmodified byte of the destination
+ * Here we have had a fault on a load and r3 points to the first
+ * unmodified byte of the destination. We use the original arguments
+ * and r3 to work out how much wasn't copied. Since we load some
+ * distance ahead of the stores, we continue copying byte-by-byte until
+ * we hit the load fault again in order to copy as much as possible.
*/
-1: ld r6,-24(r1)
+.Lld_exc:
+ ld r6,-24(r1)
ld r4,-16(r1)
ld r5,-8(r1)
subf r6,r6,r3
@@ -310,9 +358,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
* first see if we can copy any more bytes before hitting another exception
*/
mtctr r5
+r3_offset = 0
+ extable .Lclear_rest
43: lbz r0,0(r4)
addi r4,r4,1
-89: stb r0,0(r3)
+stex; stb r0,0(r3)
addi r3,r3,1
bdnz 43b
li r3,0 /* huh? all copied successfully this time? */
@@ -321,13 +371,15 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
/*
* here we have trapped again, need to clear ctr bytes starting at r3
*/
-143: mfctr r5
+.Lclear_rest:
+ mfctr r5
li r0,0
mr r4,r3
mr r3,r5 /* return the number of bytes not copied */
1: andi. r9,r4,7
beq 3f
-90: stb r0,0(r4)
+ extable 99f
+ stb r0,0(r4)
addic. r5,r5,-1
addi r4,r4,1
bne 1b
@@ -337,133 +389,54 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
andi. r5,r5,7
blt cr1,93f
mtctr r9
+ extable 99f
91: std r0,0(r4)
addi r4,r4,8
bdnz 91b
93: beqlr
mtctr r5
+ extable 99f
92: stb r0,0(r4)
addi r4,r4,1
bdnz 92b
- blr
+99: blr
/*
* exception handlers for stores: we just need to work
* out how many bytes weren't copied
+ * Note that the number of bytes of instructions for adjusting r3 needs
+ * to equal the amount of the adjustment, due to the trick of using
+ * .Lst_exc - r3_offset as the handler address.
*/
-182:
-183:
+.Lst_exc_r7:
add r3,r3,r7
- b 1f
-371:
-180:
+ b .Lst_exc
+
+ /* adjust by 24 */
addi r3,r3,8
-171:
-177:
-179:
+ nop
+ /* adjust by 16 */
addi r3,r3,8
-370:
-372:
-176:
-178:
+ nop
+ /* adjust by 8 */
addi r3,r3,4
-185:
+ /* adjust by 4 */
addi r3,r3,4
-170:
-172:
-345:
-173:
-174:
-175:
-181:
-184:
-186:
-187:
-188:
-189:
-194:
-195:
-196:
-1:
+.Lst_exc:
ld r6,-24(r1)
ld r5,-8(r1)
add r6,r6,r5
- subf r3,r3,r6 /* #bytes not copied */
-190:
-191:
-192:
- blr /* #bytes not copied in r3 */
-
- .section __ex_table,"a"
- .align 3
- .llong 20b,120b
- .llong 220b,320b
- .llong 21b,121b
- .llong 221b,321b
- .llong 70b,170b
- .llong 270b,370b
- .llong 22b,122b
- .llong 222b,322b
- .llong 71b,171b
- .llong 271b,371b
- .llong 72b,172b
- .llong 272b,372b
- .llong 244b,344b
- .llong 245b,345b
- .llong 23b,123b
- .llong 73b,173b
- .llong 44b,144b
- .llong 74b,174b
- .llong 45b,145b
- .llong 75b,175b
- .llong 24b,124b
- .llong 25b,125b
- .llong 26b,126b
- .llong 27b,127b
- .llong 28b,128b
- .llong 29b,129b
- .llong 30b,130b
- .llong 31b,131b
- .llong 32b,132b
- .llong 76b,176b
- .llong 33b,133b
- .llong 77b,177b
- .llong 78b,178b
- .llong 79b,179b
- .llong 80b,180b
- .llong 34b,134b
- .llong 94b,194b
- .llong 95b,195b
- .llong 96b,196b
- .llong 35b,135b
- .llong 81b,181b
- .llong 36b,136b
- .llong 82b,182b
- .llong 37b,137b
- .llong 83b,183b
- .llong 38b,138b
- .llong 39b,139b
- .llong 84b,184b
- .llong 85b,185b
- .llong 40b,140b
- .llong 86b,186b
- .llong 41b,141b
- .llong 87b,187b
- .llong 42b,142b
- .llong 88b,188b
- .llong 43b,143b
- .llong 89b,189b
- .llong 90b,190b
- .llong 91b,191b
- .llong 92b,192b
-
- .text
+ subf r3,r3,r6 /* #bytes not copied in r3 */
+ blr
/*
* Routine to copy a whole page of data, optimized for POWER4.
* On POWER4 it is more than 50% faster than the simple loop
* above (following the .Ldst_aligned label).
*/
+ .macro exc
+ extable .Labort
+ .endm
.Lcopy_page_4K:
std r31,-32(1)
std r30,-40(1)
@@ -482,86 +455,86 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
li r0,5
0: addi r5,r5,-24
mtctr r0
-20: ld r22,640(4)
-21: ld r21,512(4)
-22: ld r20,384(4)
-23: ld r11,256(4)
-24: ld r9,128(4)
-25: ld r7,0(4)
-26: ld r25,648(4)
-27: ld r24,520(4)
-28: ld r23,392(4)
-29: ld r10,264(4)
-30: ld r8,136(4)
-31: ldu r6,8(4)
+exc; ld r22,640(4)
+exc; ld r21,512(4)
+exc; ld r20,384(4)
+exc; ld r11,256(4)
+exc; ld r9,128(4)
+exc; ld r7,0(4)
+exc; ld r25,648(4)
+exc; ld r24,520(4)
+exc; ld r23,392(4)
+exc; ld r10,264(4)
+exc; ld r8,136(4)
+exc; ldu r6,8(4)
cmpwi r5,24
1:
-32: std r22,648(3)
-33: std r21,520(3)
-34: std r20,392(3)
-35: std r11,264(3)
-36: std r9,136(3)
-37: std r7,8(3)
-38: ld r28,648(4)
-39: ld r27,520(4)
-40: ld r26,392(4)
-41: ld r31,264(4)
-42: ld r30,136(4)
-43: ld r29,8(4)
-44: std r25,656(3)
-45: std r24,528(3)
-46: std r23,400(3)
-47: std r10,272(3)
-48: std r8,144(3)
-49: std r6,16(3)
-50: ld r22,656(4)
-51: ld r21,528(4)
-52: ld r20,400(4)
-53: ld r11,272(4)
-54: ld r9,144(4)
-55: ld r7,16(4)
-56: std r28,664(3)
-57: std r27,536(3)
-58: std r26,408(3)
-59: std r31,280(3)
-60: std r30,152(3)
-61: stdu r29,24(3)
-62: ld r25,664(4)
-63: ld r24,536(4)
-64: ld r23,408(4)
-65: ld r10,280(4)
-66: ld r8,152(4)
-67: ldu r6,24(4)
+exc; std r22,648(3)
+exc; std r21,520(3)
+exc; std r20,392(3)
+exc; std r11,264(3)
+exc; std r9,136(3)
+exc; std r7,8(3)
+exc; ld r28,648(4)
+exc; ld r27,520(4)
+exc; ld r26,392(4)
+exc; ld r31,264(4)
+exc; ld r30,136(4)
+exc; ld r29,8(4)
+exc; std r25,656(3)
+exc; std r24,528(3)
+exc; std r23,400(3)
+exc; std r10,272(3)
+exc; std r8,144(3)
+exc; std r6,16(3)
+exc; ld r22,656(4)
+exc; ld r21,528(4)
+exc; ld r20,400(4)
+exc; ld r11,272(4)
+exc; ld r9,144(4)
+exc; ld r7,16(4)
+exc; std r28,664(3)
+exc; std r27,536(3)
+exc; std r26,408(3)
+exc; std r31,280(3)
+exc; std r30,152(3)
+exc; stdu r29,24(3)
+exc; ld r25,664(4)
+exc; ld r24,536(4)
+exc; ld r23,408(4)
+exc; ld r10,280(4)
+exc; ld r8,152(4)
+exc; ldu r6,24(4)
bdnz 1b
-68: std r22,648(3)
-69: std r21,520(3)
-70: std r20,392(3)
-71: std r11,264(3)
-72: std r9,136(3)
-73: std r7,8(3)
-74: addi r4,r4,640
-75: addi r3,r3,648
+exc; std r22,648(3)
+exc; std r21,520(3)
+exc; std r20,392(3)
+exc; std r11,264(3)
+exc; std r9,136(3)
+exc; std r7,8(3)
+ addi r4,r4,640
+ addi r3,r3,648
bge 0b
mtctr r5
-76: ld r7,0(4)
-77: ld r8,8(4)
-78: ldu r9,16(4)
+exc; ld r7,0(4)
+exc; ld r8,8(4)
+exc; ldu r9,16(4)
3:
-79: ld r10,8(4)
-80: std r7,8(3)
-81: ld r7,16(4)
-82: std r8,16(3)
-83: ld r8,24(4)
-84: std r9,24(3)
-85: ldu r9,32(4)
-86: stdu r10,32(3)
+exc; ld r10,8(4)
+exc; std r7,8(3)
+exc; ld r7,16(4)
+exc; std r8,16(3)
+exc; ld r8,24(4)
+exc; std r9,24(3)
+exc; ldu r9,32(4)
+exc; stdu r10,32(3)
bdnz 3b
4:
-87: ld r10,8(4)
-88: std r7,8(3)
-89: std r8,16(3)
-90: std r9,24(3)
-91: std r10,32(3)
+exc; ld r10,8(4)
+exc; std r7,8(3)
+exc; std r8,16(3)
+exc; std r9,24(3)
+exc; std r10,32(3)
9: ld r20,-120(1)
ld r21,-112(1)
ld r22,-104(1)
@@ -581,7 +554,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
* on an exception, reset to the beginning and jump back into the
* standard __copy_tofrom_user
*/
-100: ld r20,-120(1)
+.Labort:
+ ld r20,-120(1)
ld r21,-112(1)
ld r22,-104(1)
ld r23,-96(1)
@@ -597,79 +571,4 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
ld r4,-16(r1)
li r5,4096
b .Ldst_aligned
-
- .section __ex_table,"a"
- .align 3
- .llong 20b,100b
- .llong 21b,100b
- .llong 22b,100b
- .llong 23b,100b
- .llong 24b,100b
- .llong 25b,100b
- .llong 26b,100b
- .llong 27b,100b
- .llong 28b,100b
- .llong 29b,100b
- .llong 30b,100b
- .llong 31b,100b
- .llong 32b,100b
- .llong 33b,100b
- .llong 34b,100b
- .llong 35b,100b
- .llong 36b,100b
- .llong 37b,100b
- .llong 38b,100b
- .llong 39b,100b
- .llong 40b,100b
- .llong 41b,100b
- .llong 42b,100b
- .llong 43b,100b
- .llong 44b,100b
- .llong 45b,100b
- .llong 46b,100b
- .llong 47b,100b
- .llong 48b,100b
- .llong 49b,100b
- .llong 50b,100b
- .llong 51b,100b
- .llong 52b,100b
- .llong 53b,100b
- .llong 54b,100b
- .llong 55b,100b
- .llong 56b,100b
- .llong 57b,100b
- .llong 58b,100b
- .llong 59b,100b
- .llong 60b,100b
- .llong 61b,100b
- .llong 62b,100b
- .llong 63b,100b
- .llong 64b,100b
- .llong 65b,100b
- .llong 66b,100b
- .llong 67b,100b
- .llong 68b,100b
- .llong 69b,100b
- .llong 70b,100b
- .llong 71b,100b
- .llong 72b,100b
- .llong 73b,100b
- .llong 74b,100b
- .llong 75b,100b
- .llong 76b,100b
- .llong 77b,100b
- .llong 78b,100b
- .llong 79b,100b
- .llong 80b,100b
- .llong 81b,100b
- .llong 82b,100b
- .llong 83b,100b
- .llong 84b,100b
- .llong 85b,100b
- .llong 86b,100b
- .llong 87b,100b
- .llong 88b,100b
- .llong 89b,100b
- .llong 90b,100b
- .llong 91b,100b
EXPORT_SYMBOL(__copy_tofrom_user)
--
2.7.4
* [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines
2016-11-03 5:19 [PATCH 1/4] powerpc/64: Make exception table clearer in __copy_tofrom_user_base Paul Mackerras
@ 2016-11-03 5:20 ` Paul Mackerras
2016-11-03 7:59 ` kbuild test robot
2016-11-03 23:39 ` [PATCH 2/4 v2] " Paul Mackerras
2016-11-03 5:21 ` [PATCH 3/4] selftests/powerpc/64: Test exception cases in copy_tofrom_user Paul Mackerras
2016-11-03 5:23 ` [PATCH 4/4] powerpc/64: Copy as much as possible in __copy_tofrom_user Paul Mackerras
From: Paul Mackerras @ 2016-11-03 5:20 UTC (permalink / raw)
To: linuxppc-dev
The hand-coded assembler 64-bit copy routines include feature sections
that select one code path or another depending on which CPU we are
executing on. The self-tests for these copy routines end up testing
just one path. This adds a mechanism for selecting any desired code
path at compile time, and makes 2 or 3 versions of each test, each
using a different code path, so as to cover all the possible paths.
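The mechanism, in rough outline: in the selftest build, the kernel's
feature-fixup macros are redefined as assembler conditionals driven by
a test_feature symbol, and the copy routines set test_feature from
SELFTEST_CASE just before each feature section; each test binary
(copyuser_64_t0, copyuser_64_t1, ...) is then built with a different
-D SELFTEST_CASE=N. A minimal sketch of how the pieces fit together
(the real definitions are in the patch below; the instructions inside
the section are placeholders):

/* selftest asm/ppc_asm.h */
#define BEGIN_FTR_SECTION		.if test_feature
#define FTR_SECTION_ELSE		.else
#define ALT_FTR_SECTION_END(x, y)	.endif

/* in copyuser_64.S / memcpy_64.S */
test_feature = (SELFTEST_CASE == 1)
BEGIN_FTR_SECTION
	/* variant assembled when SELFTEST_CASE == 1 (Power6 case) */
FTR_SECTION_ELSE
	/* variant assembled for the other cases */
ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
		    CPU_FTR_UNALIGNED_LD_STD)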
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
arch/powerpc/lib/copyuser_64.S | 7 +++++
arch/powerpc/lib/copyuser_power7.S | 21 ++++++++-------
arch/powerpc/lib/memcpy_64.S | 9 +++++--
arch/powerpc/lib/memcpy_power7.S | 22 ++++++++--------
tools/testing/selftests/powerpc/copyloops/Makefile | 30 +++++++++++++++++-----
.../selftests/powerpc/copyloops/asm/ppc_asm.h | 21 ++++++++-------
6 files changed, 68 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 1c5247c..668d816 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -10,6 +10,11 @@
#include <asm/ppc_asm.h>
#include <asm/export.h>
+#ifndef SELFTEST_CASE
+/* 0 == most CPUs, 1 == POWER6, 2 == Cell */
+#define SELFTEST_CASE 0
+#endif
+
#ifdef __BIG_ENDIAN__
#define sLd sld /* Shift towards low-numbered address. */
#define sHd srd /* Shift towards high-numbered address. */
@@ -77,6 +82,7 @@ _GLOBAL(__copy_tofrom_user_base)
* At the time of writing the only CPU that has this combination of bits
* set is Power6.
*/
+test_feature = (SELFTEST_CASE == 1)
BEGIN_FTR_SECTION
nop
FTR_SECTION_ELSE
@@ -86,6 +92,7 @@ ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
.Ldst_aligned:
addi r3,r3,-16
r3_offset = 16
+test_feature = (SELFTEST_CASE == 0)
BEGIN_FTR_SECTION
andi. r0,r4,7
bne .Lsrc_unaligned
diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index da0c568..b2f7211 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -19,6 +19,11 @@
*/
#include <asm/ppc_asm.h>
+#ifndef SELFTEST_CASE
+/* 0 == don't use VMX, 1 == use VMX */
+#define SELFTEST_CASE 0
+#endif
+
#ifdef __BIG_ENDIAN__
#define LVS(VRT,RA,RB) lvsl VRT,RA,RB
#define VPERM(VRT,VRA,VRB,VRC) vperm VRT,VRA,VRB,VRC
@@ -92,7 +97,6 @@
_GLOBAL(__copy_tofrom_user_power7)
-#ifdef CONFIG_ALTIVEC
cmpldi r5,16
cmpldi cr1,r5,4096
@@ -101,16 +105,11 @@ _GLOBAL(__copy_tofrom_user_power7)
std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
blt .Lshort_copy
- bgt cr1,.Lvmx_copy
-#else
- cmpldi r5,16
- std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
- std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
- std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
-
- blt .Lshort_copy
-#endif
+test_feature = SELFTEST_CASE
+BEGIN_FTR_SECTION
+ bgt cr1,.Lvmx_copy
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC_COMP)
.Lnonvmx_copy:
/* Get the source 8B aligned */
@@ -290,8 +289,8 @@ err1; stb r0,0(r3)
addi r1,r1,STACKFRAMESIZE
b .Lnonvmx_copy
-#ifdef CONFIG_ALTIVEC
.Lvmx_copy:
+#ifdef CONFIG_ALTIVEC
mflr r0
std r0,16(r1)
stdu r1,-STACKFRAMESIZE(r1)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index f4d6088..ea363ad 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -10,6 +10,11 @@
#include <asm/ppc_asm.h>
#include <asm/export.h>
+#ifndef SELFTEST_CASE
+/* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
+#define SELFTEST_CASE 0
+#endif
+
.align 7
_GLOBAL_TOC(memcpy)
BEGIN_FTR_SECTION
@@ -19,9 +24,7 @@ BEGIN_FTR_SECTION
std r3,-STACKFRAMESIZE+STK_REG(R31)(r1) /* save destination pointer for return value */
#endif
FTR_SECTION_ELSE
-#ifndef SELFTEST
b memcpy_power7
-#endif
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
#ifdef __LITTLE_ENDIAN__
/* dumb little-endian memcpy that will get replaced at runtime */
@@ -45,6 +48,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
cleared.
At the time of writing the only CPU that has this combination of bits
set is Power6. */
+test_feature = (SELFTEST_CASE == 1)
BEGIN_FTR_SECTION
nop
FTR_SECTION_ELSE
@@ -53,6 +57,7 @@ ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
CPU_FTR_UNALIGNED_LD_STD)
.Ldst_aligned:
addi r3,r3,-16
+test_feature = (SELFTEST_CASE == 0)
BEGIN_FTR_SECTION
andi. r0,r4,7
bne .Lsrc_unaligned
diff --git a/arch/powerpc/lib/memcpy_power7.S b/arch/powerpc/lib/memcpy_power7.S
index 786234f..c4f4e3b 100644
--- a/arch/powerpc/lib/memcpy_power7.S
+++ b/arch/powerpc/lib/memcpy_power7.S
@@ -19,7 +19,10 @@
*/
#include <asm/ppc_asm.h>
-_GLOBAL(memcpy_power7)
+#ifndef SELFTEST_CASE
+/* 0 == don't use VMX, 1 == use VMX */
+#define SELFTEST_CASE 0
+#endif
#ifdef __BIG_ENDIAN__
#define LVS(VRT,RA,RB) lvsl VRT,RA,RB
@@ -29,21 +32,16 @@ _GLOBAL(memcpy_power7)
#define VPERM(VRT,VRA,VRB,VRC) vperm VRT,VRB,VRA,VRC
#endif
-#ifdef CONFIG_ALTIVEC
+_GLOBAL(memcpy_power7)
cmpldi r5,16
cmpldi cr1,r5,4096
-
std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
-
blt .Lshort_copy
- bgt cr1,.Lvmx_copy
-#else
- cmpldi r5,16
- std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
-
- blt .Lshort_copy
-#endif
+test_feature = SELFTEST_CASE
+BEGIN_FTR_SECTION
+ bgt cr1, .Lvmx_copy
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC_COMP)
.Lnonvmx_copy:
/* Get the source 8B aligned */
@@ -223,8 +221,8 @@ _GLOBAL(memcpy_power7)
addi r1,r1,STACKFRAMESIZE
b .Lnonvmx_copy
-#ifdef CONFIG_ALTIVEC
.Lvmx_copy:
+#ifdef CONFIG_ALTIVEC
mflr r0
std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
index 384843e..1b351c3 100644
--- a/tools/testing/selftests/powerpc/copyloops/Makefile
+++ b/tools/testing/selftests/powerpc/copyloops/Makefile
@@ -7,17 +7,35 @@ CFLAGS += -maltivec
# Use our CFLAGS for the implicit .S rule
ASFLAGS = $(CFLAGS)
-TEST_PROGS := copyuser_64 copyuser_power7 memcpy_64 memcpy_power7
+TEST_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
+ copyuser_p7_t0 copyuser_p7_t1 \
+ memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
+ memcpy_p7_t0 memcpy_p7_t1
+
EXTRA_SOURCES := validate.c ../harness.c
all: $(TEST_PROGS)
-copyuser_64: CPPFLAGS += -D COPY_LOOP=test___copy_tofrom_user_base
-copyuser_power7: CPPFLAGS += -D COPY_LOOP=test___copy_tofrom_user_power7
-memcpy_64: CPPFLAGS += -D COPY_LOOP=test_memcpy
-memcpy_power7: CPPFLAGS += -D COPY_LOOP=test_memcpy_power7
+copyuser_64_t%: copyuser_64.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_base \
+ -D SELFTEST_CASE=$(subst copyuser_64_t,,$@) -o $@ $^
+
+copyuser_p7_t%: copyuser_power7.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_power7 \
+ -D SELFTEST_CASE=$(subst copyuser_p7_t,,$@) -o $@ $^
+
+# Strictly speaking, we only need the memcpy_64 test cases for big-endian
+memcpy_64_t%: memcpy_64.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test_memcpy \
+ -D SELFTEST_CASE=$(subst memcpy_64_t,,$@) -o $@ $^
-$(TEST_PROGS): $(EXTRA_SOURCES)
+memcpy_p7_t%: memcpy_power7.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test_memcpy_power7 \
+ -D SELFTEST_CASE=$(subst memcpy_p7_t,,$@) -o $@ $^
include ../../lib.mk
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
index 50ae7d2..7eb0cd6 100644
--- a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
@@ -40,17 +40,16 @@ FUNC_START(enter_vmx_copy)
FUNC_START(exit_vmx_copy)
blr
-FUNC_START(memcpy_power7)
- blr
-
-FUNC_START(__copy_tofrom_user_power7)
- blr
-
FUNC_START(__copy_tofrom_user_base)
blr
-#define BEGIN_FTR_SECTION
-#define FTR_SECTION_ELSE
-#define ALT_FTR_SECTION_END_IFCLR(x)
-#define ALT_FTR_SECTION_END(x, y)
-#define END_FTR_SECTION_IFCLR(x)
+#define BEGIN_FTR_SECTION .if test_feature
+#define FTR_SECTION_ELSE .else
+#define ALT_FTR_SECTION_END_IFCLR(x) .endif
+#define ALT_FTR_SECTION_END_IFSET(x) .endif
+#define ALT_FTR_SECTION_END(x, y) .endif
+#define END_FTR_SECTION_IFCLR(x) .endif
+#define END_FTR_SECTION_IFSET(x) .endif
+
+/* Default to taking the first of any alternative feature sections */
+test_feature = 1
--
2.10.1
* [PATCH 3/4] selftests/powerpc/64: Test exception cases in copy_tofrom_user
2016-11-03 5:19 [PATCH 1/4] powerpc/64: Make exception table clearer in __copy_tofrom_user_base Paul Mackerras
2016-11-03 5:20 ` [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines Paul Mackerras
@ 2016-11-03 5:21 ` Paul Mackerras
2016-11-03 23:40 ` [PATCH 3/4 v2] " Paul Mackerras
2016-11-03 5:23 ` [PATCH 4/4] powerpc/64: Copy as much as possible in __copy_tofrom_user Paul Mackerras
From: Paul Mackerras @ 2016-11-03 5:21 UTC (permalink / raw)
To: linuxppc-dev
From: Michael Ellerman <mpe@ellerman.id.au>
This adds a set of test cases to test the behaviour of
copy_tofrom_user when exceptions are encountered accessing the
source or destination. Currently, copy_tofrom_user does not always
copy as many bytes as possible when an exception occurs on a store
to the destination, and that is reflected in failures in these tests.
Based on a test program from Anton Blanchard.
[paulus@ozlabs.org - test all three paths, wrote commit description]
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
tools/testing/selftests/powerpc/copyloops/Makefile | 11 +-
.../selftests/powerpc/copyloops/asm/ppc_asm.h | 22 +---
.../powerpc/copyloops/copy_tofrom_user_reference.S | 24 +++++
.../selftests/powerpc/copyloops/exc_validate.c | 117 +++++++++++++++++++++
tools/testing/selftests/powerpc/copyloops/stubs.S | 19 ++++
5 files changed, 173 insertions(+), 20 deletions(-)
create mode 100644 tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
create mode 100644 tools/testing/selftests/powerpc/copyloops/exc_validate.c
create mode 100644 tools/testing/selftests/powerpc/copyloops/stubs.S
diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
index 1b351c3..feb7bf1 100644
--- a/tools/testing/selftests/powerpc/copyloops/Makefile
+++ b/tools/testing/selftests/powerpc/copyloops/Makefile
@@ -10,9 +10,10 @@ ASFLAGS = $(CFLAGS)
TEST_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
copyuser_p7_t0 copyuser_p7_t1 \
memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
- memcpy_p7_t0 memcpy_p7_t1
+ memcpy_p7_t0 memcpy_p7_t1 \
+ copyuser_64_exc_t0 copyuser_64_exc_t1 copyuser_64_exc_t2
-EXTRA_SOURCES := validate.c ../harness.c
+EXTRA_SOURCES := validate.c ../harness.c stubs.S
all: $(TEST_PROGS)
@@ -37,6 +38,12 @@ memcpy_p7_t%: memcpy_power7.S $(EXTRA_SOURCES)
-D COPY_LOOP=test_memcpy_power7 \
-D SELFTEST_CASE=$(subst memcpy_p7_t,,$@) -o $@ $^
+copyuser_64_exc_t%: copyuser_64.S exc_validate.c ../harness.c \
+ copy_tofrom_user_reference.S stubs.S
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_base \
+ -D SELFTEST_CASE=$(subst copyuser_64_exc_t,,$@) -o $@ $^
+
include ../../lib.mk
clean:
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
index 7eb0cd6..03f10f1 100644
--- a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
@@ -1,3 +1,5 @@
+#ifndef __SELFTESTS_POWERPC_PPC_ASM_H
+#define __SELFTESTS_POWERPC_PPC_ASM_H
#include <ppc-asm.h>
#define CONFIG_ALTIVEC
@@ -25,24 +27,6 @@
#define PPC_MTOCRF(A, B) mtocrf A, B
-FUNC_START(enter_vmx_usercopy)
- li r3,1
- blr
-
-FUNC_START(exit_vmx_usercopy)
- li r3,0
- blr
-
-FUNC_START(enter_vmx_copy)
- li r3,1
- blr
-
-FUNC_START(exit_vmx_copy)
- blr
-
-FUNC_START(__copy_tofrom_user_base)
- blr
-
#define BEGIN_FTR_SECTION .if test_feature
#define FTR_SECTION_ELSE .else
#define ALT_FTR_SECTION_END_IFCLR(x) .endif
@@ -53,3 +37,5 @@ FUNC_START(__copy_tofrom_user_base)
/* Default to taking the first of any alternative feature sections */
test_feature = 1
+
+#endif /* __SELFTESTS_POWERPC_PPC_ASM_H */
diff --git a/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S b/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
new file mode 100644
index 0000000..3363b86
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
@@ -0,0 +1,24 @@
+#include <asm/ppc_asm.h>
+
+_GLOBAL(copy_tofrom_user_reference)
+ cmpdi r5,0
+ beq 4f
+
+ mtctr r5
+
+1: lbz r6,0(r4)
+2: stb r6,0(r3)
+ addi r3,r3,1
+ addi r4,r4,1
+ bdnz 1b
+
+3: mfctr r3
+ blr
+
+4: mr r3,r5
+ blr
+
+.section __ex_table,"a"
+ .llong 1b,3b
+ .llong 2b,3b
+.text
diff --git a/tools/testing/selftests/powerpc/copyloops/exc_validate.c b/tools/testing/selftests/powerpc/copyloops/exc_validate.c
new file mode 100644
index 0000000..72e7830
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/exc_validate.c
@@ -0,0 +1,117 @@
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+#include <signal.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+extern char __start___ex_table[];
+extern char __stop___ex_table[];
+
+#if defined(__powerpc64__)
+#define UCONTEXT_NIA(UC) (UC)->uc_mcontext.gp_regs[PT_NIP]
+#elif defined(__powerpc__)
+#define UCONTEXT_NIA(UC) (UC)->uc_mcontext.uc_regs->gregs[PT_NIP]
+#else
+#error implement UCONTEXT_NIA
+#endif
+
+static void segv_handler(int signr, siginfo_t *info, void *ptr)
+{
+ ucontext_t *uc = (ucontext_t *)ptr;
+ unsigned long addr = (unsigned long)info->si_addr;
+ unsigned long *ip = &UCONTEXT_NIA(uc);
+ unsigned long *ex_p = (unsigned long *)__start___ex_table;
+
+ while (ex_p < (unsigned long *)__stop___ex_table) {
+ unsigned long insn, fixup;
+
+ insn = *ex_p++;
+ fixup = *ex_p++;
+
+ if (insn == *ip) {
+ *ip = fixup;
+ return;
+ }
+ }
+
+ printf("No exception table match for NIA %lx ADDR %lx\n", *ip, addr);
+ abort();
+}
+
+static void setup_segv_handler(void)
+{
+ struct sigaction action;
+
+ memset(&action, 0, sizeof(action));
+ action.sa_sigaction = segv_handler;
+ action.sa_flags = SA_SIGINFO;
+ sigaction(SIGSEGV, &action, NULL);
+}
+
+unsigned long COPY_LOOP(void *to, const void *from, unsigned long size);
+unsigned long test_copy_tofrom_user_reference(void *to, const void *from, unsigned long size);
+
+static int total_passed;
+static int total_failed;
+
+static void do_one_test(char *dstp, char *srcp, unsigned long len)
+{
+ unsigned long got, expected;
+
+ got = COPY_LOOP(dstp, srcp, len);
+ expected = test_copy_tofrom_user_reference(dstp, srcp, len);
+
+ if (got != expected) {
+ total_failed++;
+ printf("FAIL from=%p to=%p len=%ld returned %ld, expected %ld\n",
+ srcp, dstp, len, got, expected);
+ //abort();
+ } else
+ total_passed++;
+}
+
+//#define MAX_LEN 512
+#define MAX_LEN 16
+
+int main(void)
+{
+ int page_size;
+ static char *p, *q;
+ unsigned long src, dst, len;
+
+ page_size = getpagesize();
+ p = mmap(NULL, page_size * 2, PROT_READ|PROT_WRITE,
+ MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
+
+ if (p == MAP_FAILED) {
+ perror("mmap");
+ exit(1);
+ }
+
+ memset(p, 0, page_size);
+
+ setup_segv_handler();
+
+ if (mprotect(p + page_size, page_size, PROT_NONE)) {
+ perror("mprotect");
+ exit(1);
+ }
+
+ q = p + page_size - MAX_LEN;
+
+ for (src = 0; src < MAX_LEN; src++) {
+ for (dst = 0; dst < MAX_LEN; dst++) {
+ for (len = 0; len < MAX_LEN+1; len++) {
+ // printf("from=%p to=%p len=%ld\n", q+dst, q+src, len);
+ do_one_test(q+dst, q+src, len);
+ }
+ }
+ }
+
+ printf("Totals:\n");
+ printf(" Pass: %d\n", total_passed);
+ printf(" Fail: %d\n", total_failed);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/powerpc/copyloops/stubs.S b/tools/testing/selftests/powerpc/copyloops/stubs.S
new file mode 100644
index 0000000..830e2a6
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/stubs.S
@@ -0,0 +1,19 @@
+#include <asm/ppc_asm.h>
+
+FUNC_START(enter_vmx_usercopy)
+ li r3,1
+ blr
+
+FUNC_START(exit_vmx_usercopy)
+ li r3,0
+ blr
+
+FUNC_START(enter_vmx_copy)
+ li r3,1
+ blr
+
+FUNC_START(exit_vmx_copy)
+ blr
+
+FUNC_START(__copy_tofrom_user_base)
+ blr
--
2.10.1
* [PATCH 4/4] powerpc/64: Copy as much as possible in __copy_tofrom_user
2016-11-03 5:19 [PATCH 1/4] powerpc/64: Make exception table clearer in __copy_tofrom_user_base Paul Mackerras
2016-11-03 5:20 ` [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines Paul Mackerras
2016-11-03 5:21 ` [PATCH 3/4] selftests/powerpc/64: Test exception cases in copy_tofrom_user Paul Mackerras
@ 2016-11-03 5:23 ` Paul Mackerras
From: Paul Mackerras @ 2016-11-03 5:23 UTC (permalink / raw)
To: linuxppc-dev
In __copy_tofrom_user, if we encounter an exception on a store, we
stop copying and return the number of bytes not copied. However,
if the store is wider than one byte and is to an unaligned address,
it is possible that the store operand overlaps a page boundary
and the exception occurred on the latter part of the store operand,
meaning that it would be possible to copy a few more bytes. Since
copy_to_user is generally expected to copy as much as possible,
it would be better to copy those extra few bytes. This adds code
to do that. Since this edge case is not performance-critical,
the code has been written to be compact rather than as fast as
possible.
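As a worked example (with made-up numbers): suppose the destination
buffer ends exactly at a page boundary, the next page is unmapped, and
the copy loop issues an 8-byte std whose target starts 5 bytes before
that boundary. The store faults because its last 3 bytes fall in the
unmapped page, even though the first 5 bytes of the store operand are
writable. Previously those 5 bytes would simply have been counted as
not copied; with this patch, the .Lst_exc fixup sees that the
destination pointer is not 8-byte aligned and copies byte by byte from
the saved source pointer until the pointer reaches the (8-byte
aligned) page boundary or a byte access faults, so only the genuinely
inaccessible bytes are reported as not copied.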
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
arch/powerpc/lib/copyuser_64.S | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 668d816..c479256 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -409,8 +409,8 @@ stex; stb r0,0(r3)
99: blr
/*
- * exception handlers for stores: we just need to work
- * out how many bytes weren't copied
+ * exception handlers for stores: we need to work out how many bytes
+ * weren't copied, and we may need to copy some more.
* Note that the number of bytes of instructions for adjusting r3 needs
* to equal the amount of the adjustment, due to the trick of using
* .Lst_exc - r3_offset as the handler address.
@@ -430,10 +430,27 @@ stex; stb r0,0(r3)
/* adjust by 4 */
addi r3,r3,4
.Lst_exc:
- ld r6,-24(r1)
- ld r5,-8(r1)
- add r6,r6,r5
- subf r3,r3,r6 /* #bytes not copied in r3 */
+ ld r6,-24(r1) /* original destination pointer */
+ ld r4,-16(r1) /* original source pointer */
+ ld r5,-8(r1) /* original number of bytes */
+ add r7,r6,r5
+ /*
+ * If the destination pointer isn't 8-byte aligned,
+ * we may have got the exception as a result of a
+ * store that overlapped a page boundary, so we may be
+ * able to copy a few more bytes.
+ */
+17: andi. r0,r3,7
+ beq 19f
+ subf r8,r6,r3 /* #bytes copied */
+ extable 19f
+ lbzx r0,r8,r4
+ extable 19f
+ stb r0,0(r3)
+ addi r3,r3,1
+ cmpld r3,r7
+ blt 17b
+19: subf r3,r3,r7 /* #bytes not copied in r3 */
blr
/*
--
2.10.1
* Re: [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines
2016-11-03 5:20 ` [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines Paul Mackerras
@ 2016-11-03 7:59 ` kbuild test robot
2016-11-03 23:39 ` [PATCH 2/4 v2] " Paul Mackerras
From: kbuild test robot @ 2016-11-03 7:59 UTC (permalink / raw)
To: Paul Mackerras; +Cc: kbuild-all, linuxppc-dev
[-- Attachment #1: Type: text/plain, Size: 1343 bytes --]
Hi Paul,
[auto build test ERROR on linus/master]
[also build test ERROR on v4.9-rc3 next-20161028]
[cannot apply to powerpc/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Paul-Mackerras/powerpc-64-Make-exception-table-clearer-in-__copy_tofrom_user_base/20161103-133058
config: powerpc-maple_defconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=powerpc
All errors (new ones prefixed by >>):
>> arch/powerpc/lib/built-in.o:(__ftr_fixup+0x360): undefined reference to `CPU_FTR_ALTIVEC_COMP'
arch/powerpc/lib/built-in.o:(__ftr_fixup+0x368): undefined reference to `CPU_FTR_ALTIVEC_COMP'
arch/powerpc/lib/built-in.o:(__ftr_fixup+0x390): undefined reference to `CPU_FTR_ALTIVEC_COMP'
arch/powerpc/lib/built-in.o:(__ftr_fixup+0x398): undefined reference to `CPU_FTR_ALTIVEC_COMP'
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 16898 bytes --]
* [PATCH 2/4 v2] selftests/powerpc/64: Test all paths through copy routines
2016-11-03 5:20 ` [PATCH 2/4] selftests/powerpc/64: Test all paths through copy routines Paul Mackerras
2016-11-03 7:59 ` kbuild test robot
@ 2016-11-03 23:39 ` Paul Mackerras
From: Paul Mackerras @ 2016-11-03 23:39 UTC (permalink / raw)
To: linuxppc-dev
The hand-coded assembler 64-bit copy routines include feature sections
that select one code path or another depending on which CPU we are
executing on. The self-tests for these copy routines end up testing
just one path. This adds a mechanism for selecting any desired code
path at compile time, and makes 2 or 3 versions of each test, each
using a different code path, so as to cover all the possible paths.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
v2: Fix compilation with CONFIG_ALTIVEC=n, update .gitignore
arch/powerpc/lib/copyuser_64.S | 7 +++++
arch/powerpc/lib/copyuser_power7.S | 21 +++++++--------
arch/powerpc/lib/memcpy_64.S | 9 +++++--
arch/powerpc/lib/memcpy_power7.S | 22 ++++++++--------
.../testing/selftests/powerpc/copyloops/.gitignore | 14 +++++++---
tools/testing/selftests/powerpc/copyloops/Makefile | 30 +++++++++++++++++-----
.../selftests/powerpc/copyloops/asm/ppc_asm.h | 21 ++++++++-------
7 files changed, 80 insertions(+), 44 deletions(-)
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 1c5247c..668d816 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -10,6 +10,11 @@
#include <asm/ppc_asm.h>
#include <asm/export.h>
+#ifndef SELFTEST_CASE
+/* 0 == most CPUs, 1 == POWER6, 2 == Cell */
+#define SELFTEST_CASE 0
+#endif
+
#ifdef __BIG_ENDIAN__
#define sLd sld /* Shift towards low-numbered address. */
#define sHd srd /* Shift towards high-numbered address. */
@@ -77,6 +82,7 @@ _GLOBAL(__copy_tofrom_user_base)
* At the time of writing the only CPU that has this combination of bits
* set is Power6.
*/
+test_feature = (SELFTEST_CASE == 1)
BEGIN_FTR_SECTION
nop
FTR_SECTION_ELSE
@@ -86,6 +92,7 @@ ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
.Ldst_aligned:
addi r3,r3,-16
r3_offset = 16
+test_feature = (SELFTEST_CASE == 0)
BEGIN_FTR_SECTION
andi. r0,r4,7
bne .Lsrc_unaligned
diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index da0c568..5e16f36 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -19,6 +19,11 @@
*/
#include <asm/ppc_asm.h>
+#ifndef SELFTEST_CASE
+/* 0 == don't use VMX, 1 == use VMX */
+#define SELFTEST_CASE 0
+#endif
+
#ifdef __BIG_ENDIAN__
#define LVS(VRT,RA,RB) lvsl VRT,RA,RB
#define VPERM(VRT,VRA,VRB,VRC) vperm VRT,VRA,VRB,VRC
@@ -92,7 +97,6 @@
_GLOBAL(__copy_tofrom_user_power7)
-#ifdef CONFIG_ALTIVEC
cmpldi r5,16
cmpldi cr1,r5,4096
@@ -101,15 +105,12 @@ _GLOBAL(__copy_tofrom_user_power7)
std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
blt .Lshort_copy
- bgt cr1,.Lvmx_copy
-#else
- cmpldi r5,16
- std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
- std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
- std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
-
- blt .Lshort_copy
+#ifdef CONFIG_ALTIVEC
+test_feature = SELFTEST_CASE
+BEGIN_FTR_SECTION
+ bgt cr1,.Lvmx_copy
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif
.Lnonvmx_copy:
@@ -290,8 +291,8 @@ err1; stb r0,0(r3)
addi r1,r1,STACKFRAMESIZE
b .Lnonvmx_copy
-#ifdef CONFIG_ALTIVEC
.Lvmx_copy:
+#ifdef CONFIG_ALTIVEC
mflr r0
std r0,16(r1)
stdu r1,-STACKFRAMESIZE(r1)
diff --git a/arch/powerpc/lib/memcpy_64.S b/arch/powerpc/lib/memcpy_64.S
index f4d6088..ea363ad 100644
--- a/arch/powerpc/lib/memcpy_64.S
+++ b/arch/powerpc/lib/memcpy_64.S
@@ -10,6 +10,11 @@
#include <asm/ppc_asm.h>
#include <asm/export.h>
+#ifndef SELFTEST_CASE
+/* For big-endian, 0 == most CPUs, 1 == POWER6, 2 == Cell */
+#define SELFTEST_CASE 0
+#endif
+
.align 7
_GLOBAL_TOC(memcpy)
BEGIN_FTR_SECTION
@@ -19,9 +24,7 @@ BEGIN_FTR_SECTION
std r3,-STACKFRAMESIZE+STK_REG(R31)(r1) /* save destination pointer for return value */
#endif
FTR_SECTION_ELSE
-#ifndef SELFTEST
b memcpy_power7
-#endif
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
#ifdef __LITTLE_ENDIAN__
/* dumb little-endian memcpy that will get replaced at runtime */
@@ -45,6 +48,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_VMX_COPY)
cleared.
At the time of writing the only CPU that has this combination of bits
set is Power6. */
+test_feature = (SELFTEST_CASE == 1)
BEGIN_FTR_SECTION
nop
FTR_SECTION_ELSE
@@ -53,6 +57,7 @@ ALT_FTR_SECTION_END(CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_CP_USE_DCBTZ, \
CPU_FTR_UNALIGNED_LD_STD)
.Ldst_aligned:
addi r3,r3,-16
+test_feature = (SELFTEST_CASE == 0)
BEGIN_FTR_SECTION
andi. r0,r4,7
bne .Lsrc_unaligned
diff --git a/arch/powerpc/lib/memcpy_power7.S b/arch/powerpc/lib/memcpy_power7.S
index 786234f..6d59d3f 100644
--- a/arch/powerpc/lib/memcpy_power7.S
+++ b/arch/powerpc/lib/memcpy_power7.S
@@ -19,7 +19,10 @@
*/
#include <asm/ppc_asm.h>
-_GLOBAL(memcpy_power7)
+#ifndef SELFTEST_CASE
+/* 0 == don't use VMX, 1 == use VMX */
+#define SELFTEST_CASE 0
+#endif
#ifdef __BIG_ENDIAN__
#define LVS(VRT,RA,RB) lvsl VRT,RA,RB
@@ -29,20 +32,17 @@ _GLOBAL(memcpy_power7)
#define VPERM(VRT,VRA,VRB,VRC) vperm VRT,VRB,VRA,VRC
#endif
-#ifdef CONFIG_ALTIVEC
+_GLOBAL(memcpy_power7)
cmpldi r5,16
cmpldi cr1,r5,4096
-
std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
-
blt .Lshort_copy
- bgt cr1,.Lvmx_copy
-#else
- cmpldi r5,16
-
- std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
- blt .Lshort_copy
+#ifdef CONFIG_ALTIVEC
+test_feature = SELFTEST_CASE
+BEGIN_FTR_SECTION
+ bgt cr1, .Lvmx_copy
+END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif
.Lnonvmx_copy:
@@ -223,8 +223,8 @@ _GLOBAL(memcpy_power7)
addi r1,r1,STACKFRAMESIZE
b .Lnonvmx_copy
-#ifdef CONFIG_ALTIVEC
.Lvmx_copy:
+#ifdef CONFIG_ALTIVEC
mflr r0
std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
std r5,-STACKFRAMESIZE+STK_REG(R29)(r1)
diff --git a/tools/testing/selftests/powerpc/copyloops/.gitignore b/tools/testing/selftests/powerpc/copyloops/.gitignore
index 25a192f..bc6c4ba 100644
--- a/tools/testing/selftests/powerpc/copyloops/.gitignore
+++ b/tools/testing/selftests/powerpc/copyloops/.gitignore
@@ -1,4 +1,10 @@
-copyuser_64
-copyuser_power7
-memcpy_64
-memcpy_power7
+copyuser_64_t0
+copyuser_64_t1
+copyuser_64_t2
+copyuser_power7_t0
+copyuser_power7_t1
+memcpy_64_t0
+memcpy_64_t1
+memcpy_64_t2
+memcpy_power7_t0
+memcpy_power7_t1
diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
index 384843e..1b351c3 100644
--- a/tools/testing/selftests/powerpc/copyloops/Makefile
+++ b/tools/testing/selftests/powerpc/copyloops/Makefile
@@ -7,17 +7,35 @@ CFLAGS += -maltivec
# Use our CFLAGS for the implicit .S rule
ASFLAGS = $(CFLAGS)
-TEST_PROGS := copyuser_64 copyuser_power7 memcpy_64 memcpy_power7
+TEST_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
+ copyuser_p7_t0 copyuser_p7_t1 \
+ memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
+ memcpy_p7_t0 memcpy_p7_t1
+
EXTRA_SOURCES := validate.c ../harness.c
all: $(TEST_PROGS)
-copyuser_64: CPPFLAGS += -D COPY_LOOP=test___copy_tofrom_user_base
-copyuser_power7: CPPFLAGS += -D COPY_LOOP=test___copy_tofrom_user_power7
-memcpy_64: CPPFLAGS += -D COPY_LOOP=test_memcpy
-memcpy_power7: CPPFLAGS += -D COPY_LOOP=test_memcpy_power7
+copyuser_64_t%: copyuser_64.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_base \
+ -D SELFTEST_CASE=$(subst copyuser_64_t,,$@) -o $@ $^
+
+copyuser_p7_t%: copyuser_power7.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_power7 \
+ -D SELFTEST_CASE=$(subst copyuser_p7_t,,$@) -o $@ $^
+
+# Strictly speaking, we only need the memcpy_64 test cases for big-endian
+memcpy_64_t%: memcpy_64.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test_memcpy \
+ -D SELFTEST_CASE=$(subst memcpy_64_t,,$@) -o $@ $^
-$(TEST_PROGS): $(EXTRA_SOURCES)
+memcpy_p7_t%: memcpy_power7.S $(EXTRA_SOURCES)
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test_memcpy_power7 \
+ -D SELFTEST_CASE=$(subst memcpy_p7_t,,$@) -o $@ $^
include ../../lib.mk
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
index 50ae7d2..7eb0cd6 100644
--- a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
@@ -40,17 +40,16 @@ FUNC_START(enter_vmx_copy)
FUNC_START(exit_vmx_copy)
blr
-FUNC_START(memcpy_power7)
- blr
-
-FUNC_START(__copy_tofrom_user_power7)
- blr
-
FUNC_START(__copy_tofrom_user_base)
blr
-#define BEGIN_FTR_SECTION
-#define FTR_SECTION_ELSE
-#define ALT_FTR_SECTION_END_IFCLR(x)
-#define ALT_FTR_SECTION_END(x, y)
-#define END_FTR_SECTION_IFCLR(x)
+#define BEGIN_FTR_SECTION .if test_feature
+#define FTR_SECTION_ELSE .else
+#define ALT_FTR_SECTION_END_IFCLR(x) .endif
+#define ALT_FTR_SECTION_END_IFSET(x) .endif
+#define ALT_FTR_SECTION_END(x, y) .endif
+#define END_FTR_SECTION_IFCLR(x) .endif
+#define END_FTR_SECTION_IFSET(x) .endif
+
+/* Default to taking the first of any alternative feature sections */
+test_feature = 1
--
2.7.4
* [PATCH 3/4 v2] selftests/powerpc/64: Test exception cases in copy_tofrom_user
2016-11-03 5:21 ` [PATCH 3/4] selftests/powerpc/64: Test exception cases in copy_tofrom_user Paul Mackerras
@ 2016-11-03 23:40 ` Paul Mackerras
From: Paul Mackerras @ 2016-11-03 23:40 UTC (permalink / raw)
To: linuxppc-dev
This adds a set of test cases to test the behaviour of
copy_tofrom_user when exceptions are encountered accessing the
source or destination. Currently, copy_tofrom_user does not always
copy as many bytes as possible when an exception occurs on a store
to the destination, and that is reflected in failures in these tests.
Based on a test program from Anton Blanchard.
[paulus@ozlabs.org - test all three paths, wrote commit description]
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
---
v2: update .gitignore
.../testing/selftests/powerpc/copyloops/.gitignore | 3 +
tools/testing/selftests/powerpc/copyloops/Makefile | 11 +-
.../selftests/powerpc/copyloops/asm/ppc_asm.h | 22 +---
.../powerpc/copyloops/copy_tofrom_user_reference.S | 24 +++++
.../selftests/powerpc/copyloops/exc_validate.c | 117 +++++++++++++++++++++
tools/testing/selftests/powerpc/copyloops/stubs.S | 19 ++++
6 files changed, 176 insertions(+), 20 deletions(-)
create mode 100644 tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
create mode 100644 tools/testing/selftests/powerpc/copyloops/exc_validate.c
create mode 100644 tools/testing/selftests/powerpc/copyloops/stubs.S
diff --git a/tools/testing/selftests/powerpc/copyloops/.gitignore b/tools/testing/selftests/powerpc/copyloops/.gitignore
index bc6c4ba..03a354d 100644
--- a/tools/testing/selftests/powerpc/copyloops/.gitignore
+++ b/tools/testing/selftests/powerpc/copyloops/.gitignore
@@ -8,3 +8,6 @@ memcpy_64_t1
memcpy_64_t2
memcpy_power7_t0
memcpy_power7_t1
+copyuser_64_exc_t0
+copyuser_64_exc_t1
+copyuser_64_exc_t2
\ No newline at end of file
diff --git a/tools/testing/selftests/powerpc/copyloops/Makefile b/tools/testing/selftests/powerpc/copyloops/Makefile
index 1b351c3..feb7bf1 100644
--- a/tools/testing/selftests/powerpc/copyloops/Makefile
+++ b/tools/testing/selftests/powerpc/copyloops/Makefile
@@ -10,9 +10,10 @@ ASFLAGS = $(CFLAGS)
TEST_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \
copyuser_p7_t0 copyuser_p7_t1 \
memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \
- memcpy_p7_t0 memcpy_p7_t1
+ memcpy_p7_t0 memcpy_p7_t1 \
+ copyuser_64_exc_t0 copyuser_64_exc_t1 copyuser_64_exc_t2
-EXTRA_SOURCES := validate.c ../harness.c
+EXTRA_SOURCES := validate.c ../harness.c stubs.S
all: $(TEST_PROGS)
@@ -37,6 +38,12 @@ memcpy_p7_t%: memcpy_power7.S $(EXTRA_SOURCES)
-D COPY_LOOP=test_memcpy_power7 \
-D SELFTEST_CASE=$(subst memcpy_p7_t,,$@) -o $@ $^
+copyuser_64_exc_t%: copyuser_64.S exc_validate.c ../harness.c \
+ copy_tofrom_user_reference.S stubs.S
+ $(CC) $(CPPFLAGS) $(CFLAGS) \
+ -D COPY_LOOP=test___copy_tofrom_user_base \
+ -D SELFTEST_CASE=$(subst copyuser_64_exc_t,,$@) -o $@ $^
+
include ../../lib.mk
clean:
diff --git a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
index 7eb0cd6..03f10f1 100644
--- a/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
+++ b/tools/testing/selftests/powerpc/copyloops/asm/ppc_asm.h
@@ -1,3 +1,5 @@
+#ifndef __SELFTESTS_POWERPC_PPC_ASM_H
+#define __SELFTESTS_POWERPC_PPC_ASM_H
#include <ppc-asm.h>
#define CONFIG_ALTIVEC
@@ -25,24 +27,6 @@
#define PPC_MTOCRF(A, B) mtocrf A, B
-FUNC_START(enter_vmx_usercopy)
- li r3,1
- blr
-
-FUNC_START(exit_vmx_usercopy)
- li r3,0
- blr
-
-FUNC_START(enter_vmx_copy)
- li r3,1
- blr
-
-FUNC_START(exit_vmx_copy)
- blr
-
-FUNC_START(__copy_tofrom_user_base)
- blr
-
#define BEGIN_FTR_SECTION .if test_feature
#define FTR_SECTION_ELSE .else
#define ALT_FTR_SECTION_END_IFCLR(x) .endif
@@ -53,3 +37,5 @@ FUNC_START(__copy_tofrom_user_base)
/* Default to taking the first of any alternative feature sections */
test_feature = 1
+
+#endif /* __SELFTESTS_POWERPC_PPC_ASM_H */
diff --git a/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S b/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
new file mode 100644
index 0000000..3363b86
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/copy_tofrom_user_reference.S
@@ -0,0 +1,24 @@
+#include <asm/ppc_asm.h>
+
+_GLOBAL(copy_tofrom_user_reference)
+ cmpdi r5,0
+ beq 4f
+
+ mtctr r5
+
+1: lbz r6,0(r4)
+2: stb r6,0(r3)
+ addi r3,r3,1
+ addi r4,r4,1
+ bdnz 1b
+
+3: mfctr r3
+ blr
+
+4: mr r3,r5
+ blr
+
+.section __ex_table,"a"
+ .llong 1b,3b
+ .llong 2b,3b
+.text
diff --git a/tools/testing/selftests/powerpc/copyloops/exc_validate.c b/tools/testing/selftests/powerpc/copyloops/exc_validate.c
new file mode 100644
index 0000000..72e7830
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/exc_validate.c
@@ -0,0 +1,117 @@
+#include <stdlib.h>
+#include <string.h>
+#include <stdio.h>
+#include <signal.h>
+#include <unistd.h>
+#include <sys/mman.h>
+
+extern char __start___ex_table[];
+extern char __stop___ex_table[];
+
+#if defined(__powerpc64__)
+#define UCONTEXT_NIA(UC) (UC)->uc_mcontext.gp_regs[PT_NIP]
+#elif defined(__powerpc__)
+#define UCONTEXT_NIA(UC) (UC)->uc_mcontext.uc_regs->gregs[PT_NIP]
+#else
+#error implement UCONTEXT_NIA
+#endif
+
+static void segv_handler(int signr, siginfo_t *info, void *ptr)
+{
+ ucontext_t *uc = (ucontext_t *)ptr;
+ unsigned long addr = (unsigned long)info->si_addr;
+ unsigned long *ip = &UCONTEXT_NIA(uc);
+ unsigned long *ex_p = (unsigned long *)__start___ex_table;
+
+ while (ex_p < (unsigned long *)__stop___ex_table) {
+ unsigned long insn, fixup;
+
+ insn = *ex_p++;
+ fixup = *ex_p++;
+
+ if (insn == *ip) {
+ *ip = fixup;
+ return;
+ }
+ }
+
+ printf("No exception table match for NIA %lx ADDR %lx\n", *ip, addr);
+ abort();
+}
+
+static void setup_segv_handler(void)
+{
+ struct sigaction action;
+
+ memset(&action, 0, sizeof(action));
+ action.sa_sigaction = segv_handler;
+ action.sa_flags = SA_SIGINFO;
+ sigaction(SIGSEGV, &action, NULL);
+}
+
+unsigned long COPY_LOOP(void *to, const void *from, unsigned long size);
+unsigned long test_copy_tofrom_user_reference(void *to, const void *from, unsigned long size);
+
+static int total_passed;
+static int total_failed;
+
+static void do_one_test(char *dstp, char *srcp, unsigned long len)
+{
+ unsigned long got, expected;
+
+ got = COPY_LOOP(dstp, srcp, len);
+ expected = test_copy_tofrom_user_reference(dstp, srcp, len);
+
+ if (got != expected) {
+ total_failed++;
+ printf("FAIL from=%p to=%p len=%ld returned %ld, expected %ld\n",
+ srcp, dstp, len, got, expected);
+ //abort();
+ } else
+ total_passed++;
+}
+
+//#define MAX_LEN 512
+#define MAX_LEN 16
+
+int main(void)
+{
+ int page_size;
+ static char *p, *q;
+ unsigned long src, dst, len;
+
+ page_size = getpagesize();
+ p = mmap(NULL, page_size * 2, PROT_READ|PROT_WRITE,
+ MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
+
+ if (p == MAP_FAILED) {
+ perror("mmap");
+ exit(1);
+ }
+
+ memset(p, 0, page_size);
+
+ setup_segv_handler();
+
+ if (mprotect(p + page_size, page_size, PROT_NONE)) {
+ perror("mprotect");
+ exit(1);
+ }
+
+ q = p + page_size - MAX_LEN;
+
+ for (src = 0; src < MAX_LEN; src++) {
+ for (dst = 0; dst < MAX_LEN; dst++) {
+ for (len = 0; len < MAX_LEN+1; len++) {
+ // printf("from=%p to=%p len=%ld\n", q+dst, q+src, len);
+ do_one_test(q+dst, q+src, len);
+ }
+ }
+ }
+
+ printf("Totals:\n");
+ printf(" Pass: %d\n", total_passed);
+ printf(" Fail: %d\n", total_failed);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/powerpc/copyloops/stubs.S b/tools/testing/selftests/powerpc/copyloops/stubs.S
new file mode 100644
index 0000000..830e2a6
--- /dev/null
+++ b/tools/testing/selftests/powerpc/copyloops/stubs.S
@@ -0,0 +1,19 @@
+#include <asm/ppc_asm.h>
+
+FUNC_START(enter_vmx_usercopy)
+ li r3,1
+ blr
+
+FUNC_START(exit_vmx_usercopy)
+ li r3,0
+ blr
+
+FUNC_START(enter_vmx_copy)
+ li r3,1
+ blr
+
+FUNC_START(exit_vmx_copy)
+ blr
+
+FUNC_START(__copy_tofrom_user_base)
+ blr
--
2.7.4