linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v2 0/2] powerpc32: Optimise csum_partial()
@ 2015-08-05 13:29 Christophe Leroy
  2015-08-05 13:29 ` [PATCH v2 1/2] powerpc32: optimise a few instructions in csum_partial() Christophe Leroy
  2015-08-05 13:29 ` [PATCH v2 2/2] powerpc32: optimise csum_partial() loop Christophe Leroy
  0 siblings, 2 replies; 11+ messages in thread
From: Christophe Leroy @ 2015-08-05 13:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

The purpose of this patchset is to optimise csum_partial() on powerpc32.
In the first part, we remove some unnecessary instructions.
In the second part, we partially unroll the main loop.
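
For reference, the checksum being accumulated is the usual one's-complement
sum used by IP. Below is a rough C model of it (illustrative only: the name
csum_partial_model is made up, and the real assembly works on 32-bit words
and the carry flag rather than 16-bit words):

/* Illustrative model: sum 16-bit big-endian words with end-around carry,
 * counting a trailing odd byte as the high byte of a last word.  The real
 * csum_partial() returns an equivalent 32-bit partial sum. */
static unsigned int csum_partial_model(const unsigned char *buff, int len,
				       unsigned int sum)
{
	unsigned long long acc = sum;

	for (; len > 1; len -= 2, buff += 2)
		acc += (buff[0] << 8) | buff[1];
	if (len == 1)
		acc += buff[0] << 8;

	while (acc >> 32)		/* fold carries back in */
		acc = (acc & 0xffffffffULL) + (acc >> 32);

	return (unsigned int)acc;
}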

Christophe Leroy (2):
  Optimise a few instructions in csum_partial()
  Optimise csum_partial() loop

 arch/powerpc/lib/checksum_32.S | 53 +++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 21 deletions(-)

-- 
2.1.0

* [PATCH v2 1/2] powerpc32: optimise a few instructions in csum_partial()
  2015-08-05 13:29 [PATCH v2 0/2] powerpc32: Optimise csum_partial() Christophe Leroy
@ 2015-08-05 13:29 ` Christophe Leroy
  2015-08-05 13:29 ` [PATCH v2 2/2] powerpc32: optimise csum_partial() loop Christophe Leroy
  1 sibling, 0 replies; 11+ messages in thread
From: Christophe Leroy @ 2015-08-05 13:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

r5 already contains the value to be updated, so let's use r5 all the
way through. It makes the code more readable.

To avoid confusion, it is better to use adde instead of addc.

The first addition is useless: its only purpose is to clear carry.
As r4 is a signed int that is always positive, this can be done by
using srawi instead of srwi.

Let's also remove the comment about bdnz having no overhead, as it
is not correct on all powerpc, at least on MPC8xx.

In the last part, the remaining number of bytes to be processed is
between 0 and 3. Therefore, we can base that part on the values of
bit 30 and bit 31 of r4 instead of ANDing r4 with 3 and then
proceeding with comparisons and subtractions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/lib/checksum_32.S | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/lib/checksum_32.S b/arch/powerpc/lib/checksum_32.S
index 7b95a68..2e4879c 100644
--- a/arch/powerpc/lib/checksum_32.S
+++ b/arch/powerpc/lib/checksum_32.S
@@ -64,35 +64,32 @@ _GLOBAL(csum_tcpudp_magic)
  * csum_partial(buff, len, sum)
  */
 _GLOBAL(csum_partial)
-	addic	r0,r5,0
 	subi	r3,r3,4
-	srwi.	r6,r4,2
+	srawi.	r6,r4,2		/* Divide len by 4 and also clear carry */
 	beq	3f		/* if we're doing < 4 bytes */
-	andi.	r5,r3,2		/* Align buffer to longword boundary */
+	andi.	r0,r3,2		/* Align buffer to longword boundary */
 	beq+	1f
-	lhz	r5,4(r3)	/* do 2 bytes to get aligned */
-	addi	r3,r3,2
+	lhz	r0,4(r3)	/* do 2 bytes to get aligned */
 	subi	r4,r4,2
-	addc	r0,r0,r5
+	addi	r3,r3,2
 	srwi.	r6,r4,2		/* # words to do */
+	adde	r5,r5,r0
 	beq	3f
 1:	mtctr	r6
-2:	lwzu	r5,4(r3)	/* the bdnz has zero overhead, so it should */
-	adde	r0,r0,r5	/* be unnecessary to unroll this loop */
+2:	lwzu	r0,4(r3)
+	adde	r5,r5,r0
 	bdnz	2b
-	andi.	r4,r4,3
-3:	cmpwi	0,r4,2
-	blt+	4f
-	lhz	r5,4(r3)
+3:	andi.	r0,r4,2
+	beq+	4f
+	lhz	r0,4(r3)
 	addi	r3,r3,2
-	subi	r4,r4,2
-	adde	r0,r0,r5
-4:	cmpwi	0,r4,1
-	bne+	5f
-	lbz	r5,4(r3)
-	slwi	r5,r5,8		/* Upper byte of word */
-	adde	r0,r0,r5
-5:	addze	r3,r0		/* add in final carry */
+	adde	r5,r5,r0
+4:	andi.	r0,r4,1
+	beq+	5f
+	lbz	r0,4(r3)
+	slwi	r0,r0,8		/* Upper byte of word */
+	adde	r5,r5,r0
+5:	addze	r3,r5		/* add in final carry */
 	blr
 
 /*
-- 
2.1.0

* [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-05 13:29 [PATCH v2 0/2] powerpc32: Optimise csum_partial() Christophe Leroy
  2015-08-05 13:29 ` [PATCH v2 1/2] powerpc32: optimise a few instructions in csum_partial() Christophe Leroy
@ 2015-08-05 13:29 ` Christophe Leroy
  2015-08-06  0:30   ` Segher Boessenkool
  1 sibling, 1 reply; 11+ messages in thread
From: Christophe Leroy @ 2015-08-05 13:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

On the 8xx, load latency is 2 cycles and taking branches also takes
2 cycles. So let's unroll the loop.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v2: Only use lwzu for the last load as lwzu has undocumented
 	additional latency

 arch/powerpc/lib/checksum_32.S | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/checksum_32.S b/arch/powerpc/lib/checksum_32.S
index 2e4879c..9c48ee0 100644
--- a/arch/powerpc/lib/checksum_32.S
+++ b/arch/powerpc/lib/checksum_32.S
@@ -75,10 +75,24 @@ _GLOBAL(csum_partial)
 	srwi.	r6,r4,2		/* # words to do */
 	adde	r5,r5,r0
 	beq	3f
-1:	mtctr	r6
+1:	andi.	r6,r6,3		/* Prepare to handle words 4 by 4 */
+	beq	21f
+	mtctr	r6
 2:	lwzu	r0,4(r3)
 	adde	r5,r5,r0
 	bdnz	2b
+21:	srwi.	r6,r4,4		/* # blocks of 4 words to do */
+	beq	3f
+	mtctr	r6
+22:	lwz	r0,4(r3)
+	lwz	r6,8(r3)
+	lwz	r7,12(r3)
+	lwzu	r8,16(r3)
+	adde	r5,r5,r0
+	adde	r5,r5,r6
+	adde	r5,r5,r7
+	adde	r5,r5,r8
+	bdnz	22b
 3:	andi.	r0,r4,2
 	beq+	4f
 	lhz	r0,4(r3)
-- 
2.1.0

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-05 13:29 ` [PATCH v2 2/2] powerpc32: optimise csum_partial() loop Christophe Leroy
@ 2015-08-06  0:30   ` Segher Boessenkool
  2015-08-06  2:31     ` Scott Wood
  0 siblings, 1 reply; 11+ messages in thread
From: Segher Boessenkool @ 2015-08-06  0:30 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	scottwood, linuxppc-dev, linux-kernel

On Wed, Aug 05, 2015 at 03:29:35PM +0200, Christophe Leroy wrote:
> On the 8xx, load latency is 2 cycles and taking branches also takes
> 2 cycles. So let's unroll the loop.

This is not true for most other 32-bit PowerPC; this patch makes
performance worse on e.g. 6xx/7xx/7xxx.  Let's not!


Segher

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-06  0:30   ` Segher Boessenkool
@ 2015-08-06  2:31     ` Scott Wood
  2015-08-06  4:39       ` Segher Boessenkool
  0 siblings, 1 reply; 11+ messages in thread
From: Scott Wood @ 2015-08-06  2:31 UTC (permalink / raw)
  To: Segher Boessenkool
  Cc: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, linuxppc-dev, linux-kernel

On Wed, 2015-08-05 at 19:30 -0500, Segher Boessenkool wrote:
> On Wed, Aug 05, 2015 at 03:29:35PM +0200, Christophe Leroy wrote:
> > On the 8xx, load latency is 2 cycles and taking branches also takes
> > 2 cycles. So let's unroll the loop.
> 
> This is not true for most other 32-bit PowerPC; this patch makes
> performance worse on e.g. 6xx/7xx/7xxx.  Let's not!

Chips with a load latency greater than 2 cycles should also benefit from the 
unrolling.  Have you benchmarked this somewhere and seen it reduce 
performance?  Do you know of any 32-bit PPC chips with a load latency less 
than 2 cycles?
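
(A rough C analogue of the rationale, ignoring the carry chain the real
code keeps in CA; the function and variable names below are illustrative
only, not anything from the patch itself:)

/* With a 2-cycle load latency the rolled loop stalls on every add,
 * because each add uses the load issued right before it.  In the
 * unrolled body the four loads are issued back to back, so the later
 * adds are more likely to find their operands already available. */
static unsigned int sum_words(const unsigned int *buf, int n)
{
	unsigned int sum = 0;
	int i = 0;

	for (; i + 4 <= n; i += 4) {
		unsigned int a = buf[i],     b = buf[i + 1];
		unsigned int c = buf[i + 2], d = buf[i + 3];

		sum += a;
		sum += b;
		sum += c;
		sum += d;
	}
	for (; i < n; i++)	/* rolled tail: add depends on the just-issued load */
		sum += buf[i];

	return sum;
}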

-Scott

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-06  2:31     ` Scott Wood
@ 2015-08-06  4:39       ` Segher Boessenkool
  2015-08-06 22:45         ` Scott Wood
  0 siblings, 1 reply; 11+ messages in thread
From: Segher Boessenkool @ 2015-08-06  4:39 UTC (permalink / raw)
  To: Scott Wood
  Cc: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, linuxppc-dev, linux-kernel

On Wed, Aug 05, 2015 at 09:31:41PM -0500, Scott Wood wrote:
> On Wed, 2015-08-05 at 19:30 -0500, Segher Boessenkool wrote:
> > On Wed, Aug 05, 2015 at 03:29:35PM +0200, Christophe Leroy wrote:
> > > On the 8xx, load latency is 2 cycles and taking branches also takes
> > > 2 cycles. So let's unroll the loop.
> > 
> > This is not true for most other 32-bit PowerPC; this patch makes
> > performance worse on e.g. 6xx/7xx/7xxx.  Let's not!
> 
> Chips with a load latency greater than 2 cycles should also benefit from the 
> unrolling.  Have you benchmarked this somewhere and seen it reduce 
> performance?  Do you know of any 32-bit PPC chips with a load latency less 
> than 2 cycles?

The original loop was already optimal, as the comment said.  The new
code adds extra instructions and a mispredicted branch.  You also
might get less overlap between the loads and adde (I didn't check
if there is any originally): those instructions are no longer
interleaved.

I think it is a stupid idea to optimise code for all 32-bit PowerPC
CPUs based on solely what is best for a particularly simple, slow
implementation; and that is what this patch is doing.


Segher

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-06  4:39       ` Segher Boessenkool
@ 2015-08-06 22:45         ` Scott Wood
  2015-08-06 23:25           ` Segher Boessenkool
  0 siblings, 1 reply; 11+ messages in thread
From: Scott Wood @ 2015-08-06 22:45 UTC (permalink / raw)
  To: Segher Boessenkool
  Cc: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, linuxppc-dev, linux-kernel

On Wed, 2015-08-05 at 23:39 -0500, Segher Boessenkool wrote:
> On Wed, Aug 05, 2015 at 09:31:41PM -0500, Scott Wood wrote:
> > On Wed, 2015-08-05 at 19:30 -0500, Segher Boessenkool wrote:
> > > On Wed, Aug 05, 2015 at 03:29:35PM +0200, Christophe Leroy wrote:
> > > > On the 8xx, load latency is 2 cycles and taking branches also takes
> > > > 2 cycles. So let's unroll the loop.
> > > 
> > > This is not true for most other 32-bit PowerPC; this patch makes
> > > performance worse on e.g. 6xx/7xx/7xxx.  Let's not!
> > 
> > Chips with a load latency greater than 2 cycles should also benefit from 
> > the 
> > unrolling.  Have you benchmarked this somewhere and seen it reduce 
> > performance?  Do you know of any 32-bit PPC chips with a load latency 
> > less 
> > than 2 cycles?
> 
> The original loop was already optimal, as the comment said.

The comment says that bdnz has zero overhead.  That doesn't mean the adde 
won't stall waiting for the load result.

> The new code adds extra instructions and a mispredicted branch.

Outside the main loop.

>   You also might get less overlap between the loads and adde (I didn't check
> if there is any originally): those instructions are no longer
> interleaved.
>
> I think it is a stupid idea to optimise code for all 32-bit PowerPC
> CPUs based on solely what is best for a particularly simple, slow
> implementation; and that is what this patch is doing.

The simple and slow implementation is the one that needs optimizations the 
most.

If this makes performance non-negligibly worse on other 32-bit chips, and is 
an important improvement on 8xx, then we can use an ifdef since 8xx already 
requires its own kernel build.  I'd prefer to see a benchmark showing that it 
actually does make things worse on those chips, though.

-Scott

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-06 22:45         ` Scott Wood
@ 2015-08-06 23:25           ` Segher Boessenkool
  2015-08-17 10:56             ` leroy christophe
  0 siblings, 1 reply; 11+ messages in thread
From: Segher Boessenkool @ 2015-08-06 23:25 UTC (permalink / raw)
  To: Scott Wood
  Cc: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, linuxppc-dev, linux-kernel

On Thu, Aug 06, 2015 at 05:45:45PM -0500, Scott Wood wrote:
> > The original loop was already optimal, as the comment said.
> 
> The comment says that bdnz has zero overhead.  That doesn't mean the adde 
> won't stall waiting for the load result.

adde is execution serialising on those cores; it *always* stalls,
that is, it won't run until it is next to complete.

> > The new code adds extra instructions and a mispredicted branch.
> 
> Outside the main loop.

Sure, I never said it was super-bad or anything.

> >   You also might get less overlap between the loads and adde (I didn't check
> > if there is any originally): those instructions are no longer
> > interleaved.
> >
> > I think it is a stupid idea to optimise code for all 32-bit PowerPC
> > CPUs based on solely what is best for a particularly simple, slow
> > implementation; and that is what this patch is doing.
> 
> The simple and slow implementation is the one that needs optimizations the 
> most.

And, on the other hand, optimising for atypical (mostly) in-order
single-issue chips without branch folding hurts performance on
other chips the most.  Well, dual-issue in-order might be worse :-P

> If this makes performance non-negligibly worse on other 32-bit chips, and is 
> an important improvement on 8xx, then we can use an ifdef since 8xx already 
> requires its own kernel build.  I'd prefer to see a benchmark showing that it 
> actually does make things worse on those chips, though.

And I'd like to see a benchmark that shows it *does not* hurt performance
on most chips, and does improve things on 8xx, and by how much.  But it
isn't *me* who has to show that, it is not my patch.

If these csum routines actually matter for performance that much, there
really *should* be chip-specific implementations.


Segher

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-06 23:25           ` Segher Boessenkool
@ 2015-08-17 10:56             ` leroy christophe
  2015-08-17 11:00               ` leroy christophe
  0 siblings, 1 reply; 11+ messages in thread
From: leroy christophe @ 2015-08-17 10:56 UTC (permalink / raw)
  To: Segher Boessenkool, Scott Wood
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	linuxppc-dev, linux-kernel



Le 07/08/2015 01:25, Segher Boessenkool a écrit :
> On Thu, Aug 06, 2015 at 05:45:45PM -0500, Scott Wood wrote:
>> If this makes performance non-negligibly worse on other 32-bit chips, and is
>> an important improvement on 8xx, then we can use an ifdef since 8xx already
>> requires its own kernel build.  I'd prefer to see a benchmark showing that it
>> actually does make things worse on those chips, though.
> And I'd like to see a benchmark that shows it *does not* hurt performance
> on most chips, and does improve things on 8xx, and by how much.  But it
> isn't *me* who has to show that, it is not my patch.
Ok, following this discussion I made some additional measurements and it 
looks like:
* There is almost no change on the 885
* There is a non-negligible degradation on the 8323 (19.5 tb ticks 
instead of 15.3)

Thanks for pointing this out, I think my patch is therefore not good.

Christophe

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-17 10:56             ` leroy christophe
@ 2015-08-17 11:00               ` leroy christophe
  2015-08-17 13:05                 ` leroy christophe
  0 siblings, 1 reply; 11+ messages in thread
From: leroy christophe @ 2015-08-17 11:00 UTC (permalink / raw)
  To: Segher Boessenkool, Scott Wood; +Cc: Paul Mackerras, linuxppc-dev, linux-kernel



Le 17/08/2015 12:56, leroy christophe a écrit :
>
>
> Le 07/08/2015 01:25, Segher Boessenkool a écrit :
>> On Thu, Aug 06, 2015 at 05:45:45PM -0500, Scott Wood wrote:
>>> If this makes performance non-negligibly worse on other 32-bit 
>>> chips, and is
>>> an important improvement on 8xx, then we can use an ifdef since 8xx 
>>> already
>>> requires its own kernel build.  I'd prefer to see a benchmark 
>>> showing that it
>>> actually does make things worse on those chips, though.
>> And I'd like to see a benchmark that shows it *does not* hurt 
>> performance
>> on most chips, and does improve things on 8xx, and by how much. But it
>> isn't *me* who has to show that, it is not my patch.
> Ok, following this discussion I made some additional measurements and 
> it looks like:
> * There is almost no change on the 885
> * There is a non-negligible degradation on the 8323 (19.5 tb ticks 
> instead of 15.3)
>
> Thanks for pointing this out, I think my patch is therefore not good.
>
Oops, I was talking about my other patch, the one that was meant to 
optimise ip_fast_csum().
I still have to measure csum_partial().

Christophe

* Re: [PATCH v2 2/2] powerpc32: optimise csum_partial() loop
  2015-08-17 11:00               ` leroy christophe
@ 2015-08-17 13:05                 ` leroy christophe
  0 siblings, 0 replies; 11+ messages in thread
From: leroy christophe @ 2015-08-17 13:05 UTC (permalink / raw)
  To: Segher Boessenkool, Scott Wood; +Cc: linuxppc-dev, Paul Mackerras, linux-kernel



Le 17/08/2015 13:00, leroy christophe a écrit :
>
>
> Le 17/08/2015 12:56, leroy christophe a écrit :
>>
>>
>> Le 07/08/2015 01:25, Segher Boessenkool a écrit :
>>> On Thu, Aug 06, 2015 at 05:45:45PM -0500, Scott Wood wrote:
>>>> If this makes performance non-negligibly worse on other 32-bit 
>>>> chips, and is
>>>> an important improvement on 8xx, then we can use an ifdef since 8xx 
>>>> already
>>>> requires its own kernel build.  I'd prefer to see a benchmark 
>>>> showing that it
>>>> actually does make things worse on those chips, though.
>>> And I'd like to see a benchmark that shows it *does not* hurt 
>>> performance
>>> on most chips, and does improve things on 8xx, and by how much. But it
>>> isn't *me* who has to show that, it is not my patch.
>> Ok, following this discussion I made some additional measurements and 
>> it looks like:
>> * There is almost no change on the 885
>> * There is a non-negligible degradation on the 8323 (19.5 tb ticks 
>> instead of 15.3)
>>
>> Thanks for pointing this out, I think my patch is therefore not good.
>>
> Oops, I was talking about my other patch, the one that was meant to 
> optimise ip_fast_csum().
> I still have to measure csum_partial().
>
Now, I have the results for csum_partial(). The measurement is done with 
mftbl() before and after calling the function, with IRQs off to get a 
stable measure. The workload is a transfer of the vmlinux file, done 3 
times via scp towards the target; we get approximately 50000 calls to 
csum_partial().
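
A minimal sketch of that kind of measurement wrapper (assuming the kernel's
get_tbl() and local_irq_save() helpers; the function name is illustrative
and this is not the exact test code used here):

#include <linux/irqflags.h>
#include <net/checksum.h>	/* csum_partial() */
#include <asm/time.h>		/* get_tbl(), i.e. mftbl */

/* Illustrative only: time one csum_partial() call in timebase ticks. */
static unsigned long time_csum_partial(const void *buff, int len, __wsum sum)
{
	unsigned long flags, t0, t1;

	local_irq_save(flags);	/* IRQs off so the measure stays stable */
	t0 = get_tbl();
	csum_partial(buff, len, sum);
	t1 = get_tbl();
	local_irq_restore(flags);

	return t1 - t0;
}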

On MPC885:
1/ Without the patchset, mean time spent in csum_partial() is 167 tb ticks.
2/ With the patchset, mean time is 150 tb ticks

On MPC8323:
1/ Without the patchset, mean time is 287 tb ticks
2/ With the patchset, mean time is 256 tb ticks

The improvement is approximately 10% in both cases.

So, unlike my patch on ip_fast_csum(), this one is worth it.

Christophe
