From: David Laight <David.Laight@ACULAB.COM>
To: 'Eric Dumazet' <eric.dumazet@gmail.com>,
	"David S . Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Cc: netdev <netdev@vger.kernel.org>,
	Eric Dumazet <edumazet@google.com>,
	"x86@kernel.org" <x86@kernel.org>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>
Subject: RE: [PATCH v2] x86/csum: rewrite csum_partial()
Date: Wed, 1 Dec 2021 10:51:20 +0000
Message-ID: <cd7a346d37174ae7b90d149d5b8f3d4e@AcuMS.aculab.com>
In-Reply-To: <20211112161950.528886-1-eric.dumazet@gmail.com>

From: Eric Dumazet
> Sent: 12 November 2021 16:20
> 
> With more NICs supporting CHECKSUM_COMPLETE, and IPv6 being widely used,
> csum_partial() is heavily used with small numbers of bytes,
> and is consuming many cycles.
> 
> IPv6 header size for instance is 40 bytes.
> 
> Another thing to consider is that NET_IP_ALIGN is 0 on x86,
> meaning that network headers are not word-aligned, unless
> the driver forces this.
> 
> This means that csum_partial() fetches one u16
> to 'align the buffer', then performs three u64 additions
> with carry in a loop, then a remaining u32, then a remaining u16.
> 
> With this new version, we loop only over the 64-byte blocks,
> then the remainder is bisected.
> 

I missed this going through; a couple of comments.
I've removed all the old lines from the patch to make it readable.

> +__wsum csum_partial(const void *buff, int len, __wsum sum)
>  {
> +	u64 temp64 = (__force u64)sum;
> +	unsigned odd, result;
> 
>  	odd = 1 & (unsigned long) buff;
>  	if (unlikely(odd)) {
> +		if (unlikely(len == 0))
> +			return sum;
> +		temp64 += (*(unsigned char *)buff << 8);
>  		len--;
>  		buff++;
>  	}

Do you need to special-case an odd buffer address?
You are doing misaligned reads for other (more likely)
misaligned addresses, so optimising for odd buffer addresses
is rather pointless.
If misaligned reads do cost an extra clock then it might
be worth detecting the more likely '4n+2' alignment of a receive
buffer and aligning that to 8n.
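Something along these lines (untested, purely illustrative, reusing the
patch's 'buff', 'len' and 'temp64' names), run after the existing odd-byte
fixup, would get the 64-byte loop onto 8-byte-aligned reads:

	/*
	 * Hypothetical sketch only, not part of the patch.
	 * Assumes buff is already at least 2-byte aligned (i.e. any odd
	 * leading byte has been consumed).  Eating the leading u16/u32
	 * keeps the data on even 16-bit word offsets, so the
	 * ones'-complement sum is unchanged, and temp64 still only holds
	 * the 32-bit input sum so these adds cannot carry out of 64 bits.
	 */
	if (unlikely((unsigned long)buff & 7) && len >= 8) {
		if ((unsigned long)buff & 2) {	/* the common 4n+2 case */
			temp64 += *(const u16 *)buff;
			buff += 2;
			len -= 2;
		}
		if ((unsigned long)buff & 4) {
			temp64 += *(const u32 *)buff;
			buff += 4;
			len -= 4;
		}
	}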

> 
> +	while (unlikely(len >= 64)) {
> +		asm("addq 0*8(%[src]),%[res]\n\t"
> +		    "adcq 1*8(%[src]),%[res]\n\t"
> +		    "adcq 2*8(%[src]),%[res]\n\t"
> +		    "adcq 3*8(%[src]),%[res]\n\t"
> +		    "adcq 4*8(%[src]),%[res]\n\t"
> +		    "adcq 5*8(%[src]),%[res]\n\t"
> +		    "adcq 6*8(%[src]),%[res]\n\t"
> +		    "adcq 7*8(%[src]),%[res]\n\t"
> +		    "adcq $0,%[res]"
> +		    : [res] "+r" (temp64)
> +		    : [src] "r" (buff)
> +		    : "memory");
> +		buff += 64;
> +		len -= 64;
> +	}
> +
> +	if (len & 32) {
> +		asm("addq 0*8(%[src]),%[res]\n\t"
> +		    "adcq 1*8(%[src]),%[res]\n\t"
> +		    "adcq 2*8(%[src]),%[res]\n\t"
> +		    "adcq 3*8(%[src]),%[res]\n\t"
> +		    "adcq $0,%[res]"
> +			: [res] "+r" (temp64)
> +			: [src] "r" (buff)
> +			: "memory");
> +		buff += 32;
> +	}
> +	if (len & 16) {
> +		asm("addq 0*8(%[src]),%[res]\n\t"
> +		    "adcq 1*8(%[src]),%[res]\n\t"
> +		    "adcq $0,%[res]"
> +			: [res] "+r" (temp64)
> +			: [src] "r" (buff)
> +			: "memory");
> +		buff += 16;
> +	}
> +	if (len & 8) {
> +		asm("addq 0*8(%[src]),%[res]\n\t"
> +		    "adcq $0,%[res]"
> +			: [res] "+r" (temp64)
> +			: [src] "r" (buff)
> +			: "memory");
> +		buff += 8;
> +	}

I suspect it is worth doing:
	switch (len & 24) {
	}
and separately coding the 24-byte case
to reduce the number of 'adc $0,%reg' instructions.
Although writing the conditions by hand might be needed to get
the likely code first (whichever length that is).
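Something like this (untested, same asm style as the patch), replacing
the 'len & 16' and 'len & 8' blocks above:

	/* One trailing 'adcq $0' per residual length, not one per branch. */
	switch (len & 24) {
	case 24:
		asm("addq 0*8(%[src]),%[res]\n\t"
		    "adcq 1*8(%[src]),%[res]\n\t"
		    "adcq 2*8(%[src]),%[res]\n\t"
		    "adcq $0,%[res]"
		    : [res] "+r" (temp64)
		    : [src] "r" (buff)
		    : "memory");
		buff += 24;
		break;
	case 16:
		asm("addq 0*8(%[src]),%[res]\n\t"
		    "adcq 1*8(%[src]),%[res]\n\t"
		    "adcq $0,%[res]"
		    : [res] "+r" (temp64)
		    : [src] "r" (buff)
		    : "memory");
		buff += 16;
		break;
	case 8:
		asm("addq 0*8(%[src]),%[res]\n\t"
		    "adcq $0,%[res]"
		    : [res] "+r" (temp64)
		    : [src] "r" (buff)
		    : "memory");
		buff += 8;
		break;
	}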

> +	if (len & 7) {
> +		unsigned int shift = (8 - (len & 7)) * 8;
> +		unsigned long trail;
> 
> +		trail = (load_unaligned_zeropad(buff) << shift) >> shift;
> 
> +		asm("addq %[trail],%[res]\n\t"
> +		    "adcq $0,%[res]"
> +			: [res] "+r" (temp64)
> +			: [trail] "r" (trail));
>  	}

If you do the 'len & 7' test at the top, the 56-bit 'trail' value
can just be added to the 32-bit 'sum' input.
Just:
		temp64 += *(u64 *)(buff + len - 8) << shift;
would do - except it can fall off the beginning of a page :-(
Maybe:
		temp64 += load_unaligned_zeropad(buff + (len & ~7)) & (~0ull >> shift);
Generating the mask reduces the register dependency chain length.

Although I remember trying to do something like this and finding
it was actually slower than the old code.
The problem is likely the long register dependency chain generating 'shift',
compared to the latency of the multiple memory reads, which you only pay once.
So potentially a 'switch (len & 6)' followed by a final test for odd
length may in fact be better - who knows.
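
Put together, that 'trailing bytes first' variant might look something
like this (untested sketch, not a drop-in replacement): it would sit
before the 64-byte loop and replace the patch's 'len & 7' block.

	/*
	 * Hypothetical sketch: fold in the 1..7 trailing bytes while
	 * temp64 still only holds the 32-bit input sum, so the (at most)
	 * 56-bit trail value cannot carry out of 64 bits and no adcq is
	 * needed.  Masking rather than a shift-left/shift-right pair
	 * keeps the register dependency chain shorter.
	 */
	if (len & 7) {
		unsigned int shift = (8 - (len & 7)) * 8;

		temp64 += load_unaligned_zeropad(buff + (len & ~7)) &
			  (~0ull >> shift);
		len &= ~7;
	}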

> +	result = add32_with_carry(temp64 >> 32, temp64 & 0xffffffff);
>  	if (unlikely(odd)) {
>  		result = from32to16(result);
>  		result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
>  	}
> +	return (__force __wsum)result;
>  }

	David


