From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755588AbcBHUTR (ORCPT );
	Mon, 8 Feb 2016 15:19:17 -0500
Received: from ns.horizon.com ([71.41.210.147]:46844 "HELO ns.horizon.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with SMTP
	id S1755006AbcBHUTP (ORCPT );
	Mon, 8 Feb 2016 15:19:15 -0500
X-Greylist: delayed 400 seconds by postgrey-1.27 at vger.kernel.org;
	Mon, 08 Feb 2016 15:19:15 EST
Date: 8 Feb 2016 15:12:34 -0500
Message-ID: <20160208201234.8569.qmail@ns.horizon.com>
From: "George Spelvin"
To: David.Laight@ACULAB.COM, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org, tom@herbertland.com
Subject: Re: [PATCH v3 net-next] net: Implement fast csum_partial for x86_64
Cc: mingo@kernel.org
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

David Laight wrote:
> I'd need convincing that unrolling the loop like that gives any significant gain.
> You have a dependency chain on the carry flag so have delays between the 'adcq'
> instructions (these may be more significant than the memory reads from l1 cache).

If the carry chain is a bottleneck, on Broadwell+ (feature flag
X86_FEATURE_ADX), there are the ADCX and ADOX instructions, which use
separate flag bits for their carry chains (CF and OF, respectively) and
so can be interleaved.

I don't have such a machine to test on, but if someone who does would
like to do a little benchmarking, that would be an interesting data
point.

Unfortunately, that means yet another version of the main loop, but if
there's a significant benefit...
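To illustrate why two independent carry chains help, here is a portable
C sketch of the same idea: two accumulators that carry independently and
are merged and folded at the end. This is not the actual ADCX/ADOX code
(that needs inline asm and an ADX-capable CPU), and csum64_two_chains is
a hypothetical name for illustration, not the kernel's csum_partial:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* One's-complement sum of n 64-bit words, accumulated in two
 * independent chains.  ADCX/ADOX achieve the same independence in
 * hardware by carrying through CF and OF separately, so the two
 * adds per iteration can execute in parallel instead of serializing
 * on a single carry flag as back-to-back adcq does. */
static uint64_t csum64_two_chains(const uint64_t *buf, size_t n)
{
	unsigned __int128 a = 0, b = 0;	/* wide accumulators capture carries */
	uint64_t lo, hi;
	size_t i;

	for (i = 0; i + 1 < n; i += 2) {
		a += buf[i];		/* chain 1 (like ADCX, carries via CF) */
		b += buf[i + 1];	/* chain 2 (like ADOX, carries via OF) */
	}
	if (i < n)
		a += buf[i];		/* odd trailing word */

	a += b;				/* merge the two chains */
	lo = (uint64_t)a;
	hi = (uint64_t)(a >> 64);	/* accumulated carry-outs */
	lo += hi;			/* fold carries back in ... */
	if (lo < hi)
		lo++;			/* ... with end-around carry */
	return lo;
}
```

Because one's-complement addition is associative and commutative, the
result matches a single serial adcq-style chain; only the dependency
structure changes.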