From: David Laight
To: 'Robin Murphy', 'Will Deacon'
Cc: Ard Biesheuvel, netdev@vger.kernel.org, ilias.apalodimas@linaro.org, Zhangshaokun, huanglingyan (A), linux-arm-kernel@lists.infradead.org, steve.capper@arm.com
Subject: RE: [PATCH] arm64: do_csum: implement accelerated scalar version
Date: Wed, 15 May 2019 11:13:50 +0000
Message-ID: <9f72aecd99e74c1a939df6562ed9c18c@AcuMS.aculab.com>
In-Reply-To: <3d4fdbb5-7c7f-9331-187e-14c09dd1c18d@arm.com>

From: Robin Murphy
> Sent: 15 May 2019 11:58
> To: David Laight; 'Will Deacon'
> Subject: Re: [PATCH] arm64: do_csum: implement accelerated scalar version
>
> On 15/05/2019 11:15, David Laight wrote:
> > ...
> >>>	ptr = (u64 *)(buff - offset);
> >>>	shift = offset * 8;
> >>>
> >>>	/*
> >>>	 * Head: zero out any excess leading bytes. Shifting back by the same
> >>>	 * amount should be at least as fast as any other way of handling the
> >>>	 * odd/even alignment, and means we can ignore it until the very end.
> >>>	 */
> >>>	data = *ptr++;
> >>> #ifdef __LITTLE_ENDIAN
> >>>	data = (data >> shift) << shift;
> >>> #else
> >>>	data = (data << shift) >> shift;
> >>> #endif
> >
> > I suspect that
> >
> > #ifdef __LITTLE_ENDIAN
> >	data &= ~0ull << shift;
> > #else
> >	data &= ~0ull >> shift;
> > #endif
> >
> > is likely to be better.
>
> Out of interest, better in which respects?
> For the A64 ISA at least, that would take 3 instructions plus an
> additional scratch register, e.g.:
>
>	MOV	x2, #~0
>	LSL	x2, x2, x1
>	AND	x0, x0, x2
>
> (alternatively "AND x0, x0, x2, LSL x1" to save 4 bytes of code, but that
> will typically take as many cycles, if not more, than just pipelining the
> two 'simple' ALU instructions)
>
> Whereas the original is just two shift instructions in-place:
>
>	LSR	x0, x0, x1
>	LSL	x0, x0, x1
>
> If the operation were repeated, the constant generation could certainly
> be amortised over multiple subsequent ANDs for a net win, but that isn't
> the case here.

On a superscalar processor you reduce the register dependency chain by one
instruction. The original code is pretty much a single dependency chain, so
you are likely to be able to generate the mask 'for free'.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel