Subject: Re: [PATCH] arm64: do_csum: implement accelerated scalar version
From: Robin Murphy
To: David Laight, 'Will Deacon'
Cc: Ard Biesheuvel, netdev@vger.kernel.org, ilias.apalodimas@linaro.org,
 Zhangshaokun, "huanglingyan (A)", linux-arm-kernel@lists.infradead.org,
 steve.capper@arm.com
Date: Wed, 15 May 2019 11:57:56 +0100
Message-ID: <3d4fdbb5-7c7f-9331-187e-14c09dd1c18d@arm.com>
In-Reply-To: <6e755b2daaf341128cb3b54f36172442@AcuMS.aculab.com>

On 15/05/2019 11:15, David Laight wrote:
> ...
>>> 	ptr = (u64 *)(buff - offset);
>>> 	shift = offset * 8;
>>>
>>> 	/*
>>> 	 * Head: zero out any excess leading bytes. Shifting back by the same
>>> 	 * amount should be at least as fast as any other way of handling the
>>> 	 * odd/even alignment, and means we can ignore it until the very end.
>>> 	 */
>>> 	data = *ptr++;
>>> #ifdef __LITTLE_ENDIAN
>>> 	data = (data >> shift) << shift;
>>> #else
>>> 	data = (data << shift) >> shift;
>>> #endif
>
> I suspect that
>
> #ifdef __LITTLE_ENDIAN
> 	data &= ~0ull << shift;
> #else
> 	data &= ~0ull >> shift;
> #endif
>
> is likely to be better.

Out of interest, better in which respects? For the A64 ISA at least, that
would take 3 instructions plus an additional scratch register, e.g.:

	MOV	x2, #~0
	LSL	x2, x2, x1
	AND	x0, x0, x2

(alternatively "AND x0, x0, x2, LSL x1" to save 4 bytes of code, but that
will typically take as many cycles if not more than just pipelining the
two 'simple' ALU instructions)

Whereas the original is just two shift instructions in-place:

	LSR	x0, x0, x1
	LSL	x0, x0, x1

If the operation were repeated, the constant generation could certainly
be amortised over multiple subsequent ANDs for a net win, but that isn't
the case here.

Robin.