From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
Cc: x86@kernel.org, John Stultz, Vincenzo Frascino, Andy Lutomirski,
    Christophe Leroy, Paolo Bonzini, Juergen Gross, Michael Kelley,
    Sasha Levin, Ralf Baechle, Paul Burton, James Hogan, Russell King,
    Catalin Marinas, Will Deacon, Mark Rutland, Marc Zyngier, Andrei Vagin
Subject: [patch V2 16/17] lib/vdso: Allow architectures to override the ns shift operation
Date: Fri, 07 Feb 2020 13:39:03 +0100
Message-Id: <20200207124403.857649978@linutronix.de>
References: <20200207123847.339896630@linutronix.de>

From: Christophe Leroy <christophe.leroy@c-s.fr>

On powerpc/32, GCC (8.1) generates pretty bad code for the
ns >>= vd->shift operation, given that the shift is always <= 32 and
the upper part of the result is likely to be zero. GCC makes the
reverse assumptions, considering the shift to be likely >= 32 and the
upper part to be likely not zero.

unsigned long long shift(unsigned long long x, unsigned char s)
{
	return x >> s;
}

results in:

00000018 <shift>:
  18:	35 25 ff e0 	addic.  r9,r5,-32
  1c:	41 80 00 10 	blt     2c
  20:	7c 64 4c 30 	srw     r4,r3,r9
  24:	38 60 00 00 	li      r3,0
  28:	4e 80 00 20 	blr
  2c:	54 69 08 3c 	rlwinm  r9,r3,1,0,30
  30:	21 45 00 1f 	subfic  r10,r5,31
  34:	7c 84 2c 30 	srw     r4,r4,r5
  38:	7d 29 50 30 	slw     r9,r9,r10
  3c:	7c 63 2c 30 	srw     r3,r3,r5
  40:	7d 24 23 78 	or      r4,r9,r4
  44:	4e 80 00 20 	blr

Even when the shift is forced to be smaller than 32 with an &= 31,
GCC still considers the shift as likely >= 32.

Move the default shift implementation into an inline which can be
redefined in architecture code via a macro.
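As an illustration of what such an override makes possible, here is a
minimal sketch of a 64-bit right shift built purely from 32-bit
operations, assuming 1 <= s <= 31 (the helper name and that range
assumption are illustrative only, this is not the actual powerpc
override):

/*
 * Illustrative sketch only: a 64-bit right shift done with 32-bit
 * operations, valid for 1 <= s <= 31, which avoids the "shift >= 32"
 * branch GCC emits for a generic 64-bit shift.
 */
static inline unsigned long long shift_lt32(unsigned long long x, unsigned int s)
{
	unsigned int hi = x >> 32;
	unsigned int lo = x;

	lo = (lo >> s) | (hi << (32 - s));	/* low word of the result */
	hi = hi >> s;				/* high word of the result */

	return ((unsigned long long)hi << 32) | lo;
}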
[ tglx: Made the shift argument u32 and removed the __arch prefix ]

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/b3d449de856982ed060a71e6ace8eeca4654e685.1580399657.git.christophe.leroy@c-s.fr
---
 lib/vdso/gettimeofday.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/lib/vdso/gettimeofday.c
+++ b/lib/vdso/gettimeofday.c
@@ -39,6 +39,13 @@ u64 vdso_calc_delta(u64 cycles, u64 last
 }
 #endif
 
+#ifndef vdso_shift_ns
+static __always_inline u64 vdso_shift_ns(u64 ns, u32 shift)
+{
+	return ns >> shift;
+}
+#endif
+
 #ifndef __arch_vdso_hres_capable
 static inline bool __arch_vdso_hres_capable(void)
 {
@@ -80,7 +87,7 @@ static int do_hres_timens(const struct v
 		ns = vdso_ts->nsec;
 		last = vd->cycle_last;
 		ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
-		ns >>= vd->shift;
+		ns = vdso_shift_ns(ns, vd->shift);
 		sec = vdso_ts->sec;
 	} while (unlikely(vdso_read_retry(vd, seq)));
 
@@ -148,7 +155,7 @@ static __always_inline int do_hres(const
 		ns = vdso_ts->nsec;
 		last = vd->cycle_last;
 		ns += vdso_calc_delta(cycles, last, vd->mask, vd->mult);
-		ns >>= vd->shift;
+		ns = vdso_shift_ns(ns, vd->shift);
 		sec = vdso_ts->sec;
 	} while (unlikely(vdso_read_retry(vd, seq)));
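For context, a minimal sketch of how an architecture would hook this
override from its asm/vdso/gettimeofday.h: it provides its own
vdso_shift_ns() and defines the macro so the #ifndef fallback added
above drops out. The body below is a placeholder, not an actual
architecture implementation:

/*
 * Sketch of an architecture override, placed in the arch's
 * asm/vdso/gettimeofday.h. Defining the macro disables the generic
 * #ifndef fallback in lib/vdso/gettimeofday.c; the body is a
 * placeholder where the arch-optimized shift would go.
 */
static __always_inline u64 vdso_shift_ns(u64 ns, u32 shift)
{
	return ns >> shift;
}
#define vdso_shift_ns vdso_shift_ns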