From: Paolo Bonzini
Subject: Re: [PATCH 04/12] KVM: x86: Replace call-back set_tsc_khz() with a common function
Date: Tue, 6 Oct 2015 12:40:49 +0200
Message-ID: <5613A531.5050506@redhat.com>
References: <1443418691-24050-1-git-send-email-haozhong.zhang@intel.com>
 <1443418691-24050-5-git-send-email-haozhong.zhang@intel.com>
 <20151005195325.GA4508@potion.brq.redhat.com>
 <20151006040647.GD3798@hzzhang-OptiPlex-9020.sh.intel.com>
In-Reply-To: <20151006040647.GD3798@hzzhang-OptiPlex-9020.sh.intel.com>
To: Radim Krčmář, David Matlack, kvm@vger.kernel.org, Gleb Natapov,
 Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org,
 Joerg Roedel, Wanpeng Li, Xiao Guangrong, Mihai Donțu,
 Andy Lutomirski, Kai Huang, linux-kernel@vger.kernel.org

On 06/10/2015 06:06, Haozhong Zhang wrote:
> Alternatively, it's also possible to follow David's comment to use
> divq on x86_64 to keep both precision and safety. On i386, it just
> falls back to above truncating approach.

khz is just 32 bits, so the full intermediate product fits in 96 bits and
we can do a 96/32 division.  And because this is a slow path, we can code
a generic u64*u32/u32 function and use it to compute
(1 << kvm_tsc_scaling_ratio_frac_bits) * khz / tsc_khz (a usage sketch
follows the diff):

diff --git a/include/linux/math64.h b/include/linux/math64.h
index c45c089bfdac..5b70af4fa386 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -142,6 +142,13 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
 }
 #endif /* mul_u64_u32_shr */
 
+#ifndef mul_u64_u32_div
+static inline u64 mul_u64_u32_div(u64 a, u32 num, u32 den)
+{
+	return (u64)(((unsigned __int128)a * num) / den);
+}
+#endif
+
 #else
 
 #ifndef mul_u64_u32_shr
@@ -161,6 +168,35 @@ static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
 }
 #endif /* mul_u64_u32_shr */
 
+#ifndef mul_u64_u32_div
+static inline u64 mul_u64_u32_div(u64 a, u32 num, u32 den)
+{
+	union {
+		u64 ll;
+		struct {
+#ifdef __BIG_ENDIAN
+			u32 high, low;
+#else
+			u32 low, high;
+#endif
+		} l;
+	} u, rl, rh;
+
+	u.ll = a;
+	rl.ll = (u64)u.l.low * num;
+	rh.ll = (u64)u.l.high * num + rl.l.high;
+
+	/* Bits 32-63 of the result will be in rh.l.low. */
+	rl.l.high = do_div(rh.ll, den);
+
+	/* Bits 0-31 of the result will be in rl.l.low. */
+	do_div(rl.ll, den);
+
+	rl.l.high = rh.l.low;
+	return rl.ll;
+}
+#endif
+
 #endif
 
 #endif /* _LINUX_MATH64_H */
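
For illustration, a minimal sketch of how the helper could be used on that
slow path.  The wrapper name kvm_compute_tsc_ratio is hypothetical;
kvm_tsc_scaling_ratio_frac_bits and user_tsc_khz follow the naming used in
this series, and the surrounding set_tsc_khz() plumbing is omitted:

#include <linux/math64.h>

/*
 * Sketch only: derive the fixed-point scaling ratio
 *   ratio = (user_tsc_khz << kvm_tsc_scaling_ratio_frac_bits) / tsc_khz
 * with the 96-bit intermediate product preserved by mul_u64_u32_div().
 */
static u64 kvm_compute_tsc_ratio(u32 user_tsc_khz)
{
	return mul_u64_u32_div(1ULL << kvm_tsc_scaling_ratio_frac_bits,
			       user_tsc_khz, tsc_khz);
}

A caller would presumably still range-check the resulting ratio (zero or
too large to fit the hardware field) before programming it into the
vendor-specific scaling register.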