From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [RFC PATCH V3 2/5] Add function for left shift and 64 bit division
Date: Mon, 6 Jun 2016 14:37:24 +0200
Message-ID: <8dbf2eaf-ea7d-b7d0-633e-889f2081912f@redhat.com>
References: <1465000951-13343-1-git-send-email-yunhong.jiang@linux.intel.com>
 <1465000951-13343-3-git-send-email-yunhong.jiang@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Cc: mtosatti@redhat.com, rkrcmar@redhat.com, kernellwp@gmail.com
To: Yunhong Jiang, kvm@vger.kernel.org
Return-path: Received: from mail-wm0-f67.google.com ([74.125.82.67]:33634
 "EHLO mail-wm0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1750891AbcFFMha (ORCPT ); Mon, 6 Jun 2016 08:37:30 -0400
Received: by mail-wm0-f67.google.com with SMTP id c74so7934054wme.0
 for ; Mon, 06 Jun 2016 05:37:29 -0700 (PDT)
In-Reply-To: <1465000951-13343-3-git-send-email-yunhong.jiang@linux.intel.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 04/06/2016 02:42, Yunhong Jiang wrote:
> From: Yunhong Jiang
> 
> Sometimes we need convert from guest tsc to host tsc, which is:
>    host_tsc = ((unsigned __int128)(guest_tsc - tsc_offset)
>                << kvm_tsc_scaling_ratio_frac_bits)
>                / vcpu->arch.tsc_scaling_ratio;
> where guest_tsc and host_tsc are both 64 bit.
> 
> A helper function is provided to achieve this conversion. Only supported
> on x86_64 platform now. A generic solution can be provided in future if
> needed.
> 
> Signed-off-by: Yunhong Jiang
> ---
>  arch/x86/include/asm/div64.h | 18 ++++++++++++++++++

Please put this in vmx.c instead.  You can merge it with patch 3, even.

Thanks,

Paolo

>  1 file changed, 18 insertions(+)
> 
> diff --git a/arch/x86/include/asm/div64.h b/arch/x86/include/asm/div64.h
> index ced283ac79df..6937d6d4c81a 100644
> --- a/arch/x86/include/asm/div64.h
> +++ b/arch/x86/include/asm/div64.h
> @@ -60,6 +60,24 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
>  #define div_u64_rem	div_u64_rem
>  
>  #else
> +#include
> +/* (a << shift) / divisor, return 1 if overflow otherwise 0 */
> +static inline int u64_shl_div_u64(u64 a, unsigned int shift,
> +				  u64 divisor, u64 *result)
> +{
> +	u64 low = a << shift, high = a >> (64 - shift);
> +
> +	/* To avoid the overflow on divq */
> +	if (high > divisor)
> +		return 1;
> +
> +	/* Low hold the result, high hold rem which is discarded */
> +	asm("divq %2\n\t" : "=a" (low), "=d" (high) :
> +	    "rm" (divisor), "0" (low), "1" (high));
> +	*result = low;
> +
> +	return 0;
> +}
> # include
> #endif /* CONFIG_X86_32 */
> 
> 