From: Radim Krčmář
Subject: Re: [PATCH 1/2] KVM: x86: remaster kvm_write_tsc code
Date: Thu, 6 Apr 2017 19:07:47 +0200
Message-ID: <20170406170746.GJ6369@potion>
References: <1491466125-16988-1-git-send-email-dplotnikov@virtuozzo.com>
 <1491466125-16988-2-git-send-email-dplotnikov@virtuozzo.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
To: Denis Plotnikov
Cc: pbonzini@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org,
 rkagan@virtuozzo.com, den@virtuozzo.com
Content-Disposition: inline
In-Reply-To: <1491466125-16988-2-git-send-email-dplotnikov@virtuozzo.com>

2017-04-06 11:08+0300, Denis Plotnikov:
> Reuse existing code instead of using inline asm.
> Make the code more concise and clear in the TSC
> synchronization part.
>
> Signed-off-by: Denis Plotnikov
> Reviewed-by: Roman Kagan
> ---
>  arch/x86/kvm/x86.c | 51 ++++++++++++---------------------------------------
>  1 file changed, 12 insertions(+), 39 deletions(-)

I like this patch a lot.

> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -1455,51 +1455,24 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	elapsed = ns - kvm->arch.last_tsc_nsec;
>
>  	if (vcpu->arch.virtual_tsc_khz) {
> -		int faulted = 0;
> -
> -		/* n.b - signed multiplication and division required */
> -		usdiff = data - kvm->arch.last_tsc_write;
> -#ifdef CONFIG_X86_64
> -		usdiff = (usdiff * 1000) / vcpu->arch.virtual_tsc_khz;
> -#else
> -		/* do_div() only does unsigned */
> -		asm("1: idivl %[divisor]\n"
> -		    "2: xor %%edx, %%edx\n"
> -		    "   movl $0, %[faulted]\n"
> -		    "3:\n"
> -		    ".section .fixup,\"ax\"\n"
> -		    "4: movl $1, %[faulted]\n"
> -		    "   jmp 3b\n"
> -		    ".previous\n"
> -
> -		    _ASM_EXTABLE(1b, 4b)
> -
> -		    : "=A"(usdiff), [faulted] "=r" (faulted)
> -		    : "A"(usdiff * 1000), [divisor] "rm"(vcpu->arch.virtual_tsc_khz));
> -
> -#endif
> -		do_div(elapsed, 1000);

Oh, this is actually fixing a bug, because we later consider elapsed
in nanoseconds, but this one converts it to microseconds ...

> -		usdiff -= elapsed;
> -		if (usdiff < 0)
> -			usdiff = -usdiff;
> -
> -		/* idivl overflow => difference is larger than USEC_PER_SEC */
> -		if (faulted)
> -			usdiff = USEC_PER_SEC;
> -	} else
> -		usdiff = USEC_PER_SEC; /* disable TSC match window below */
> +		u64 tsc_exp = kvm->arch.last_tsc_write +
> +					nsec_to_cycles(vcpu, elapsed);
> +		u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL;
> +		/*
> +		 * Special case: TSC write with a small delta (1 second) of virtual
> +		 * cycle time against real time is interpreted as an attempt to
> +		 * synchronize the CPU.
> +		 */
> +		synchronizing = data < tsc_exp + tsc_hz && data > tsc_exp - tsc_hz;

This condition is wrong -- if tsc_exp < tsc_hz, then the unsigned
subtraction wraps around and the comparison resolves as false.
It should read:

  synchronizing = data < tsc_exp + tsc_hz && data + tsc_hz > tsc_exp;

> +	}
>
>  	/*
> -	 * Special case: TSC write with a small delta (1 second) of virtual
> -	 * cycle time against real time is interpreted as an attempt to
> -	 * synchronize the CPU.
> -	 *
>  	 * For a reliable TSC, we can match TSC offsets, and for an unstable
>  	 * TSC, we add elapsed time in this computation.  We could let the
>  	 * compensation code attempt to catch up if we fall behind, but
>  	 * it's better to try to match offsets from the beginning.
>  	 */
> -	if (usdiff < USEC_PER_SEC &&
> +	if (synchronizing &&
>  	    vcpu->arch.virtual_tsc_khz == kvm->arch.last_tsc_khz) {
>  		if (!check_tsc_unstable()) {
>  			offset = kvm->arch.cur_tsc_offset;
> --
> 1.8.3.1
>
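
To make the wraparound concrete, here is a minimal userspace sketch (plain
C with made-up example values, not kernel code) of the two checks: with
unsigned 64-bit arithmetic, tsc_exp - tsc_hz wraps to a huge value whenever
tsc_exp < tsc_hz, so the posted condition rejects even data == tsc_exp,
while moving tsc_hz to the other side keeps both comparisons in range:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t tsc_hz  = 2000000000ULL; /* example: 2 GHz guest TSC, one second of cycles */
	uint64_t tsc_exp = 500000000ULL;  /* expected TSC shortly after boot, less than tsc_hz */
	uint64_t data    = tsc_exp;       /* guest writes exactly the expected value */

	/* Patch as posted: tsc_exp - tsc_hz wraps around, so this evaluates to 0. */
	int posted = data < tsc_exp + tsc_hz && data > tsc_exp - tsc_hz;

	/* Suggested form: add tsc_hz instead of subtracting it, so this evaluates to 1. */
	int fixed = data < tsc_exp + tsc_hz && data + tsc_hz > tsc_exp;

	printf("posted condition: %d, fixed condition: %d\n", posted, fixed);
	return 0;
}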