From: "Radim Krčmář" <rkrcmar@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, mtosatti@redhat.com
Subject: Re: [PATCH v2] KVM: x86: do not go through vcpu in __get_kvmclock_ns
Date: Wed, 16 Nov 2016 19:10:02 +0100
Message-ID: <20161116181001.GB7678@potion>
In-Reply-To: <1216909500.13168461.1479317268749.JavaMail.zimbra@redhat.com>

2016-11-16 12:27-0500, Paolo Bonzini:
>> > +	if (!ka->use_master_clock)
>> > +		return ktime_get_boot_ns() + ka->kvmclock_offset;
>> >  
>> > -	return ns;
>> > +	hv_clock.tsc_timestamp = ka->master_cycle_now;
>> > +	hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
>> > +	kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
>> > +			   &hv_clock.tsc_shift,
>> > +			   &hv_clock.tsc_to_system_mul);
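(For context, the guest turns these fields into nanoseconds with the
standard pvclock conversion -- roughly the following simplified,
standalone sketch of pvclock_scale_delta(); illustrative only, not
code from the patch:

#include <stdint.h>

/* ns = system_time + scale(tsc - tsc_timestamp) */
static uint64_t pvclock_ns(uint64_t tsc, uint64_t tsc_timestamp,
			   uint64_t system_time, int8_t tsc_shift,
			   uint32_t tsc_to_system_mul)
{
	uint64_t delta = tsc - tsc_timestamp;

	/* power-of-two pre-shift chosen by kvm_get_time_scale() */
	if (tsc_shift < 0)
		delta >>= -tsc_shift;
	else
		delta <<= tsc_shift;

	/* 64x32 fixed-point multiply; bits 95:32 are nanoseconds */
	return system_time +
	       (uint64_t)(((unsigned __int128)delta * tsc_to_system_mul) >> 32);
}
)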
>> 
>> Doesn't this result in a minor drift with a scaled clock, because the
>> guest can be combining two scaling mechanisms, each of which only
>> approximates the target frequency?
> 
> You mean instead of doing read_l1_tsc?

Yes, when we avoid the scaling and use host TSC directly.
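(What kvm_read_l1_tsc() computes, roughly -- an illustrative sketch,
not KVM's exact code; the ratio is a fixed-point value with 32
fractional bits on AMD and 48 on Intel:

#include <stdint.h>

static uint64_t read_l1_tsc(uint64_t host_tsc, uint64_t ratio,
			    unsigned int frac_bits, uint64_t tsc_offset)
{
	/* guest TSC = host TSC * (guest_freq / host_freq) + offset */
	return (uint64_t)(((unsigned __int128)host_tsc * ratio) >> frac_bits)
	       + tsc_offset;
}
)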

>>   1) tsc_shift and tsc_to_system_mul for kvmclock scaling
>>   2) hardware TSC scaling ratio
>> 
>> If we are on a 7654321 kHz TSC and TSC-ratio scale to 1234567 kHz and
>> then tsc_shift+tsc_to_system_mul kvmclock-scale to 1000000 kHz, we
>> should be using multipliers of
>>   0.161290204578564186163606151349022336533834941074459772460...  and
>>   0.810000591300431649315104000025920018921613812778083328000...,
>> to achieve that.  Those multipliers cannot be precisely expressed in
>> what we have (shifts and 64/32 bit multipliers with intermediate values
>> only up to 128 bits), so performing the scaling will result in slightly
>> incorrect frequency.
>> 
>> The result of combining two operations that alter the frequency is
>> quite unlikely to cancel out and produce the same result as a single
>> operation that scales in one step with a different shift+multiplier,
>> so I think we aren't computing the same time that a guest using
>> TSC scaling actually sees.
> 
> I think you get pretty good precision, since 30 fractional bits are more
> or less equivalent to nanosecond precision.  For example, cutting the two
> ratios above to 30 fractional bits I get respectively 173184038/2^30
> and 869731512/2^30.  Multiplying them gives 140279173/2^30 which matches
> exactly the fixed point representation of 1000000/7654321.
> 
> Since the hardware TSC scaling ratio has higher precision (32 or 48
> fractional bits), you should get at most 1 ulp of error, which is not bad.

True, it would take many years to accumulate into a second.
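
For the record, your numbers can be reproduced with a few lines of C
(a standalone sketch; q30() is my illustrative helper, truncating to
30 fractional bits the way the shift-based scaling does):

#include <stdio.h>
#include <stdint.h>

static uint64_t q30(uint64_t num, uint64_t den)
{
	return (num << 30) / den;	/* fixed point, 30 fractional bits */
}

int main(void)
{
	uint64_t hw  = q30(1234567, 7654321);	/* TSC ratio:    173184038 */
	uint64_t kc  = q30(1000000, 1234567);	/* kvmclock:     869731512 */
	uint64_t two = (hw * kc) >> 30;		/* two steps:    140279173 */
	uint64_t one = q30(1000000, 7654321);	/* single step:  140279173 */

	printf("two-step %llu == one-step %llu\n",
	       (unsigned long long)two, (unsigned long long)one);
	return 0;
}

Both paths agree here; in general they can differ by an ulp.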

Thanks.
