From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: KVM: x86: limit difference between kvmclock updates
Date: Tue, 14 May 2013 10:12:57 -0300
Message-ID: <20130514131257.GA19277@amt.cnet>
References: <20130509232141.GA7642@amt.cnet> <20130514090513.GB20995@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm-devel , Glauber Costa
To: Gleb Natapov
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:37420 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753872Ab3ENNNQ (ORCPT ); Tue, 14 May 2013 09:13:16 -0400
Content-Disposition: inline
In-Reply-To: <20130514090513.GB20995@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Tue, May 14, 2013 at 12:05:13PM +0300, Gleb Natapov wrote:
> On Thu, May 09, 2013 at 08:21:41PM -0300, Marcelo Tosatti wrote:
> >
> > kvmclock updates which are isolated to a given vcpu, such as vcpu->cpu
> > migration, should not allow system_timestamp from the rest of the vcpus
> > to remain static. Otherwise ntp frequency correction applies to one
> > vcpu's system_timestamp but not the others.
> >
> > So in those cases, request a kvmclock update for all vcpus. The worst
> > case for a remote vcpu to update its kvmclock is then bounded by the
> > maximum nohz sleep latency.
> >
> Does this mean that when one vcpu is migrated all others are kicked out
> of guest mode?

Yes, those which are in guest mode.

For guests with a large number of vcpus this is a problem, but I can't
see a simpler way to fix the bug for now. Yes, this aspect must be
improved (note, however, that the bug causes guest timers to be off by
tens of milliseconds even with vcpu->pcpu pinning, which can be
unacceptable).