Date: Mon, 7 Feb 2011 15:48:21 +0200
From: Gleb Natapov
Subject: Re: [Qemu-devel] Re: [RFC: 0/2] patch for QEMU HPET periodic timer emulation to alleviate time drift
Message-ID: <20110207134821.GF14984@redhat.com>
References: <4D4B0B07.2040904@codemonkey.ws> <4D4B1CF8.8040800@web.de> <4D4B5F23.7040801@codemonkey.ws> <4D4BBF55.9060000@web.de> <4D4FE6BF.5080502@redhat.com> <4D4FEF81.1040603@codemonkey.ws> <4D4FF02F.2030309@redhat.com> <4D4FF24A.7000004@codemonkey.ws> <20110207134104.GE14984@redhat.com> <4D4FF7CE.6020005@redhat.com>
In-Reply-To: <4D4FF7CE.6020005@redhat.com>
To: Avi Kivity
Cc: kvm, Glauber Costa, qemu-devel, Ulrich Obergfell, Jan Kiszka

On Mon, Feb 07, 2011 at 03:46:54PM +0200, Avi Kivity wrote:
> On 02/07/2011 03:41 PM, Gleb Natapov wrote:
> >On Mon, Feb 07, 2011 at 07:23:22AM -0600, Anthony Liguori wrote:
> >> On 02/07/2011 07:14 AM, Avi Kivity wrote:
> >> >On 02/07/2011 03:11 PM, Anthony Liguori wrote:
> >> >>On 02/07/2011 06:34 AM, Avi Kivity wrote:
> >> >>>On 02/04/2011 10:56 AM, Jan Kiszka wrote:
> >> >>>>>
> >> >>>>> This should be a rare event. If you are missing 50% of your
> >> >>>>> notifications, no amount of gradual catchup is going to
> >> >>>>> help you out.
> >> >>>>
> >> >>>>But that's the only thing this patch is after: lost ticks at
> >> >>>>the QEMU level.
> >> >>>
> >> >>>Most lost ticks will happen at the vcpu level. The iothread
> >> >>>has low utilization and will therefore be scheduled promptly,
> >> >>>whereas the vcpu thread may have high utilization and will
> >> >>>thus be preempted. When it is preempted for longer than the
> >> >>>timer tick, we will see vcpu-level coalescing. All it takes
> >> >>>is 2:1 overcommit to see time go half as fast; I don't think
> >> >>>you'll ever see that on bare metal.
> >> >>
> >> >>But that's not to say that doing something about lost ticks in
> >> >>QEMU isn't still useful.
> >> >>
> >> >
> >> >If it doesn't solve the majority of the problems it isn't very
> >> >useful IMO. It's a good first step, but not sufficient for real
> >> >world use with overcommit.
> >>
> >> Even if we have a way to detect coalescing, we still need to make
> >> sure we don't lose ticks in QEMU. So regardless of whether it
> >> solves the majority of problems, we need this anyway.
> >>
> >Actually it is very strange that we lose them. Last time I checked,
> >vm_clock worked in such a way that if ticks were lost because QEMU
> >was not scheduled for a long time, the timer callback was fired
> >repeatedly to compensate for the missed wakeups.
> >
>
> That's quite pointless, since those interrupts will be coalesced by
> the guest.
>
Yes, of course, and that is what I remember happening. At that point
interrupt de-coalescing kicks in.

--
			Gleb.
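
[Editor's illustration] The sketch below shows the two mechanisms discussed in the thread: counting wakeups that were missed while the timer callback was not scheduled, and "de-coalescing", i.e. reinjecting backlogged ticks one extra per period so guest time catches up gradually instead of drifting. It is a minimal, self-contained illustration only, not QEMU code; all names (PeriodicTimer, timer_tick, try_inject_irq) are invented for the example.

/* drift_sketch.c -- illustrative only, NOT QEMU code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t period_ns;      /* tick period */
    uint64_t next_deadline;  /* next expiry on the host clock */
    uint64_t coalesced;      /* backlog of undelivered ticks */
    uint64_t delivered;      /* ticks the guest actually saw */
} PeriodicTimer;

/* Stand-in for the interrupt path: pretend the previous tick is sometimes
 * still pending in the guest, so a new injection would coalesce. */
static bool try_inject_irq(void)
{
    return (rand() % 4) != 0;   /* ~75% of injections succeed */
}

/* Called when the timer callback finally runs; "now" may be well past the
 * deadline if the thread was not scheduled for a while. */
static void timer_tick(PeriodicTimer *t, uint64_t now)
{
    /* Account for wakeups missed entirely while we were not running. */
    while (now >= t->next_deadline + t->period_ns) {
        t->coalesced++;
        t->next_deadline += t->period_ns;
    }
    t->next_deadline += t->period_ns;

    if (!try_inject_irq()) {
        t->coalesced++;              /* this tick coalesced as well */
        return;
    }
    t->delivered++;

    /* De-coalescing: work off the backlog one extra tick per period. */
    if (t->coalesced > 0 && try_inject_irq()) {
        t->coalesced--;
        t->delivered++;
    }
}

int main(void)
{
    PeriodicTimer t = { .period_ns = 1000000, .next_deadline = 1000000 };
    uint64_t now = 0;

    for (int i = 0; i < 100; i++) {
        /* Simulate the callback occasionally running several periods late. */
        now += t.period_ns * (i % 10 == 0 ? 4 : 1);
        timer_tick(&t, now);
    }
    printf("delivered=%llu backlog=%llu\n",
           (unsigned long long)t.delivered,
           (unsigned long long)t.coalesced);
    return 0;
}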