From: Dor Laor
Subject: Re: [PATCH 1/2] Add HPET emulation to qemu (v3)
Date: Mon, 27 Oct 2008 12:49:13 +0200
Message-ID: <49059CA9.30007@il.qumranet.com>
References: <1224245854.3399.7.camel@beth-laptop> <20081017154932.GA14229@shareable.org> <1224529724.3399.27.camel@beth-laptop>
In-Reply-To: <1224529724.3399.27.camel@beth-laptop>
To: Beth Kon
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, Alexander Graf
Reply-To: qemu-devel@nongnu.org
List-Id: kvm.vger.kernel.org

Beth Kon wrote:
> On Fri, 2008-10-17 at 16:49 +0100, Jamie Lokier wrote:
>
>> Beth Kon wrote:
>>
>>> Clock drift on Linux is in the range of .017% - .019%, loaded and
>>> unloaded. I haven't found a straightforward way to test on Windows
>>> and would appreciate any pointers to existing approaches.
>>>
>> Is there any reason why there should be any clock drift, when the
>> guest is using a non-PIT clock?
>>
>> I'm probably being naive, but with 32-bit or 64-bit HPET counters
>> available to the guest, and accurate values from the CMOS clock
>> emulation, I don't see why drift would accumulate over the long term
>> relative to the host clock.
>>
>
> I was measuring with ntpdate, so the drift is with respect to the ntp
> server pool, not the host clock.
> But in any case, since timer interrupts
> and reads of the hpet counter are at the mercy of the host scheduler
> (i.e., the qemu process can be swapped out at any time during hpet read
> or timer expiration), I'd guess there would always be some amount of
> inaccuracy. Also, qemu checks for timer expiration (qemu_run_timers) as
> part of a bigger loop (main_loop_wait), so the varying amounts of work
> to do elsewhere in the loop from iteration to iteration would also
> introduce irregular delays.

This is exactly why hpet, like the other clock emulations in qemu (pit,
rtc, pm?), needs to check whether its irq was really injected. Gleb sent
patches for the rtc and pit. The idea is to check with the irq chip
whether the injected irq was actually delivered.

Dor