From: Priya
Subject: Time drift between values returned by TSC and gettimeofday
Date: Tue, 23 Feb 2010 13:48:13 -0500
Message-ID: <5c3550fe1002231048p201f6c7eu7305273c251415ac@mail.gmail.com>
To: xen-devel@lists.xensource.com

Hey guys,

I have made an interesting (and unexpected) observation relating to the drift between the (system) time returned by the gettimeofday() system call and the time obtained from the time stamp counter (defined as the TSC reading divided by the CPU frequency). I was wondering if anyone could help me explain it:

Let me first explain my system: I have 3 hardware-based virtual machines (HVMs) running the Linux 2.6.24-26-generic (tickless) kernel. The timer mode is 1 (the default: virtual time is always wallclock time). I ran an experiment in which I compared the *difference between the values returned by rdtsc() and gettimeofday() on the three domains* against real time (from a third, independent source). There is no NTP sync on either the control domain or any of the user domains. The scheduler weights and cap values for all domains (domain-0 and the user domains) are the defaults, 256 and 0 respectively.
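For reference, the measurement loop on each domain is essentially the sketch below (simplified, not the exact code I ran; CPU_HZ is a placeholder for the frequency I read from /proc/cpuinfo):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/time.h>

/* Placeholder: nominal TSC frequency in Hz, taken from /proc/cpuinfo. */
#define CPU_HZ 2400000000ULL

/* Raw time stamp counter read. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t tsc0 = rdtsc();
    struct timeval tv0;
    gettimeofday(&tv0, NULL);

    for (;;) {
        sleep(10);
        uint64_t tsc = rdtsc();
        struct timeval tv;
        gettimeofday(&tv, NULL);

        /* Elapsed seconds according to the TSC. */
        double tsc_time = (double)(tsc - tsc0) / CPU_HZ;
        /* Elapsed seconds according to gettimeofday(). */
        double sys_time = (tv.tv_sec - tv0.tv_sec)
                        + (tv.tv_usec - tv0.tv_usec) / 1e6;

        printf("tsc=%.6f sys=%.6f drift=%.6f\n",
               tsc_time, sys_time, tsc_time - sys_time);
    }
    return 0;
}

The "drift" I talk about below is the drift column of this output, compared across the three domains and against the external reference.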
I observe that:
  1. Over time, the drift between TSC time and gettimeofday time increases (at a constant rate). This is expected: since the TSC and gettimeofday are supposed to derive their values from different physical counters, there will be some drift.
  2. But what is surprising is that the rate is different on all three domains. This is what is puzzling me (rough numbers below).
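To put a rough number on point 2 (back-of-the-envelope arithmetic on my part, not a measurement): if the frequency used to convert TSC ticks to seconds differs from the effective rate behind system time by, say, 100 ppm, the two clocks diverge by about 3600 * 1e-4 = 0.36 s per hour. A constant but different slope on each domain would then just mean a different effective TSC-to-seconds ratio on each domain, which is what I cannot reconcile with my understanding below.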
If I understand the virtualization architecture correctly, read shadows are created for each user domain and are updated by domain-0. Read access to the TSC returns a value from these shadow tables.

And since I am using timer mode = 1, I expect that system time will also be the same on all domains. That means the time difference between TSC time and system time should increase by the same amount on all domains, which is not what I observe.
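For what it is worth, my mental model of those shadow values is the per-vCPU pvclock-style record (struct vcpu_time_info in xen/include/public/xen.h) and a guest read path roughly like the sketch below. This is just my reading of the interface, not code taken from the hypervisor or the guest kernel:

#include <stdint.h>

/* Per-vCPU time record as I understand it (field names from
 * xen/include/public/xen.h, struct vcpu_time_info). */
struct vcpu_time_info {
    uint32_t version;           /* even = stable, odd = update in progress */
    uint32_t pad0;
    uint64_t tsc_timestamp;     /* TSC value at the last update            */
    uint64_t system_time;       /* ns since boot at the last update        */
    uint32_t tsc_to_system_mul; /* fixed-point TSC-ticks-to-ns scale       */
    int8_t   tsc_shift;
    int8_t   pad1[3];
};

/* Same RDTSC helper as in the measurement sketch above. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* How I believe a guest turns a raw TSC read into system time, in the
 * style of Linux's pvclock code.  The real code uses memory barriers
 * around the version check and a 64x32->96-bit multiply. */
static uint64_t guest_system_time_ns(volatile struct vcpu_time_info *t)
{
    uint32_t version;
    uint64_t delta, ns;

    do {
        version = t->version;              /* retry if an update races us */
        delta = rdtsc() - t->tsc_timestamp;
        if (t->tsc_shift >= 0)
            delta <<= t->tsc_shift;
        else
            delta >>= -t->tsc_shift;
        ns = t->system_time +
             (uint64_t)(((unsigned __int128)delta * t->tsc_to_system_mul) >> 32);
    } while ((t->version & 1) || version != t->version);

    return ns;
}

If tsc_to_system_mul / tsc_shift ended up slightly different on each domain I could see how each one would show its own constant drift rate, but with timer mode 1 I expected the scaling to be identical everywhere.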

Can somebody give me a pointer to what I am missing here? Has anyone else observed this behavior?

Thanks!
--pr
