qemu-devel.nongnu.org archive mirror
From: Lonnie Mendez <lmendez19@austin.rr.com>
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] objective benchmark?
Date: Tue, 16 May 2006 06:48:20 -0500
Message-ID: <4469BC04.1080809@austin.rr.com>
In-Reply-To: <000e01c678b3$cd372460$0464a8c0@athlon>

[-- Attachment #1: Type: text/plain, Size: 2465 bytes --]

Kazu wrote:

>If you set /proc/sys/dev/rtc/max-user-freq to 1024 and disable cpuspeed
>service that is related to SpeedStep/PowerNow! on a host OS, the clock in
>guest OS works fine.
>
>I checked it on i686/x86_64 Linux host.
>
   Mind saying how you checked this?  I'm on a Pentium III mobile 
processor, and the only way I've found so far to make qemu + rdtsc behave 
correctly is to disable ACPI (boot with -no-acpi).  If I add a simple printf 
to cpu_calibrate_ticks, it doesn't look fixed to me:

dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 1126809000
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 17308857
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 103710852
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 15292604
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 96695295
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 1126761234
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 1126762522
dignome@vaio /prog/qemu-stuff $ qemu -localtime -hda winxp.img -no-acpi 
-kernel-kqemu -soundhw es1370
ticks_per_sec set as 49791263

   The first entry is with the attached patch 'ticks_from_proc.patch' 
applied (I've been using it for almost a year).  It sets ticks_per_sec, 
which much of qemu's hardware emulation depends on, from the information 
in /proc/cpuinfo.  This doesn't fix the time issue, since rdtsc is still 
read on every SIGALRM signal for timing, but at least the guest's 
emulated hardware runs at speed.

dignome@vaio /prog/qemu-cvs/qemu-acpi/qemu $ find . -type f -exec fgrep 
-l 'ticks_per_sec' {} \;
./audio/audio.c
./audio/noaudio.c
./audio/wavaudio.c
./monitor.c
./vl.c
./vl.h
./hw/acpi.c
./hw/adlib.c
./hw/arm_timer.c
./hw/cuda.c
./hw/fdc.c
./hw/i8254.c
./hw/i8259.c
./hw/ide.c
./hw/mc146818rtc.c
./hw/mips_r4k.c
./hw/ppc.c
./hw/sb16.c
./hw/sh7750.c
./hw/slavio_timer.c
./hw/usb-uhci.c

   The second patch adds the printf statement so you can see this for 
yourself.

[-- Attachment #2: tps-printf.patch --]
[-- Type: text/plain, Size: 338 bytes --]

--- a/vl.c	2006-05-16 06:42:11.000000000 -0500
+++ b/vl.c	2006-05-16 06:35:25.000000000 -0500
@@ -637,6 +637,7 @@
     usec = get_clock() - usec;
     ticks = cpu_get_real_ticks() - ticks;
     ticks_per_sec = (ticks * 1000000LL + (usec >> 1)) / usec;
+    printf("ticks_per_sec set as %lli\n", ticks_per_sec);
 }
 #endif /* !_WIN32 */
 

[-- Attachment #3: ticks_from_proc.patch --]
[-- Type: text/plain, Size: 999 bytes --]

Index: qemu/vl.c
@@ -596,8 +596,43 @@
 #endif
 }
 
+uint64_t get_tps_from_proc(void)
+{
+    FILE *fp;
+    char buf[64], *p;
+    double cpu_mhz;
+
+    if (!(fp = fopen("/proc/cpuinfo", "r")))
+        return 0;
+
+    /* find the "cpu MHz" line */
+    while (fgets(buf, sizeof(buf), fp) != NULL)
+        if (!strncmp(buf, "cpu MHz", 7))
+            break;
+
+    /* line not found? */
+    if (feof(fp)) {
+        fclose(fp);
+        return 0;
+    }
+    fclose(fp);
+
+    /* parse the value after the colon into cpu_mhz */
+    if (!(p = strchr(buf, ':')) || sscanf(p + 1, "%lf", &cpu_mhz) != 1)
+        return 0;
+
+    /* return estimated ticks/sec value */
+    return (uint64_t)(cpu_mhz * 1000000.0);
+}
+
 void cpu_calibrate_ticks(void)
 {
+    /* prefer the /proc/cpuinfo value; fall back to tsc calibration below */
+    if ((ticks_per_sec = get_tps_from_proc()) != 0)
+        return;
+    fprintf(stderr, "Could not obtain ticks/sec from /proc/cpuinfo,\n"
+                    "resorting to regular method (tsc)\n");
+
     int64_t usec, ticks;
 
     usec = get_clock();

Thread overview:
2006-05-15 18:03 [Qemu-devel] objective benchmark? Mikhail Ramendik
2006-05-15 20:25 ` Natalia Portillo
2006-05-15 21:13   ` Mikhail Ramendik
2006-05-16  0:07 ` NyOS
2006-05-16  4:12 ` Anthony Liguori
2006-05-16  6:41   ` Kazu
2006-05-16  6:55     ` Christian MICHON
2006-05-16  9:26       ` Kazu
2006-05-16 10:23         ` Christian MICHON
2006-05-17  7:24           ` Kazu
2006-05-16 11:48     ` Lonnie Mendez [this message]
2006-05-17  7:24       ` Kazu
2006-05-17  9:09         ` Lonnie Mendez
2006-05-17 19:18         ` Fabrice Bellard