From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yaniv Kaul
Subject: Re: [KVM-AUTOTEST][PATCH] timedrift support
Date: Mon, 11 May 2009 14:05:01 +0300
Message-ID: <4A08065D.7050004@redhat.com>
References: <4A010BCD.8060307@redhat.com> <20090506130247.GA5048@amt.cnet>
 <4A08008E.8060105@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Marcelo Tosatti, uril@redhat.com, kvm@vger.kernel.org
To: Bear Yang
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:34378 "EHLO mx2.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752334AbZEKLFA
 (ORCPT ); Mon, 11 May 2009 07:05:00 -0400
Received: from int-mx2.corp.redhat.com (int-mx2.corp.redhat.com [172.16.27.26])
 by mx2.redhat.com (8.13.8/8.13.8) with ESMTP id n4BB51PP028190
 for ; Mon, 11 May 2009 07:05:01 -0400
In-Reply-To: <4A08008E.8060105@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 5/11/2009 1:40 PM, Bear Yang wrote:
> Hello.
> I have modified my script according to Marcelo's suggestions and am
> resubmitting it to you all. :)
>
> Marcelo, it seems that apart from you, no one has commented on my
> script. Any suggestions on it would still be greatly appreciated.
>
> Thanks.
>
> Bear
>
> Marcelo Tosatti wrote:
>> Bear,
>>
>> Some comments below.
>>
>> On Wed, May 06, 2009 at 12:02:21PM +0800, Bear Yang wrote:
>>> Hello everyone,
>>>
>>> I would like to submit a patch that adds a new 'time drift check'
>>> test for guests running on KVM.
>>>
>>> The TimeDrift design logic is as follows:
>>> 1. Set up the host as an NTP server.
>>> 2. The guest syncs its clock with the host only *once*, when it
>>>    boots up:
>>>    * if the offset reported by ntpdate is larger than 1 sec, the
>>>      guest syncs its clock with the host.
>>>    * if the offset reported by ntpdate is less than 1 sec, the
>>>      guest does not need to sync its clock with the host.

I don't see the value of doing all the NTP stuff above. It poses another
requirement on the host.

>>> 3. Then the CPU stress test runs on the guest:
>>>    * a C program puts a real load on the guest CPU.

Isn't there an existing program that can put a load on the guest CPU?
Moreover, I believe we want different kinds of loads to be tested (mixed
with I/O, for example).

>>> 4. When the CPU stress test has finished, run the command line a
>>>    total of 20 times on the guest to query the time from the host
>>>    and judge whether the guest clock has drifted or not.
>>>
>>> The details of my patch are attached.
>>>
>>> thanks.
>>>
>>> Bear.

I might be missing something, but generally, what I feel the test
should be is:
1. Load the guest (AND the host - configurable!)
2. Run a program for X minutes on the guest - as measured by the guest
   (say 3 minutes).
3. When it exits, check that indeed Xhost = Xguest (in some cases we've
   seen here: 3:15 on the host vs. 3:00 on the guest).

This looks like a much simpler plan than yours, and it should also be
pretty portable to Windows.
Y.

>>
>>> diff -urN kvm_runtest_2.bak/cpu_stress.c kvm_runtest_2/cpu_stress.c
>>> --- kvm_runtest_2.bak/cpu_stress.c	1969-12-31 19:00:00.000000000 -0500
>>> +++ kvm_runtest_2/cpu_stress.c	2009-05-05 22:35:34.000000000 -0400
>>> @@ -0,0 +1,61 @@
>>> +#define _GNU_SOURCE
>>> +#include <stdio.h>
>>> +#include <stdlib.h>
>>> +#include <unistd.h>
>>> +#include <math.h>
>>> +#include <sched.h>
>>> +#include <sys/types.h>
>>> +#include <sys/wait.h>
>>> +
>>> +#define MAX_CPUS 256
>>> +#define BUFFSIZE 1024
>>> +
>>> +
>>> +void worker_child(int cpu)
>>> +{
>>> +	int cur_freq;
>>> +	int min_freq;
>>> +	int max_freq;
>>> +	int last_freq;
>>> +	cpu_set_t mask;
>>> +	int i;
>>> +	double x = 1.0;
>>> +	int d = 0;
>>> +	/*
>>> +	 * bind this thread to the specified cpu
>>> +	 */
>>> +	CPU_ZERO(&mask);
>>> +	CPU_SET(cpu, &mask);
>>> +	sched_setaffinity(0, sizeof(mask), &mask);
>>> +
>>> +	while (d++ != 500000) {
>>> +		for (i = 0; i < 100000; i++)
>>> +			x = sqrt(x);
>>> +	}
>>> +
>>> +	_exit(0);
>>> +
>>> +}
>>> +
>>> +
>>> +int main(void)
>>> +{
>>> +	cpu_set_t mask;
>>> +	int i;
>>> +	int code;
>>> +
>>> +	if (sched_getaffinity(0, sizeof(mask), &mask) < 0) {
>>> +		perror("sched_getaffinity");
>>> +		exit(1);
>>> +	}
>>> +
>>> +	for (i = 0; i < MAX_CPUS; i++)
>>> +		if (CPU_ISSET(i, &mask)) {
>>> +			printf("CPU%d\n", i);
>>> +			if (fork() == 0)
>>> +				worker_child(i);
>>> +		}
>>> +
>>> +
>>> +	wait(&code);
>>> +	exit(WEXITSTATUS(code));
>>> +}
>>> diff -urN kvm_runtest_2.bak/kvm_runtest_2.py kvm_runtest_2/kvm_runtest_2.py
>>> --- kvm_runtest_2.bak/kvm_runtest_2.py	2009-04-29 06:17:29.000000000 -0400
>>> +++ kvm_runtest_2/kvm_runtest_2.py	2009-04-29 08:06:32.000000000 -0400
>>> @@ -36,6 +36,8 @@
>>>          "autotest":     test_routine("kvm_tests", "run_autotest"),
>>>          "kvm_install":  test_routine("kvm_install", "run_kvm_install"),
>>>          "linux_s3":     test_routine("kvm_tests", "run_linux_s3"),
>>> +        "ntp_server_setup": test_routine("kvm_tests", "run_ntp_server_setup"),
>>> +        "timedrift":    test_routine("kvm_tests", "run_timedrift"),
>>>          }
>>>
>>>  # Make it possible to import modules from the test's bindir
>>> diff -urN kvm_runtest_2.bak/kvm_tests.cfg.sample kvm_runtest_2/kvm_tests.cfg.sample
>>> --- kvm_runtest_2.bak/kvm_tests.cfg.sample	2009-04-29 06:17:29.000000000 -0400
>>> +++ kvm_runtest_2/kvm_tests.cfg.sample	2009-04-29 08:09:36.000000000 -0400
>>> @@ -81,6 +81,10 @@
>>>      - linux_s3: install setup
>>>          type = linux_s3
>>>
>>> +    - ntp_server_setup:
>>> +        type = ntp_server_setup
>>> +    - timedrift: ntp_server_setup
>>> +        type = timedrift
>>>  # NICs
>>>  variants:
>>>      - @rtl8139:
>>> diff -urN kvm_runtest_2.bak/kvm_tests.py kvm_runtest_2/kvm_tests.py
>>> --- kvm_runtest_2.bak/kvm_tests.py	2009-04-29 06:17:29.000000000 -0400
>>> +++ kvm_runtest_2/kvm_tests.py	2009-05-05 23:45:57.000000000 -0400
>>> @@ -394,3 +394,235 @@
>>>      kvm_log.info("VM resumed after S3")
>>>
>>>      session.close()
>>> +
>>> +def run_ntp_server_setup(test, params, env):
>>> +    """
>>> +    NTP server configuration and related network file modification
>>> +    """
>>> +    kvm_log.debug("run ntp server setup")
>>> +    status = 1
>>> +    # stop the firewall for the NTP server if it is running
>>> +    status = os.system("/etc/init.d/iptables status")
>>> +    if status == 0:
>>> +        os.system("/etc/init.d/iptables stop")
>>> +        status = 1
>>> +
>>> +    # prevent the dhcp client from modifying ntp.conf
>>> +    kvm_log.info("prevent the dhcp client from modifying ntp.conf")
>>> +
>>> +    config_file = "/etc/sysconfig/network"
>>> +    network_file = open("/etc/sysconfig/network", "a")
>>> +    string = "PEERNTP=no"
>>> +
>>> +    if os.system("grep %s %s" % (string, config_file)):
>>> +        network_file.writelines(str(string) + '\n')
>>> +
>>> +    network_file.close()
>>> +
>>> +    # start the ntp server on the host
>>> +    kvm_log.info("backup ntp config file")
>>> +
>>> +    ntp_filename = os.path.expanduser("/etc/ntp.conf")
>>> +    # back up the ntp config file
>>> +    backup_bootloader_filename = ntp_filename + "_backup"
>>> +    if os.path.exists(ntp_filename):
>>> +        os.rename(ntp_filename, backup_bootloader_filename)
>>> +
>>> +    status = os.system("/etc/init.d/ntpd status")
>>> +    if status == 0:
>>> +        os.system("/etc/init.d/ntpd stop")
>>> +        status = 1
>>> +
>>> +    kvm_log.info("start ntp server on host")
>>> +
>>> +    ntp_cmd = '''
>>> +    echo "restrict default kod nomodify notrap nopeer noquery" >> /etc/ntp.conf;\
>>> +    echo "restrict 127.0.0.1" >> /etc/ntp.conf;\
>>> +    echo "driftfile /var/lib/ntp/drift" >> /etc/ntp.conf;\
>>> +    echo "keys /etc/ntp/keys" >> /etc/ntp.conf;\
>>> +    echo "server 127.127.1.0" >> /etc/ntp.conf;\
>>> +    echo "fudge 127.127.1.0 stratum 1" >> /etc/ntp.conf;\
>>> +    service ntpd start;
>>> +    '''
>>
>> I think it would be better to copy /etc/ntp.conf to a temporary file,
>> modify that, and start ntpd with the -c option.
>>
>> After the test is finished, restart ntpd with the default config (if it
>> was running) via service ntpd restart.
>>
>> Also, I don't see whether your script reports the content of
>>
>> /sys/devices/system/clocksource/clocksource0/current_clocksource
>>
>> on the guest. It's important that this information is displayed in the
>> test report.
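Agreed - a guest-side snippet along these lines could capture that. A
minimal sketch, not part of the posted patch; it only assumes the sysfs
path Marcelo quotes, with a fallback for guests where it is absent:

```python
import os

# Path from Marcelo's comment; present on Linux guests with clocksource
# support, absent elsewhere, hence the fallback message.
PATH = "/sys/devices/system/clocksource/clocksource0/current_clocksource"

if os.path.exists(PATH):
    with open(PATH) as f:
        print("guest clocksource: %s" % f.read().strip())
else:
    print("guest clocksource: (sysfs entry not available)")
```

The printed line would then just need to be logged into the test report.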
>>
>> Looks fine to me other than that, but the kvm-autotest guys probably
>> have more comments.
>>
>> Thanks
>> --
>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
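For what it's worth, the measurement step in the simpler plan above
(run a load for X minutes as measured by the guest, then compare host
and guest elapsed time) could be sketched roughly as below. This is a
minimal sketch, not part of the posted patch: `guest_clock` and
`host_clock` are hypothetical stand-ins for however the test reads the
two clocks (e.g. locally on the guest vs. over ssh to the host).

```python
import time

def measure_drift(run_seconds, guest_clock, host_clock):
    """Busy-loop for run_seconds as measured by guest_clock, then
    return how much more (or less) time elapsed on host_clock."""
    g0, h0 = guest_clock(), host_clock()
    x = 1.0
    while guest_clock() - g0 < run_seconds:
        x = (x * x + 1.0) % 1e9   # arithmetic to keep the CPU busy
    return (host_clock() - h0) - (guest_clock() - g0)

# Sanity check: with the same clock on both sides, drift stays near 0;
# in the real test, a large value (e.g. 15s over 3 min) means drift.
print(abs(measure_drift(0.2, time.time, time.time)) < 0.05)
```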