public inbox for ltp@lists.linux.it
 help / color / mirror / Atom feed
* [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions
@ 2009-10-13  7:43 Poornima Nayak
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:43 UTC (permalink / raw)
  To: ltp-list, svaidy, ego, arun

Arguments passed for cpu consolidation was not used appropriatly. Provided
TINFO messages to indicate dependency test failures.

Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/testcases/kernel/power_management/pm_include.sh ltp-full-20090930_patched/testcases/kernel/power_management/pm_include.sh
--- ltp-full-20090930/testcases/kernel/power_management/pm_include.sh	2009-10-05 02:10:56.000000000 -0400
+++ ltp-full-20090930_patched/testcases/kernel/power_management/pm_include.sh	2009-10-12 22:46:12.000000000 -0400
@@ -71,7 +71,7 @@ get_supporting_govr() {
 is_hyper_threaded() {
 	siblings=`cat /proc/cpuinfo | grep siblings | uniq | cut -f2 -d':'`
 	cpu_cores=`cat /proc/cpuinfo | grep "cpu cores" | uniq | cut -f2 -d':'`
-	[ $siblings > $cpu_cores ]; return $?
+	[ $siblings -gt $cpu_cores ]; return $?
 }
 
 check_input() {
@@ -148,8 +148,8 @@ get_valid_input() {
 		
 analyze_result_hyperthreaded() {
 	sched_mc=$1
-    pass_count=$3
-    sched_smt=$4
+    pass_count=$2
+    sched_smt=$3
 
 	case "$sched_mc" in
 	0)
@@ -165,7 +165,7 @@ $sched_mc & sched_smt=$sched_smt"
 			fi
 			;;
 		*)
-           	if [ $pass_count -lt 5 ]; then
+			if [ $pass_count -lt 5 ]; then
                	tst_resm TFAIL "cpu consolidation for sched_mc=\
 $sched_mc & sched_smt=$sched_smt"
            	else
@@ -190,10 +190,16 @@ $sched_mc & sched_smt=$sched_smt"
 
 analyze_package_consolidation_result() {
 	sched_mc=$1
-    pass_count=$3
-	sched_smt=$4
+    pass_count=$2
+
+	if [ $# -gt 2 ]
+	then
+		sched_smt=$3
+	else
+		sched_smt=-1
+	fi
 
-	if [ $hyper_threaded -eq $YES -a $sched_smt ]; then
+	if [ $hyper_threaded -eq $YES -a $sched_smt -gt -1 ]; then
 		analyze_result_hyperthreaded $sched_mc $pass_count $sched_smt
 	else
 		case "$sched_mc" in
@@ -209,10 +215,10 @@ $sched_mc"
     	*)
 			if [ $pass_count -lt 5 ]; then
 				tst_resm TFAIL "Consolidation at package level failed for \
-sched_mc=$sched_mc & sched_smt=$sched_smt"
+sched_mc=$sched_mc"
 			else
 				tst_resm TPASS "Consolidation at package level passed for \
-sched_mc=$sched_mc & sched_smt=$sched_smt"
+sched_mc=$sched_mc"
 			fi	
         	;;
     	esac
@@ -221,7 +227,7 @@ sched_mc=$sched_mc & sched_smt=$sched_sm
 
 analyze_core_consolidation_result() {
 	sched_smt=$1
-	pass_count=$3
+	pass_count=$2
 
 	case "$sched_smt" in
 	0)

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
@ 2009-10-13  7:43 ` Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
                     ` (2 more replies)
  2009-10-13  7:43 ` [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload Poornima Nayak
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:43 UTC (permalink / raw)
  To: ltp-list, arun, svaidy, ego

CPU Consolidation verification function is fixed to handle variations in
CPU utilization. Threshold is selected based on test conducted on 2.6.31 on
dual core, quad core & hyper threaded system.
Developed new function to generate hyper threaded siblings list and get job count
for hyper threaded system and multisocket system.
Modified kernbench workload execution time for 5 min, hence test execution time
will be reduced further. Developed new functions to stop workload.

Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py
--- ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py	2009-10-05 02:10:56.000000000 -0400
+++ ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py	2009-10-12 23:00:30.000000000 -0400
@@ -22,6 +22,7 @@ socket_count = 0
 cpu1_max_intr = 0
 cpu2_max_intr = 0
 intr_stat_timer_0 = []
+siblings_list = []
 
 def clear_dmesg():
     '''
@@ -96,6 +97,36 @@ def is_hyper_threaded():
         print "Failed to check if system is hyper-threaded"
         sys.exit(1)
 
+def is_multi_core():
+    ''' Return true if system has sockets has multiple cores
+    '''
+  
+    try:
+        file_cpuinfo = open("/proc/cpuinfo", 'r')
+        for line in file_cpuinfo:
+            if line.startswith('siblings'):
+                siblings = line.split(":")
+            if line.startswith('cpu cores'):
+                cpu_cores = line.split(":")
+                break
+       
+        if int( siblings[1] ) == int( cpu_cores[1] ): 
+            if int( cpu_cores[1] ) > 1:
+                multi_core = 1
+            else:
+                multi_core = 0
+        else:
+            num_of_cpus = int(siblings[1]) / int(cpu_cores[1])
+            if num_of_cpus > 1:
+                multi_core = 1
+            else:
+                multi_core = 0
+        file_cpuinfo.close()
+        return multi_core
+    except Exception:
+        print "Failed to check if system is multi core system"
+        sys.exit(1)
+
 def get_hyper_thread_count():
     ''' Return number of threads in CPU. For eg for x3950 this function
         would return 2. In future if 4 threads are supported in CPU, this
@@ -153,6 +184,40 @@ def map_cpuid_pkgid():
                 sys.exit(1)
 
 
+def generate_sibling_list():
+    ''' Routine to generate siblings list
+    '''
+    try:
+        for i in range(0, cpu_count):
+            siblings_file = '/sys/devices/system/cpu/cpu%s' % i
+            siblings_file += '/topology/thread_siblings_list'
+            threads_sibs = open(siblings_file).read().rstrip()
+            thread_ids = threads_sibs.split("-")
+    
+            if not thread_ids in siblings_list:
+                siblings_list.append(thread_ids)
+    except Exception, details:
+        print "Exception in generate_siblings_list", details
+        sys.exit(1)
+
+def get_siblings(cpu_id):
+    ''' Return siblings of cpu_id
+    '''
+    try:
+        cpus = ""
+        for i in range(0, len(siblings_list)):
+            for cpu in siblings_list[i]:
+                if cpu_id == cpu:
+                    for j in siblings_list[i]:
+                        # Exclude cpu_id in the list of siblings
+                        if j != cpu_id:
+                            cpus += j
+                    return cpus
+        return cpus
+    except Exception, details:
+        print "Exception in get_siblings", details
+        sys.exit(1)
+
 def get_proc_data(stats_list):
     ''' Read /proc/stat info and store in dictionary
     '''
@@ -168,18 +233,18 @@ def get_proc_data(stats_list):
         sys.exit(1)
 
 def get_proc_loc_count(loc_stats):
-    ''' Read /proc/stat info and store in dictionary
+    ''' Read /proc/interrupts info and store in list
     '''
     try:
         file_procstat = open("/proc/interrupts", 'r')
         for line in file_procstat:
-            if line.startswith('LOC:'):
+            if line.startswith(' LOC:') or line.startswith('LOC:'):
                 data = line.split()
                 for i in range(0, cpu_count):
                     # To skip LOC
                     loc_stats.append(data[i+1])
-                    print data[i+1]
-        file_procstat.close()
+                file_procstat.close()
+                return
     except Exception, details:
         print "Could not read interrupt statistics", details
         sys.exit(1)
@@ -192,6 +257,8 @@ def set_sched_mc_power(sched_mc_level):
         os.system('echo %s > \
             /sys/devices/system/cpu/sched_mc_power_savings 2>/dev/null'
             % sched_mc_level)
+
+        get_proc_data(stats_start)
     except OSError, e:
         print "Could not set sched_mc_power_savings to", sched_mc_level, e
 	sys.exit(1)
@@ -203,6 +270,8 @@ def set_sched_smt_power(sched_smt_level)
         os.system('echo %s > \
             /sys/devices/system/cpu/sched_smt_power_savings 2>/dev/null'
             % sched_smt_level)
+
+        get_proc_data(stats_start)
     except OSError, e:
         print "Could not set sched_smt_power_savings to", sched_smt_level, e
 	sys.exit(1)
@@ -218,21 +287,36 @@ def set_timer_migration_interface(value)
         print "Could not set timer_migration to ", value, e
         sys.exit(1)
 
-def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
-    ''' Triggers ebizzy workload for sched_mc=1
-        testing
+def get_job_count(stress, workload, sched_smt):
+    ''' Returns number of jobs/threads to be triggered
     '''
+    
     try:
         if stress == "thread":
             threads = get_hyper_thread_count()
         if stress == "partial":
             threads = cpu_count / socket_count
+            if is_hyper_threaded():
+                if workload == "ebizzy" and int(sched_smt) ==0:
+                    threads = threads / get_hyper_thread_count()
+                if workload == "kernbench" and int(sched_smt) < 2:
+                    threads = threads / get_hyper_thread_count()    
         if stress == "full":
-	    threads = cpu_count
+            threads = cpu_count
         if stress == "single_job":
             threads = 1
             duration = 180
+        return threads
+    except Exception, details:
+        print "get job count failed ", details
+        sys.exit(1)
 
+def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
+    ''' Triggers ebizzy workload for sched_mc=1
+        testing
+    '''
+    try:
+        threads = get_job_count(stress, "ebizzy", sched_smt)
         olddir = os.getcwd()
         path = '%s/utils/benchmark' % os.environ['LTPROOT']
         os.chdir(path)
@@ -282,23 +366,14 @@ def trigger_ebizzy (sched_smt, stress, d
         print "Ebizzy workload trigger failed ", details
         sys.exit(1)   
 
-def trigger_kernbench (sched_smt, stress, background, pinned):
+def trigger_kernbench (sched_smt, stress, background, pinned, perf_test):
     ''' Trigger load on system like kernbench.
         Copys existing copy of LTP into as LTP2 and then builds it
         with make -j
     '''
     olddir = os.getcwd()
     try:
-        if stress == "thread":
-	    threads = 2
-        if stress == "partial":
-	    threads = cpu_count / socket_count
-            if is_hyper_threaded() and int(sched_smt) !=2:
-                threads = threads / get_hyper_thread_count()
-        if stress == "full":
-            threads = cpu_count
-        if stress == "single_job":
-            threads = 1
+        threads = get_job_count(stress, "kernbench", sched_smt)
 
         dst_path = "/root"
         olddir = os.getcwd()      
@@ -335,24 +410,35 @@ def trigger_kernbench (sched_smt, stress
         get_proc_loc_count(intr_start)
         if pinned == "yes":
             os.system ( 'taskset -c %s %s/kernbench -o %s -M -H -n 1 \
-                >/dev/null 2>&1' % (cpu_count-1, benchmark_path, threads))
+                >/dev/null 2>&1 &' % (cpu_count-1, benchmark_path, threads))
+
+            # We have to delete import in future
+            import time
+            time.sleep(240)
+            stop_wkld("kernbench")
         else:
             if background == "yes":
                 os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
                     % (benchmark_path, threads))
             else:
-                os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
-                    % (benchmark_path, threads))
+                if perf_test == "yes":
+                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
+                        % (benchmark_path, threads))
+                else:
+                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
+                        % (benchmark_path, threads))
+                    # We have to delete import in future
+                    import time
+                    time.sleep(240)
+                    stop_wkld("kernbench")
         
         print "INFO: Workload kernbench triggerd"
         os.chdir(olddir)
-        #get_proc_data(stats_stop)
-        #get_proc_loc_count(intr_stop)
     except Exception, details:
         print "Workload kernbench trigger failed ", details
         sys.exit(1)
    
-def trigger_workld(sched_smt, workload, stress, duration, background, pinned):
+def trigger_workld(sched_smt, workload, stress, duration, background, pinned, perf_test):
     ''' Triggers workload passed as argument. Number of threads 
         triggered is based on stress value.
     '''
@@ -360,7 +446,7 @@ def trigger_workld(sched_smt, workload, 
         if workload == "ebizzy":
             trigger_ebizzy (sched_smt, stress, duration, background, pinned)
         if workload == "kernbench":
-            trigger_kernbench (sched_smt, stress, background, pinned)
+            trigger_kernbench (sched_smt, stress, background, pinned, perf_test)
     except Exception, details:
         print "INFO: Trigger workload failed", details
         sys.exit(1)
@@ -434,7 +520,7 @@ def generate_report():
             print >> keyvalfile, "package-%s=%3.4f" % \
 		(pkg, (float(total_idle)*100/total))
     except Exception, details:
-        print "Generating reportfile failed: ", details
+        print "Generating utilization report failed: ", details
         sys.exit(1)
 
     #Add record delimiter '\n' before closing these files
@@ -454,20 +540,18 @@ def generate_loc_intr_report():
 
         get_proc_loc_count(intr_stop)
 
-        print "Before substracting"
-        for i in range(0, cpu_count):
-            print "CPU",i, intr_start[i], intr_stop[i]
-            reportfile = open('/procstat/cpu-loc_interrupts', 'a')
-            print >> reportfile, "=============================================="
-            print >> reportfile, "     Local timer interrupt stats              "
-            print >> reportfile, "=============================================="
+        reportfile = open('/procstat/cpu-loc_interrupts', 'a')
+        print >> reportfile, "=============================================="
+        print >> reportfile, "     Local timer interrupt stats              "
+        print >> reportfile, "=============================================="
+
         for i in range(0, cpu_count):
             intr_stop[i] =  int(intr_stop[i]) - int(intr_start[i])
             print >> reportfile, "CPU%s: %s" %(i, intr_stop[i])
         print >> reportfile
         reportfile.close()
     except Exception, details:
-        print "Generating reportfile failed: ", details
+        print "Generating interrupt report failed: ", details
         sys.exit(1)
 
 def record_loc_intr_count():
@@ -542,25 +626,24 @@ def validate_cpugrp_map(cpu_group, sched
                                 modi_cpu_grp.remove(core_cpus[i]) 
                                 if len(modi_cpu_grp) == 0:
                                     return 0
-                            else:
+                            #This code has to be deleted 
+                            #else:
                                 # If sched_smt == 0 then its oky if threads run
                                 # in different cores of same package 
-                                if sched_smt_level == 1:
-                                    sys.exit(1)
-                                else:
-                                    if len(cpu_group) == 2 and \
-                                        len(modi_cpu_grp) < len(cpu_group):
-                                        print "INFO:CPUs utilized not in a core"
-                                        return 1                                        
-            print "INFO: CPUs utilized is not in same package or core"
-            return(1)
+                                #if sched_smt_level > 0 :
+                                    #return 1
 	else:
             for pkg in sorted(cpu_map.keys()):
                 pkg_cpus = cpu_map[pkg]
-                if pkg_cpus == cpu_group:
-                    return(0)
-                 
-            return(1) 
+                if len(cpu_group) == len(pkg_cpus):
+                    if pkg_cpus == cpu_group:
+                        return(0)
+                else:
+                    if int(cpus_utilized[0]) in cpu_map[pkg] or int(cpus_utilized[1]) in cpu_map[pkg]:
+                        return(0)
+
+        return(1) 
+
     except Exception, details:
         print "Exception in validate_cpugrp_map: ", details
         sys.exit(1)
@@ -605,36 +688,70 @@ def verify_sched_domain_dmesg(sched_mc_l
         print "Reading dmesg failed", details
         sys.exit(1)
 
-def validate_cpu_consolidation(work_ld, sched_mc_level, sched_smt_level):
+def get_cpu_utilization(cpu):
+    ''' Return cpu utilization of cpu_id
+    '''
+    try:
+        for l in sorted(stats_percentage.keys()):
+            if cpu == stats_percentage[l][0]:
+                return stats_percentage[l][1]
+        return -1
+    except Exception, details:
+        print "Exception in get_cpu_utilization", details
+        sys.exit(1)
+
+def validate_cpu_consolidation(stress, work_ld, sched_mc_level, sched_smt_level):
     ''' Verify if cpu's on which threads executed belong to same
     package
     '''
     cpus_utilized = list()
+    threads = get_job_count(stress, work_ld, sched_smt_level)
     try:
         for l in sorted(stats_percentage.keys()):
             #modify threshold
+            cpu_id = stats_percentage[l][0].split("cpu")
+            if cpu_id[1] == '':
+                continue
+            if int(cpu_id[1]) in cpus_utilized:
+                continue
             if is_hyper_threaded():
-                if stats_percentage[l][1] > 25 and work_ld == "kernbench":
-                    cpu_id = stats_percentage[l][0].split("cpu")
-                    if cpu_id[1] != '':
+                if work_ld == "kernbench" and sched_smt_level < sched_mc_level:
+                    siblings = get_siblings(cpu_id[1])
+                    if siblings != "":
+                        sib_list = siblings.split()
+                        utilization = int(stats_percentage[l][1])
+                        for i in range(0, len(sib_list)):
+                            utilization += int(get_cpu_utilization("cpu%s" %sib_list[i])) 
+                    else:
+                        utilization = stats_percentage[l][1]
+                    if utilization > 40:
                         cpus_utilized.append(int(cpu_id[1]))
+                        if siblings != "":
+                            for i in range(0, len(sib_list)):
+                                cpus_utilized.append(int(sib_list[i]))
                 else:
-                    if stats_percentage[l][1] > 70:
-                        cpu_id = stats_percentage[l][0].split("cpu")
-                        if cpu_id[1] != '':
-                            cpus_utilized.append(int(cpu_id[1]))
+                    # This threshold wuld be modified based on results
+                    if stats_percentage[l][1] > 40:
+                        cpus_utilized.append(int(cpu_id[1]))
             else:
-                if stats_percentage[l][1] > 70:
-                    cpu_id = stats_percentage[l][0].split("cpu")
-                    if cpu_id[1] != '':
+                if work_ld == "kernbench" :
+                    if stats_percentage[l][1] > 50:
                         cpus_utilized.append(int(cpu_id[1]))
-                    cpus_utilized.sort()
+                else:
+                    if stats_percentage[l][1] > 70:
+                        cpus_utilized.append(int(cpu_id[1]))
+            cpus_utilized.sort()
         print "INFO: CPU's utilized ", cpus_utilized
 
+        # If length of CPU's utilized is not = number of jobs exit with 1
+        if len(cpus_utilized) < threads:
+            return 1
+
         status = validate_cpugrp_map(cpus_utilized, sched_mc_level, \
             sched_smt_level)
         if status == 1:
             print "INFO: CPUs utilized is not in same package or core"
+
         return(status)
     except Exception, details:
         print "Exception in validate_cpu_consolidation: ", details
@@ -645,7 +762,8 @@ def get_cpuid_max_intr_count():
     try:
         highest = 0
         second_highest = 0
-        global cpu1_max_intr, cpu2_max_intr
+        cpus_utilized = []
+        
         #Skipping CPU0 as it is generally high
         for i in range(1, cpu_count):
             if int(intr_stop[i]) > int(highest):
@@ -658,15 +776,19 @@ def get_cpuid_max_intr_count():
                 if int(intr_stop[i]) > int(second_highest):
                     second_highest = int(intr_stop[i])
                     cpu2_max_intr = i
+        cpus_utilized.append(cpu1_max_intr)
+        cpus_utilized.append(cpu2_max_intr)
+        
         for i in range(1, cpu_count):
             if i != cpu1_max_intr and i != cpu2_max_intr:
                 diff = second_highest - intr_stop[i]
                 ''' Threshold of difference has to be manipulated '''
                 if diff < 10000:
                     print "INFO: Diff in interrupt count is below threshold"
-                    return 1
+                    cpus_utilized = []
+                    return cpus_utilized
         print "INFO: Interrupt count in other CPU's low as expected"
-        return 0 
+        return cpus_utilized
     except Exception, details:
         print "Exception in get_cpuid_max_intr_count: ", details
         sys.exit(1)
@@ -675,14 +797,12 @@ def validate_ilb (sched_mc_level, sched_
     ''' Validate if ilb is running in same package where work load is running
     '''
     try:
-        status = get_cpuid_max_intr_count()
-        if status == 1:
+        cpus_utilized = get_cpuid_max_intr_count()
+        if not cpus_utilized:
             return 1
-        for pkg in sorted(cpu_map.keys()):
-            if cpu1_max_intr in cpu_map[pkg] and cpu2_max_intr in cpu_map[pkg]:
-                return 0
-        print "INFO: CPUs with higher interrupt count is not in same package"
-        return 1
+       
+        status = validate_cpugrp_map(cpus_utilized, sched_mc_level, sched_smt_level)
+        return status
     except Exception, details:
         print "Exception in validate_ilb: ", details
         sys.exit(1)
@@ -706,3 +826,14 @@ def reset_schedsmt():
     except OSError, e:
         print "Could not set sched_smt_power_savings to 0", e
         sys.exit(1)
+
+def stop_wkld(work_ld):
+    ''' Kill workload triggered in background
+    '''
+    try:
+        os.system('pkill %s 2>/dev/null' %work_ld)
+        if work_ld == "kernbench":
+            os.system('pkill make 2>/dev/null')
+    except OSError, e:
+        print "Exception in stop_wkld", e
+        sys.exit(1)

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload.
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
@ 2009-10-13  7:43 ` Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
  2009-10-13  7:44 ` [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase Poornima Nayak
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:43 UTC (permalink / raw)
  To: ltp-list, svaidy, ego, arun

Modified ilb test to run with ebizzy as default workload.
 
Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/testcases/kernel/power_management/ilb_test.py ltp-full-20090930_patched/testcases/kernel/power_management/ilb_test.py
--- ltp-full-20090930/testcases/kernel/power_management/ilb_test.py	2009-10-05 02:10:56.000000000 -0400
+++ ltp-full-20090930_patched/testcases/kernel/power_management/ilb_test.py	2009-10-12 23:05:40.000000000 -0400
@@ -27,7 +27,7 @@ def main(argv=None):
     parser.add_option("-t", "--smt_level", dest="smt_level",
         default=0, help="Sched smt power saving value 0/1/2")
     parser.add_option("-w", "--workload", dest="work_ld",
-        default="kernbench", help="Workload can be ebizzy/kernbench")
+        default="ebizzy", help="Workload can be ebizzy/kernbench")
     (options, args) = parser.parse_args()
 
     try:
@@ -40,10 +40,10 @@ def main(argv=None):
         map_cpuid_pkgid()
         print "INFO: Created table mapping cpu to package"
         background="no"
-        duration=60
+        duration=120
         pinned="yes"
 
-        trigger_workld(options.smt_level,options.work_ld, "single_job", duration, background, pinned)
+        trigger_workld(options.smt_level,options.work_ld, "single_job", duration, background, pinned, "no")
         generate_loc_intr_report()
         status = validate_ilb(options.mc_level, options.smt_level)
         reset_schedmc()

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
  2009-10-13  7:43 ` [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload Poornima Nayak
@ 2009-10-13  7:44 ` Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
  2009-10-13  7:44 ` [LTP] [Patch 5/6] Modified master script to pass appropriate arguments Poornima Nayak
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:44 UTC (permalink / raw)
  To: ltp-list, arun, svaidy, ego

We can pass additional argument performance to use the same testcase for 
Performance test. Fixed issues in cpu consolidation test.

Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/testcases/kernel/power_management/cpu_consolidation.py ltp-full-20090930_patched/testcases/kernel/power_management/cpu_consolidation.py
--- ltp-full-20090930/testcases/kernel/power_management/cpu_consolidation.py	2009-10-05 02:10:56.000000000 -0400
+++ ltp-full-20090930_patched/testcases/kernel/power_management/cpu_consolidation.py	2009-10-12 23:03:04.000000000 -0400
@@ -34,40 +34,37 @@ def main(argv=None):
         default="ebizzy", help="Workload can be ebizzy/kernbench")
     parser.add_option("-s", "--stress", dest="stress",
         default="partial", help="Load on system is full/partial [i.e 50%]/thread")
+    parser.add_option("-p", "--performance", dest="perf_test",
+        default=False, action="store_true", help="Enable performance test")
     (options, args) = parser.parse_args()
 
     try:
         count_num_cpu()
         count_num_sockets()
-        # User would set option -v / -vc / -vt to test cpu consolidation
-        # gets disabled when sched_mc &(/) sched_smt is disabled when
-        # workload is already running in the system 
+        if is_hyper_threaded():
+            generate_sibling_list()
+        
+        # User should set option -v to test cpu consolidation
+        # resets when sched_mc &(/) sched_smt is disabled when
+        # workload is already running in the system
+ 
         if options.vary_mc_smt:
 
             # Since same code is used for testing package consolidation and core
             # consolidation is_multi_socket & is_hyper_threaded check is done
-            if is_multi_socket():
-                if options.mc_value:
-                    set_sched_mc_power(options.mc_value)
-                    mc_value=int(options.mc_value)
-                else:    
-                    set_sched_mc_power(1)
-                    mc_value=int(options.mc_value)
-            if is_hyper_threaded():
-                if options.smt_value:
-                    set_sched_smt_power(options.smt_value)
-                    smt_value=int(options.smt_value)
-                else:
-                    set_sched_smt_power(1)
-                    smt_value=1
+            if is_multi_socket() and is_multi_core() and options.mc_value:
+                set_sched_mc_power(options.mc_value)
+
+            if is_hyper_threaded() and options.smt_value:
+                set_sched_smt_power(options.smt_value)
 
             #Generate arguments for trigger workload, run workload in background
             map_cpuid_pkgid()
             background="yes"
             duration=360
             pinned="no"
-            if int(options.mc_value) < 2:
-                trigger_ebizzy (smt_value, "partial", duration, background, pinned)
+            if int(options.mc_value) < 2 and int(options.smt_value) < 2:
+                trigger_ebizzy (options.smt_value, "partial", duration, background, pinned)
                 work_ld="ebizzy"
                 #Wait for 120 seconds and then validate cpu consolidation works
                 #When sched_mc & sched_smt is set
@@ -76,27 +73,36 @@ def main(argv=None):
             else:
                 #Wait for 120 seconds and then validate cpu consolidation works
                 #When sched_mc & sched_smt is set
-                trigger_kernbench (smt_value, "partial", background, pinned) 
+                trigger_kernbench (options.smt_value, "partial", background, pinned, "no") 
                 work_ld="kernbench"
                 import time
-                time.sleep(240)
+                time.sleep(300)
 
             generate_report()
-            status = validate_cpu_consolidation(work_ld, mc_value, smt_value)
+            status = validate_cpu_consolidation("partial", work_ld, options.mc_value, options.smt_value)
             if status == 0:
                 print "INFO: Consolidation worked sched_smt &(/) sched_mc is set"
                 #Disable sched_smt & sched_mc interface values
-                if (options.vary_mc_smt and options.mc_value) and is_multi_socket():
+                if options.vary_mc_smt and options.mc_value > 0:
                     set_sched_mc_power(0)
-                    #Reset sched_smt bcoz when sched_smt is set process still
-                    #continue to consolidate
-                    if is_hyper_threaded():
-                        set_sched_smt_power(0)
-                if (options.vary_mc_smt and options.smt_value) and is_hyper_threaded():
+                    mc_value = options.mc_value
+                else:
+                    mc_value = 0
+                if options.vary_mc_smt and options.smt_value > 0 and is_hyper_threaded():
                     set_sched_smt_power(0)
-                time.sleep(120)
+                    smt_value = options.smt_value
+                else:
+                    smt_value = 0
+
+                if work_ld == "kernbench":
+                    time.sleep(240)
+                else:
+                    time.sleep(120)
+
                 generate_report()
-                status = validate_cpu_consolidation(options.work_ld,options.mc_value, options.smt_value)
+                status = validate_cpu_consolidation("partial", work_ld, mc_value, smt_value)
+                if background == "yes":
+                    stop_wkld(work_ld)
                 #CPU consolidation should fail as sched_mc &(/) sched_smt is disabled
                 if status == 1:
                     return(0)
@@ -113,16 +119,20 @@ sched_smt was enabled. This is pre-requi
                 set_sched_mc_power(options.mc_value)
             if is_hyper_threaded():
                 set_sched_smt_power(options.smt_value)
-                #Commented after observing changes in behaviour in 2.6.31-rc7
-                #stress="thread"
             map_cpuid_pkgid()
             print "INFO: Created table mapping cpu to package"
             background="no"
             duration=60
             pinned ="no"
-            trigger_workld( options.smt_value, options.work_ld, options.stress, duration, background, pinned)
+
+            if options.perf_test:
+                perf_test="yes"
+            else:
+                perf_test="no"
+
+            trigger_workld( options.smt_value, options.work_ld, options.stress, duration, background, pinned, perf_test)
             generate_report()
-            status = validate_cpu_consolidation(options.work_ld,options.mc_value, options.smt_value)
+            status = validate_cpu_consolidation(options.stress, options.work_ld,options.mc_value, options.smt_value)
             reset_schedmc()
             if is_hyper_threaded():
                 reset_schedsmt()

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [LTP] [Patch 5/6] Modified master script to pass appropriate arguments
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
                   ` (2 preceding siblings ...)
  2009-10-13  7:44 ` [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase Poornima Nayak
@ 2009-10-13  7:44 ` Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
  2009-10-14  1:38   ` Garrett Cooper
  2009-10-13  7:44 ` [LTP] [Patch 6/6] Patch to fix workload installation issue Poornima Nayak
  2009-10-13 10:12 ` [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Subrata Modak
  5 siblings, 2 replies; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:44 UTC (permalink / raw)
  To: ltp-list, svaidy, ego, arun

Modified master script to pass appropriate arguments for cpu consolidation
test cases. 

Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/testcases/kernel/power_management/runpwtests.sh ltp-full-20090930_patched/testcases/kernel/power_management/runpwtests.sh
--- ltp-full-20090930/testcases/kernel/power_management/runpwtests.sh	2009-10-05 02:10:56.000000000 -0400
+++ ltp-full-20090930_patched/testcases/kernel/power_management/runpwtests.sh	2009-10-12 22:43:46.000000000 -0400
@@ -210,7 +210,7 @@ fi
 if [ $# -gt 0 -a "$1" = "-exclusive" ]; then 
 	# Test CPU consolidation 
 	if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
-		for sched_mc in `seq 0 $max_sched_mc`; do
+		for sched_mc in `seq 0  $max_sched_mc`; do
 			: $(( TST_COUNT += 1 ))
 			sched_mc_pass_cnt=0
 			if [ $sched_mc -eq 2 ]; then
@@ -243,111 +243,108 @@ if [ $# -gt 0 -a "$1" = "-exclusive" ]; 
 		done
 
 	fi
-	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES ]; then
-		#Testcase to validate consolidation at core level
-		work_load="ebizzy"
-		sched_smt_pass_cnt=0
-		: $(( TST_COUNT += 1 ))
-		stress="thread"
-		for repeat_test in `seq 1  10`; do
-			if cpu_consolidation.py -c $sched_mc -t $sched_smt -w $work_load -s $stress; then
-				: $(( sched_smt_pass_cnt += 1 ))
-			fi
-		done
-		analyze_core_consolidation_result $sched_smt $work_load $sched_smt_pass_cnt
 
-		# Vary only sched_smt from 1 to 0 when workload is running and ensure that
-		# tasks do not consolidate to single core when sched_smt is set to 0
-		: $(( TST_COUNT += 1 ))
-		if cpu_consolidation.py -vt; then
-			tst_resm TPASS "CPU consolidation test by varying sched_smt"
-		else
-			tst_resm TFAIL "CPU consolidation test by varying sched_smt"
-		fi
+	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES -a $multi_core -eq $NO ]; then
+			#Testcase to validate consolidation at core level
+			for sched_smt in `seq 0 $max_sched_smt`; do
+				if [ $sched_smt -eq 2 ]; then
+				 	work_load="kernbench"
+				else	
+					work_load="ebizzy"
+				fi
+				sched_smt_pass_cnt=0
+				: $(( TST_COUNT += 1 ))
+				stress="thread"
+				for repeat_test in `seq 1  10`; do
+					if cpu_consolidation.py -t $sched_smt -w $work_load -s $stress; then
+						: $(( sched_smt_pass_cnt += 1 ))
+					fi
+				done
+				analyze_core_consolidation_result $sched_smt $sched_smt_pass_cnt
+			done
 	fi
 
 	# Verify threads consolidation stops when sched_mc &(/) sched_smt is disabled
     if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
-		: $(( TST_COUNT += 1 ))
-		# Vary sched_mc from 1 to 0 when workload is running and ensure that
-		# tasks do not consolidate to single package when sched_mc is set to 0
-		if cpu_consolidation.py -v -c 1; then
-            tst_resm TPASS "CPU consolidation test by varying sched_mc 1 to 0"
-        else
-            tst_resm TFAIL "CPU consolidation test by varying sched_mc 1 to 0"
-        fi
-
-		# Vary sched_mc from 2 to 0 when workload is running and ensure that
-        # tasks do not consolidate to single package when sched_mc is set to 0
-		: $(( TST_COUNT += 1 ))
-		if cpu_consolidation.py -v -c 2; then
-			tst_resm TPASS "CPU consolidation test by varying sched_mc 2 to 0"
-		else
-			tst_resm TFAIL "CPU consolidation test by varying sched_mc 2 to 0"
-		fi
-
-		if [ $hyper_threaded -eq $YES ]; then
-			# Vary sched_mc & sched_smt from 1 to 0 & 2 to 0 when workload is running and ensure that
-            # tasks do not consolidate to single package when sched_mc is set to 0
-            : $(( TST_COUNT += 1 ))
-			if cpu_consolidation.py -v -c 1 -t 1; then
-				tst_resm TPASS "CPU consolidation test by varying sched_mc \
-& sched_smt from 1 to 0"
-			else
-				tst_resm TFAIL "CPU consolidation test by varying sched_mc \
-& sched_smt from 1 to 0"
-			fi
-
+        for sched_mc in `seq 1  $max_sched_mc`; do
 			: $(( TST_COUNT += 1 ))
-			if cpu_consolidation.py -v -c 2 -t 2; then
-				tst_resm TPASS "CPU consolidation test by varying sched_mc \
- & sched_smt from 2 to 0"
-			else
-				tst_resm TFAIL "CPU consolidation test by varying sched_mc \
- & sched_smt from 2 to 0"
+		
+			# Vary sched_mc from 1/2 to 0 when workload is running and ensure that
+			# tasks do not consolidate to single package when sched_mc is set to 0
+			if cpu_consolidation.py -v -c $sched_mc; then
+            	tst_resm TPASS "CPU consolidation test by varying sched_mc $sched_mc to 0"
+        	else
+            	tst_resm TFAIL "CPU consolidation test by varying sched_mc $sched_mc to 0"
+        	fi
+
+			if [ $hyper_threaded -eq $YES ]; then
+				for sched_smt in `seq 1  $max_sched_smt`; do		
+					if [ $sched_smt -eq $sched_mc ]; then
+						# Vary sched_mc & sched_smt from 1 to 0 & 2 to 0 when workload is running and ensure that
+            			# tasks do not consolidate to single package when sched_mc is set to 0
+            			: $(( TST_COUNT += 1 ))
+						if cpu_consolidation.py -v -c $sched_mc -t $sched_smt; then
+							tst_resm TPASS "CPU consolidation test by varying sched_mc \
+& sched_smt from $sched_mc to 0"
+						else
+							tst_resm TFAIL "CPU consolidation test by varying sched_mc \
+& sched_smt from $sched_mc to 0"
+						fi
+					fi
+				done
 			fi
-		fi
+		done
 	fi
 
-    # Verify threads consolidation stops when is disabled in HT systems
+    # Verify threads consolidation stops when sched_smt is disabled in HT systems
 	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES ]; then
 		# Vary only sched_smt from 1 to 0 when workload is running and ensure that
 		# tasks do not consolidate to single core when sched_smt is set to 0
 		: $(( TST_COUNT += 1 ))
 		if cpu_consolidation.py -v -t 1; then
-			tst_resm TPASS "CPU consolidation test by varying sched_smt 1 to 0"
+			tst_resm TPASS "CPU consolidation test by varying sched_smt from 1 to 0"
 		else
-			tst_resm TFAIL "CPU consolidation test by varying sched_smt 1 to 0"
+			tst_resm TFAIL "CPU consolidation test by varying sched_smt from 1 to 0"
 		fi
+        
+        # Vary only sched_smt from 2 to 0 when workload is running and ensure that
+        # tasks do not consolidate to single core when sched_smt is set to 0
+        : $(( TST_COUNT += 1 )) 
+        if cpu_consolidation.py -v -t 2; then 
+            tst_resm TPASS "CPU consolidation test by varying sched_smt 2 to 0"
+        else
+            tst_resm TFAIL "CPU consolidation test by varying sched_smt 2 to 0"
+        fi
+
 	fi
 
 	# Verify ILB runs in same package as workload
     if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
-		for sched_mc in `seq 0 $max_sched_mc`; do
+		for sched_mc in `seq 1 $max_sched_mc`; do
 			: $(( TST_COUNT += 1 ))
-            ilb_test.py -c $sched_mc; RC=$?
+            if [ $sched_mc -eq 2 ]; then
+                work_load="kernbench"
+            else
+                work_load="ebizzy"
+            fi
+
+            ilb_test.py -c $sched_mc -w $work_load; RC=$?
 			if [ $RC -eq 0 ]; then
 				tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc"
 			else
-				if [ $sched_mc -eq 0 ]; then
-					tst_resm TPASS "ILB & workload is not in same package when sched_mc=0"
-				else
-					tst_resm TFAIL "ILB did not run in same package"
-				fi
+				tst_resm TFAIL "ILB & workload did not run in same package for sched_mc=$sched_mc\
+. Ensure CONFIG_NO_HZ is set"
 			fi
 			if [ $hyper_threaded -eq $YES ]; then
-				for sched_smt in `seq 0 $max_sched_smt`; do
+				for sched_smt in `seq 1 $max_sched_smt`; do
 					: $(( TST_COUNT += 1 ))
-					ilb_test.py -c $sched_mc -t sched_smt; RC=$?
+					ilb_test.py -c $sched_mc -t sched_smt -w $work_load; RC=$?
  					if [ $RC -eq 0 ]; then
-						tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc"
+						tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc \
+& sched_smt=$sched_smt"
 					else
-						if [ $sched_mc -eq 0 -a $sched_smt -eq 0 ]; then
-							tst_resm TPASS "ILB & workload is not in same package when\
-sched_mc & sched_smt is 0"
-						else
-							tst_resm TFAIL "ILB did not run in same package"    
-						fi
+						tst_resm TFAIL "ILB & workload did not execute in same package for \
+sched_mc=$sched_mc & sched_smt=$sched_smt. Ensure CONFIG_NO_HZ is set"    
 					fi
 				done
 			fi

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [LTP] [Patch 6/6] Patch to fix workload installation issue
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
                   ` (3 preceding siblings ...)
  2009-10-13  7:44 ` [LTP] [Patch 5/6] Modified master script to pass appropriate arguments Poornima Nayak
@ 2009-10-13  7:44 ` Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
  2009-10-13 10:12 ` [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Subrata Modak
  5 siblings, 1 reply; 17+ messages in thread
From: Poornima Nayak @ 2009-10-13  7:44 UTC (permalink / raw)
  To: ltp-list, arun, svaidy, ego

Patch to fix workload installation issue

Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

diff -uprN ltp-full-20090930/utils/Makefile ltp-full-20090930_patched/utils/Makefile
--- ltp-full-20090930/utils/Makefile	2009-10-05 02:10:46.000000000 -0400
+++ ltp-full-20090930_patched/utils/Makefile	2009-10-12 22:48:30.000000000 -0400
@@ -27,6 +27,6 @@ all: configure
 	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
 
 install:
-
+	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
 clean:
 	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 6/6] Patch to fix workload installation issue
  2009-10-13  7:44 ` [LTP] [Patch 6/6] Patch to fix workload installation issue Poornima Nayak
@ 2009-10-13 10:12   ` Subrata Modak
  2009-10-14  1:39     ` Garrett Cooper
  0 siblings, 1 reply; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, svaidy, ego, arun

On Tue, 2009-10-13 at 13:14 +0530, Poornima Nayak wrote: 
> Patch to fix workload installation issue
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

This will not apply as all the LTP Makefiles has undergon huge changes.
Checkout the latest LTP and re-create this patch:

Hunk #1 FAILED at 27.
1 out of 1 hunk FAILED -- saving rejects to file utils/Makefile.rej

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/utils/Makefile ltp-full-20090930_patched/utils/Makefile
> --- ltp-full-20090930/utils/Makefile	2009-10-05 02:10:46.000000000 -0400
> +++ ltp-full-20090930_patched/utils/Makefile	2009-10-12 22:48:30.000000000 -0400
> @@ -27,6 +27,6 @@ all: configure
>  	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
> 
>  install:
> -
> +	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
>  clean:
>  	@set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 5/6] Modified master script to pass appropriate arguments
  2009-10-13  7:44 ` [LTP] [Patch 5/6] Modified master script to pass appropriate arguments Poornima Nayak
@ 2009-10-13 10:12   ` Subrata Modak
  2009-10-14  1:38   ` Garrett Cooper
  1 sibling, 0 replies; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, arun, svaidy, ego

On Tue, 2009-10-13 at 13:14 +0530, Poornima Nayak wrote: 
> Modified master script to pass appropriate arguments for cpu consolidation
> test cases. 
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Thanks.

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/runpwtests.sh ltp-full-20090930_patched/testcases/kernel/power_management/runpwtests.sh
> --- ltp-full-20090930/testcases/kernel/power_management/runpwtests.sh	2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/runpwtests.sh	2009-10-12 22:43:46.000000000 -0400
> @@ -210,7 +210,7 @@ fi
>  if [ $# -gt 0 -a "$1" = "-exclusive" ]; then 
>  	# Test CPU consolidation 
>  	if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
> -		for sched_mc in `seq 0 $max_sched_mc`; do
> +		for sched_mc in `seq 0  $max_sched_mc`; do
>  			: $(( TST_COUNT += 1 ))
>  			sched_mc_pass_cnt=0
>  			if [ $sched_mc -eq 2 ]; then
> @@ -243,111 +243,108 @@ if [ $# -gt 0 -a "$1" = "-exclusive" ]; 
>  		done
> 
>  	fi
> -	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES ]; then
> -		#Testcase to validate consolidation at core level
> -		work_load="ebizzy"
> -		sched_smt_pass_cnt=0
> -		: $(( TST_COUNT += 1 ))
> -		stress="thread"
> -		for repeat_test in `seq 1  10`; do
> -			if cpu_consolidation.py -c $sched_mc -t $sched_smt -w $work_load -s $stress; then
> -				: $(( sched_smt_pass_cnt += 1 ))
> -			fi
> -		done
> -		analyze_core_consolidation_result $sched_smt $work_load $sched_smt_pass_cnt
> 
> -		# Vary only sched_smt from 1 to 0 when workload is running and ensure that
> -		# tasks do not consolidate to single core when sched_smt is set to 0
> -		: $(( TST_COUNT += 1 ))
> -		if cpu_consolidation.py -vt; then
> -			tst_resm TPASS "CPU consolidation test by varying sched_smt"
> -		else
> -			tst_resm TFAIL "CPU consolidation test by varying sched_smt"
> -		fi
> +	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES -a $multi_core -eq $NO ]; then
> +			#Testcase to validate consolidation at core level
> +			for sched_smt in `seq 0 $max_sched_smt`; do
> +				if [ $sched_smt -eq 2 ]; then
> +				 	work_load="kernbench"
> +				else	
> +					work_load="ebizzy"
> +				fi
> +				sched_smt_pass_cnt=0
> +				: $(( TST_COUNT += 1 ))
> +				stress="thread"
> +				for repeat_test in `seq 1  10`; do
> +					if cpu_consolidation.py -t $sched_smt -w $work_load -s $stress; then
> +						: $(( sched_smt_pass_cnt += 1 ))
> +					fi
> +				done
> +				analyze_core_consolidation_result $sched_smt $sched_smt_pass_cnt
> +			done
>  	fi
> 
>  	# Verify threads consolidation stops when sched_mc &(/) sched_smt is disabled
>      if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
> -		: $(( TST_COUNT += 1 ))
> -		# Vary sched_mc from 1 to 0 when workload is running and ensure that
> -		# tasks do not consolidate to single package when sched_mc is set to 0
> -		if cpu_consolidation.py -v -c 1; then
> -            tst_resm TPASS "CPU consolidation test by varying sched_mc 1 to 0"
> -        else
> -            tst_resm TFAIL "CPU consolidation test by varying sched_mc 1 to 0"
> -        fi
> -
> -		# Vary sched_mc from 2 to 0 when workload is running and ensure that
> -        # tasks do not consolidate to single package when sched_mc is set to 0
> -		: $(( TST_COUNT += 1 ))
> -		if cpu_consolidation.py -v -c 2; then
> -			tst_resm TPASS "CPU consolidation test by varying sched_mc 2 to 0"
> -		else
> -			tst_resm TFAIL "CPU consolidation test by varying sched_mc 2 to 0"
> -		fi
> -
> -		if [ $hyper_threaded -eq $YES ]; then
> -			# Vary sched_mc & sched_smt from 1 to 0 & 2 to 0 when workload is running and ensure that
> -            # tasks do not consolidate to single package when sched_mc is set to 0
> -            : $(( TST_COUNT += 1 ))
> -			if cpu_consolidation.py -v -c 1 -t 1; then
> -				tst_resm TPASS "CPU consolidation test by varying sched_mc \
> -& sched_smt from 1 to 0"
> -			else
> -				tst_resm TFAIL "CPU consolidation test by varying sched_mc \
> -& sched_smt from 1 to 0"
> -			fi
> -
> +        for sched_mc in `seq 1  $max_sched_mc`; do
>  			: $(( TST_COUNT += 1 ))
> -			if cpu_consolidation.py -v -c 2 -t 2; then
> -				tst_resm TPASS "CPU consolidation test by varying sched_mc \
> - & sched_smt from 2 to 0"
> -			else
> -				tst_resm TFAIL "CPU consolidation test by varying sched_mc \
> - & sched_smt from 2 to 0"
> +		
> +			# Vary sched_mc from 1/2 to 0 when workload is running and ensure that
> +			# tasks do not consolidate to single package when sched_mc is set to 0
> +			if cpu_consolidation.py -v -c $sched_mc; then
> +            	tst_resm TPASS "CPU consolidation test by varying sched_mc $sched_mc to 0"
> +        	else
> +            	tst_resm TFAIL "CPU consolidation test by varying sched_mc $sched_mc to 0"
> +        	fi
> +
> +			if [ $hyper_threaded -eq $YES ]; then
> +				for sched_smt in `seq 1  $max_sched_smt`; do		
> +					if [ $sched_smt -eq $sched_mc ]; then
> +						# Vary sched_mc & sched_smt from 1 to 0 & 2 to 0 when workload is running and ensure that
> +            			# tasks do not consolidate to single package when sched_mc is set to 0
> +            			: $(( TST_COUNT += 1 ))
> +						if cpu_consolidation.py -v -c $sched_mc -t $sched_smt; then
> +							tst_resm TPASS "CPU consolidation test by varying sched_mc \
> +& sched_smt from $sched_mc to 0"
> +						else
> +							tst_resm TFAIL "CPU consolidation test by varying sched_mc \
> +& sched_smt from $sched_mc to 0"
> +						fi
> +					fi
> +				done
>  			fi
> -		fi
> +		done
>  	fi
> 
> -    # Verify threads consolidation stops when is disabled in HT systems
> +    # Verify threads consolidation stops when sched_smt is disabled in HT systems
>  	if [ $hyper_threaded -eq $YES -a $multi_socket -eq $YES ]; then
>  		# Vary only sched_smt from 1 to 0 when workload is running and ensure that
>  		# tasks do not consolidate to single core when sched_smt is set to 0
>  		: $(( TST_COUNT += 1 ))
>  		if cpu_consolidation.py -v -t 1; then
> -			tst_resm TPASS "CPU consolidation test by varying sched_smt 1 to 0"
> +			tst_resm TPASS "CPU consolidation test by varying sched_smt from 1 to 0"
>  		else
> -			tst_resm TFAIL "CPU consolidation test by varying sched_smt 1 to 0"
> +			tst_resm TFAIL "CPU consolidation test by varying sched_smt from 1 to 0"
>  		fi
> +        
> +        # Vary only sched_smt from 2 to 0 when workload is running and ensure that
> +        # tasks do not consolidate to single core when sched_smt is set to 0
> +        : $(( TST_COUNT += 1 )) 
> +        if cpu_consolidation.py -v -t 2; then 
> +            tst_resm TPASS "CPU consolidation test by varying sched_smt 2 to 0"
> +        else
> +            tst_resm TFAIL "CPU consolidation test by varying sched_smt 2 to 0"
> +        fi
> +
>  	fi
> 
>  	# Verify ILB runs in same package as workload
>      if [ $multi_socket -eq $YES -a $multi_core -eq $YES ]; then
> -		for sched_mc in `seq 0 $max_sched_mc`; do
> +		for sched_mc in `seq 1 $max_sched_mc`; do
>  			: $(( TST_COUNT += 1 ))
> -            ilb_test.py -c $sched_mc; RC=$?
> +            if [ $sched_mc -eq 2 ]; then
> +                work_load="kernbench"
> +            else
> +                work_load="ebizzy"
> +            fi
> +
> +            ilb_test.py -c $sched_mc -w $work_load; RC=$?
>  			if [ $RC -eq 0 ]; then
>  				tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc"
>  			else
> -				if [ $sched_mc -eq 0 ]; then
> -					tst_resm TPASS "ILB & workload is not in same package when sched_mc=0"
> -				else
> -					tst_resm TFAIL "ILB did not run in same package"
> -				fi
> +				tst_resm TFAIL "ILB & workload did not run in same package for sched_mc=$sched_mc\
> +. Ensure CONFIG_NO_HZ is set"
>  			fi
>  			if [ $hyper_threaded -eq $YES ]; then
> -				for sched_smt in `seq 0 $max_sched_smt`; do
> +				for sched_smt in `seq 1 $max_sched_smt`; do
>  					: $(( TST_COUNT += 1 ))
> -					ilb_test.py -c $sched_mc -t sched_smt; RC=$?
> +					ilb_test.py -c $sched_mc -t sched_smt -w $work_load; RC=$?
>   					if [ $RC -eq 0 ]; then
> -						tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc"
> +						tst_resm TPASS "ILB & workload in same package for sched_mc=$sched_mc \
> +& sched_smt=$sched_smt"
>  					else
> -						if [ $sched_mc -eq 0 -a $sched_smt -eq 0 ]; then
> -							tst_resm TPASS "ILB & workload is not in same package when\
> -sched_mc & sched_smt is 0"
> -						else
> -							tst_resm TFAIL "ILB did not run in same package"    
> -						fi
> +						tst_resm TFAIL "ILB & workload did not execute in same package for \
> +sched_mc=$sched_mc & sched_smt=$sched_smt. Ensure CONFIG_NO_HZ is set"    
>  					fi
>  				done
>  			fi
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase
  2009-10-13  7:44 ` [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase Poornima Nayak
@ 2009-10-13 10:12   ` Subrata Modak
  0 siblings, 0 replies; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, svaidy, ego, arun

On Tue, 2009-10-13 at 13:14 +0530, Poornima Nayak wrote: 
> We can pass additional argument performance to use the same testcase for 
> Performance test. Fixed issues in cpu consolidation test.
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Thanks.

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/cpu_consolidation.py ltp-full-20090930_patched/testcases/kernel/power_management/cpu_consolidation.py
> --- ltp-full-20090930/testcases/kernel/power_management/cpu_consolidation.py	2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/cpu_consolidation.py	2009-10-12 23:03:04.000000000 -0400
> @@ -34,40 +34,37 @@ def main(argv=None):
>          default="ebizzy", help="Workload can be ebizzy/kernbench")
>      parser.add_option("-s", "--stress", dest="stress",
>          default="partial", help="Load on system is full/partial [i.e 50%]/thread")
> +    parser.add_option("-p", "--performance", dest="perf_test",
> +        default=False, action="store_true", help="Enable performance test")
>      (options, args) = parser.parse_args()
> 
>      try:
>          count_num_cpu()
>          count_num_sockets()
> -        # User would set option -v / -vc / -vt to test cpu consolidation
> -        # gets disabled when sched_mc &(/) sched_smt is disabled when
> -        # workload is already running in the system 
> +        if is_hyper_threaded():
> +            generate_sibling_list()
> +        
> +        # User should set option -v to test cpu consolidation
> +        # resets when sched_mc &(/) sched_smt is disabled when
> +        # workload is already running in the system
> + 
>          if options.vary_mc_smt:
> 
>              # Since same code is used for testing package consolidation and core
>              # consolidation is_multi_socket & is_hyper_threaded check is done
> -            if is_multi_socket():
> -                if options.mc_value:
> -                    set_sched_mc_power(options.mc_value)
> -                    mc_value=int(options.mc_value)
> -                else:    
> -                    set_sched_mc_power(1)
> -                    mc_value=int(options.mc_value)
> -            if is_hyper_threaded():
> -                if options.smt_value:
> -                    set_sched_smt_power(options.smt_value)
> -                    smt_value=int(options.smt_value)
> -                else:
> -                    set_sched_smt_power(1)
> -                    smt_value=1
> +            if is_multi_socket() and is_multi_core() and options.mc_value:
> +                set_sched_mc_power(options.mc_value)
> +
> +            if is_hyper_threaded() and options.smt_value:
> +                set_sched_smt_power(options.smt_value)
> 
>              #Generate arguments for trigger workload, run workload in background
>              map_cpuid_pkgid()
>              background="yes"
>              duration=360
>              pinned="no"
> -            if int(options.mc_value) < 2:
> -                trigger_ebizzy (smt_value, "partial", duration, background, pinned)
> +            if int(options.mc_value) < 2 and int(options.smt_value) < 2:
> +                trigger_ebizzy (options.smt_value, "partial", duration, background, pinned)
>                  work_ld="ebizzy"
>                  #Wait for 120 seconds and then validate cpu consolidation works
>                  #When sched_mc & sched_smt is set
> @@ -76,27 +73,36 @@ def main(argv=None):
>              else:
>                  #Wait for 120 seconds and then validate cpu consolidation works
>                  #When sched_mc & sched_smt is set
> -                trigger_kernbench (smt_value, "partial", background, pinned) 
> +                trigger_kernbench (options.smt_value, "partial", background, pinned, "no") 
>                  work_ld="kernbench"
>                  import time
> -                time.sleep(240)
> +                time.sleep(300)
> 
>              generate_report()
> -            status = validate_cpu_consolidation(work_ld, mc_value, smt_value)
> +            status = validate_cpu_consolidation("partial", work_ld, options.mc_value, options.smt_value)
>              if status == 0:
>                  print "INFO: Consolidation worked sched_smt &(/) sched_mc is set"
>                  #Disable sched_smt & sched_mc interface values
> -                if (options.vary_mc_smt and options.mc_value) and is_multi_socket():
> +                if options.vary_mc_smt and options.mc_value > 0:
>                      set_sched_mc_power(0)
> -                    #Reset sched_smt bcoz when sched_smt is set process still
> -                    #continue to consolidate
> -                    if is_hyper_threaded():
> -                        set_sched_smt_power(0)
> -                if (options.vary_mc_smt and options.smt_value) and is_hyper_threaded():
> +                    mc_value = options.mc_value
> +                else:
> +                    mc_value = 0
> +                if options.vary_mc_smt and options.smt_value > 0 and is_hyper_threaded():
>                      set_sched_smt_power(0)
> -                time.sleep(120)
> +                    smt_value = options.smt_value
> +                else:
> +                    smt_value = 0
> +
> +                if work_ld == "kernbench":
> +                    time.sleep(240)
> +                else:
> +                    time.sleep(120)
> +
>                  generate_report()
> -                status = validate_cpu_consolidation(options.work_ld,options.mc_value, options.smt_value)
> +                status = validate_cpu_consolidation("partial", work_ld, mc_value, smt_value)
> +                if background == "yes":
> +                    stop_wkld(work_ld)
>                  #CPU consolidation should fail as sched_mc &(/) sched_smt is disabled
>                  if status == 1:
>                      return(0)
> @@ -113,16 +119,20 @@ sched_smt was enabled. This is pre-requi
>                  set_sched_mc_power(options.mc_value)
>              if is_hyper_threaded():
>                  set_sched_smt_power(options.smt_value)
> -                #Commented after observing changes in behaviour in 2.6.31-rc7
> -                #stress="thread"
>              map_cpuid_pkgid()
>              print "INFO: Created table mapping cpu to package"
>              background="no"
>              duration=60
>              pinned ="no"
> -            trigger_workld( options.smt_value, options.work_ld, options.stress, duration, background, pinned)
> +
> +            if options.perf_test:
> +                perf_test="yes"
> +            else:
> +                perf_test="no"
> +
> +            trigger_workld( options.smt_value, options.work_ld, options.stress, duration, background, pinned, perf_test)
>              generate_report()
> -            status = validate_cpu_consolidation(options.work_ld,options.mc_value, options.smt_value)
> +            status = validate_cpu_consolidation(options.stress, options.work_ld,options.mc_value, options.smt_value)
>              reset_schedmc()
>              if is_hyper_threaded():
>                  reset_schedsmt()
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload.
  2009-10-13  7:43 ` [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload Poornima Nayak
@ 2009-10-13 10:12   ` Subrata Modak
  0 siblings, 0 replies; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, arun, svaidy, ego

On Tue, 2009-10-13 at 13:13 +0530, Poornima Nayak wrote: 
> Modified ilb test to run with ebizzy as default workload.
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Thanks.

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/ilb_test.py ltp-full-20090930_patched/testcases/kernel/power_management/ilb_test.py
> --- ltp-full-20090930/testcases/kernel/power_management/ilb_test.py	2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/ilb_test.py	2009-10-12 23:05:40.000000000 -0400
> @@ -27,7 +27,7 @@ def main(argv=None):
>      parser.add_option("-t", "--smt_level", dest="smt_level",
>          default=0, help="Sched smt power saving value 0/1/2")
>      parser.add_option("-w", "--workload", dest="work_ld",
> -        default="kernbench", help="Workload can be ebizzy/kernbench")
> +        default="ebizzy", help="Workload can be ebizzy/kernbench")
>      (options, args) = parser.parse_args()
> 
>      try:
> @@ -40,10 +40,10 @@ def main(argv=None):
>          map_cpuid_pkgid()
>          print "INFO: Created table mapping cpu to package"
>          background="no"
> -        duration=60
> +        duration=120
>          pinned="yes"
> 
> -        trigger_workld(options.smt_level,options.work_ld, "single_job", duration, background, pinned)
> +        trigger_workld(options.smt_level,options.work_ld, "single_job", duration, background, pinned, "no")
>          generate_loc_intr_report()
>          status = validate_ilb(options.mc_level, options.smt_level)
>          reset_schedmc()
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
@ 2009-10-13 10:12   ` Subrata Modak
  2009-10-14  2:17   ` Garrett Cooper
  2009-10-14  9:25   ` Gautham R Shenoy
  2 siblings, 0 replies; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, svaidy, ego, arun

On Tue, 2009-10-13 at 13:13 +0530, Poornima Nayak wrote: 
> CPU Consolidation verification function is fixed to handle variations in
> CPU utilization. Threshold is selected based on test conducted on 2.6.31 on
> dual core, quad core & hyper threaded system.
> Developed new function to generate hyper threaded siblings list and get job count
> for hyper threaded system and multisocket system.
> Modified kernbench workload execution time for 5 min, hence test execution time
> will be reduced further. Developed new functions to stop workload.
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Thanks.

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py
> --- ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py	2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py	2009-10-12 23:00:30.000000000 -0400
> @@ -22,6 +22,7 @@ socket_count = 0
>  cpu1_max_intr = 0
>  cpu2_max_intr = 0
>  intr_stat_timer_0 = []
> +siblings_list = []
> 
>  def clear_dmesg():
>      '''
> @@ -96,6 +97,36 @@ def is_hyper_threaded():
>          print "Failed to check if system is hyper-threaded"
>          sys.exit(1)
> 
> +def is_multi_core():
> +    ''' Return true if system has sockets has multiple cores
> +    '''
> +  
> +    try:
> +        file_cpuinfo = open("/proc/cpuinfo", 'r')
> +        for line in file_cpuinfo:
> +            if line.startswith('siblings'):
> +                siblings = line.split(":")
> +            if line.startswith('cpu cores'):
> +                cpu_cores = line.split(":")
> +                break
> +       
> +        if int( siblings[1] ) == int( cpu_cores[1] ): 
> +            if int( cpu_cores[1] ) > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        else:
> +            num_of_cpus = int(siblings[1]) / int(cpu_cores[1])
> +            if num_of_cpus > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        file_cpuinfo.close()
> +        return multi_core
> +    except Exception:
> +        print "Failed to check if system is multi core system"
> +        sys.exit(1)
> +
>  def get_hyper_thread_count():
>      ''' Return number of threads in CPU. For eg for x3950 this function
>          would return 2. In future if 4 threads are supported in CPU, this
> @@ -153,6 +184,40 @@ def map_cpuid_pkgid():
>                  sys.exit(1)
> 
> 
> +def generate_sibling_list():
> +    ''' Routine to generate siblings list
> +    '''
> +    try:
> +        for i in range(0, cpu_count):
> +            siblings_file = '/sys/devices/system/cpu/cpu%s' % i
> +            siblings_file += '/topology/thread_siblings_list'
> +            threads_sibs = open(siblings_file).read().rstrip()
> +            thread_ids = threads_sibs.split("-")
> +    
> +            if not thread_ids in siblings_list:
> +                siblings_list.append(thread_ids)
> +    except Exception, details:
> +        print "Exception in generate_siblings_list", details
> +        sys.exit(1)
> +
> +def get_siblings(cpu_id):
> +    ''' Return siblings of cpu_id
> +    '''
> +    try:
> +        cpus = ""
> +        for i in range(0, len(siblings_list)):
> +            for cpu in siblings_list[i]:
> +                if cpu_id == cpu:
> +                    for j in siblings_list[i]:
> +                        # Exclude cpu_id in the list of siblings
> +                        if j != cpu_id:
> +                            cpus += j
> +                    return cpus
> +        return cpus
> +    except Exception, details:
> +        print "Exception in get_siblings", details
> +        sys.exit(1)
> +
>  def get_proc_data(stats_list):
>      ''' Read /proc/stat info and store in dictionary
>      '''
> @@ -168,18 +233,18 @@ def get_proc_data(stats_list):
>          sys.exit(1)
> 
>  def get_proc_loc_count(loc_stats):
> -    ''' Read /proc/stat info and store in dictionary
> +    ''' Read /proc/interrupts info and store in list
>      '''
>      try:
>          file_procstat = open("/proc/interrupts", 'r')
>          for line in file_procstat:
> -            if line.startswith('LOC:'):
> +            if line.startswith(' LOC:') or line.startswith('LOC:'):
>                  data = line.split()
>                  for i in range(0, cpu_count):
>                      # To skip LOC
>                      loc_stats.append(data[i+1])
> -                    print data[i+1]
> -        file_procstat.close()
> +                file_procstat.close()
> +                return
>      except Exception, details:
>          print "Could not read interrupt statistics", details
>          sys.exit(1)
> @@ -192,6 +257,8 @@ def set_sched_mc_power(sched_mc_level):
>          os.system('echo %s > \
>              /sys/devices/system/cpu/sched_mc_power_savings 2>/dev/null'
>              % sched_mc_level)
> +
> +        get_proc_data(stats_start)
>      except OSError, e:
>          print "Could not set sched_mc_power_savings to", sched_mc_level, e
>  	sys.exit(1)
> @@ -203,6 +270,8 @@ def set_sched_smt_power(sched_smt_level)
>          os.system('echo %s > \
>              /sys/devices/system/cpu/sched_smt_power_savings 2>/dev/null'
>              % sched_smt_level)
> +
> +        get_proc_data(stats_start)
>      except OSError, e:
>          print "Could not set sched_smt_power_savings to", sched_smt_level, e
>  	sys.exit(1)
> @@ -218,21 +287,36 @@ def set_timer_migration_interface(value)
>          print "Could not set timer_migration to ", value, e
>          sys.exit(1)
> 
> -def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> -    ''' Triggers ebizzy workload for sched_mc=1
> -        testing
> +def get_job_count(stress, workload, sched_smt):
> +    ''' Returns number of jobs/threads to be triggered
>      '''
> +    
>      try:
>          if stress == "thread":
>              threads = get_hyper_thread_count()
>          if stress == "partial":
>              threads = cpu_count / socket_count
> +            if is_hyper_threaded():
> +                if workload == "ebizzy" and int(sched_smt) ==0:
> +                    threads = threads / get_hyper_thread_count()
> +                if workload == "kernbench" and int(sched_smt) < 2:
> +                    threads = threads / get_hyper_thread_count()    
>          if stress == "full":
> -	    threads = cpu_count
> +            threads = cpu_count
>          if stress == "single_job":
>              threads = 1
>              duration = 180
> +        return threads
> +    except Exception, details:
> +        print "get job count failed ", details
> +        sys.exit(1)
> 
> +def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> +    ''' Triggers ebizzy workload for sched_mc=1
> +        testing
> +    '''
> +    try:
> +        threads = get_job_count(stress, "ebizzy", sched_smt)
>          olddir = os.getcwd()
>          path = '%s/utils/benchmark' % os.environ['LTPROOT']
>          os.chdir(path)
> @@ -282,23 +366,14 @@ def trigger_ebizzy (sched_smt, stress, d
>          print "Ebizzy workload trigger failed ", details
>          sys.exit(1)   
> 
> -def trigger_kernbench (sched_smt, stress, background, pinned):
> +def trigger_kernbench (sched_smt, stress, background, pinned, perf_test):
>      ''' Trigger load on system like kernbench.
>          Copys existing copy of LTP into as LTP2 and then builds it
>          with make -j
>      '''
>      olddir = os.getcwd()
>      try:
> -        if stress == "thread":
> -	    threads = 2
> -        if stress == "partial":
> -	    threads = cpu_count / socket_count
> -            if is_hyper_threaded() and int(sched_smt) !=2:
> -                threads = threads / get_hyper_thread_count()
> -        if stress == "full":
> -            threads = cpu_count
> -        if stress == "single_job":
> -            threads = 1
> +        threads = get_job_count(stress, "kernbench", sched_smt)
> 
>          dst_path = "/root"
>          olddir = os.getcwd()      
> @@ -335,24 +410,35 @@ def trigger_kernbench (sched_smt, stress
>          get_proc_loc_count(intr_start)
>          if pinned == "yes":
>              os.system ( 'taskset -c %s %s/kernbench -o %s -M -H -n 1 \
> -                >/dev/null 2>&1' % (cpu_count-1, benchmark_path, threads))
> +                >/dev/null 2>&1 &' % (cpu_count-1, benchmark_path, threads))
> +
> +            # We have to delete import in future
> +            import time
> +            time.sleep(240)
> +            stop_wkld("kernbench")
>          else:
>              if background == "yes":
>                  os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
>                      % (benchmark_path, threads))
>              else:
> -                os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> -                    % (benchmark_path, threads))
> +                if perf_test == "yes":
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> +                        % (benchmark_path, threads))
> +                else:
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
> +                        % (benchmark_path, threads))
> +                    # We have to delete import in future
> +                    import time
> +                    time.sleep(240)
> +                    stop_wkld("kernbench")
>          
>          print "INFO: Workload kernbench triggerd"
>          os.chdir(olddir)
> -        #get_proc_data(stats_stop)
> -        #get_proc_loc_count(intr_stop)
>      except Exception, details:
>          print "Workload kernbench trigger failed ", details
>          sys.exit(1)
>     
> -def trigger_workld(sched_smt, workload, stress, duration, background, pinned):
> +def trigger_workld(sched_smt, workload, stress, duration, background, pinned, perf_test):
>      ''' Triggers workload passed as argument. Number of threads 
>          triggered is based on stress value.
>      '''
> @@ -360,7 +446,7 @@ def trigger_workld(sched_smt, workload, 
>          if workload == "ebizzy":
>              trigger_ebizzy (sched_smt, stress, duration, background, pinned)
>          if workload == "kernbench":
> -            trigger_kernbench (sched_smt, stress, background, pinned)
> +            trigger_kernbench (sched_smt, stress, background, pinned, perf_test)
>      except Exception, details:
>          print "INFO: Trigger workload failed", details
>          sys.exit(1)
> @@ -434,7 +520,7 @@ def generate_report():
>              print >> keyvalfile, "package-%s=%3.4f" % \
>  		(pkg, (float(total_idle)*100/total))
>      except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating utilization report failed: ", details
>          sys.exit(1)
> 
>      #Add record delimiter '\n' before closing these files
> @@ -454,20 +540,18 @@ def generate_loc_intr_report():
> 
>          get_proc_loc_count(intr_stop)
> 
> -        print "Before substracting"
> -        for i in range(0, cpu_count):
> -            print "CPU",i, intr_start[i], intr_stop[i]
> -            reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> -            print >> reportfile, "=============================================="
> -            print >> reportfile, "     Local timer interrupt stats              "
> -            print >> reportfile, "=============================================="
> +        reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> +        print >> reportfile, "=============================================="
> +        print >> reportfile, "     Local timer interrupt stats              "
> +        print >> reportfile, "=============================================="
> +
>          for i in range(0, cpu_count):
>              intr_stop[i] =  int(intr_stop[i]) - int(intr_start[i])
>              print >> reportfile, "CPU%s: %s" %(i, intr_stop[i])
>          print >> reportfile
>          reportfile.close()
>      except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating interrupt report failed: ", details
>          sys.exit(1)
> 
>  def record_loc_intr_count():
> @@ -542,25 +626,24 @@ def validate_cpugrp_map(cpu_group, sched
>                                  modi_cpu_grp.remove(core_cpus[i]) 
>                                  if len(modi_cpu_grp) == 0:
>                                      return 0
> -                            else:
> +                            #This code has to be deleted 
> +                            #else:
>                                  # If sched_smt == 0 then its oky if threads run
>                                  # in different cores of same package 
> -                                if sched_smt_level == 1:
> -                                    sys.exit(1)
> -                                else:
> -                                    if len(cpu_group) == 2 and \
> -                                        len(modi_cpu_grp) < len(cpu_group):
> -                                        print "INFO:CPUs utilized not in a core"
> -                                        return 1                                        
> -            print "INFO: CPUs utilized is not in same package or core"
> -            return(1)
> +                                #if sched_smt_level > 0 :
> +                                    #return 1
>  	else:
>              for pkg in sorted(cpu_map.keys()):
>                  pkg_cpus = cpu_map[pkg]
> -                if pkg_cpus == cpu_group:
> -                    return(0)
> -                 
> -            return(1) 
> +                if len(cpu_group) == len(pkg_cpus):
> +                    if pkg_cpus == cpu_group:
> +                        return(0)
> +                else:
> +                    if int(cpus_utilized[0]) in cpu_map[pkg] or int(cpus_utilized[1]) in cpu_map[pkg]:
> +                        return(0)
> +
> +        return(1) 
> +
>      except Exception, details:
>          print "Exception in validate_cpugrp_map: ", details
>          sys.exit(1)
> @@ -605,36 +688,70 @@ def verify_sched_domain_dmesg(sched_mc_l
>          print "Reading dmesg failed", details
>          sys.exit(1)
> 
> -def validate_cpu_consolidation(work_ld, sched_mc_level, sched_smt_level):
> +def get_cpu_utilization(cpu):
> +    ''' Return cpu utilization of cpu_id
> +    '''
> +    try:
> +        for l in sorted(stats_percentage.keys()):
> +            if cpu == stats_percentage[l][0]:
> +                return stats_percentage[l][1]
> +        return -1
> +    except Exception, details:
> +        print "Exception in get_cpu_utilization", details
> +        sys.exit(1)
> +
> +def validate_cpu_consolidation(stress, work_ld, sched_mc_level, sched_smt_level):
>      ''' Verify if cpu's on which threads executed belong to same
>      package
>      '''
>      cpus_utilized = list()
> +    threads = get_job_count(stress, work_ld, sched_smt_level)
>      try:
>          for l in sorted(stats_percentage.keys()):
>              #modify threshold
> +            cpu_id = stats_percentage[l][0].split("cpu")
> +            if cpu_id[1] == '':
> +                continue
> +            if int(cpu_id[1]) in cpus_utilized:
> +                continue
>              if is_hyper_threaded():
> -                if stats_percentage[l][1] > 25 and work_ld == "kernbench":
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" and sched_smt_level < sched_mc_level:
> +                    siblings = get_siblings(cpu_id[1])
> +                    if siblings != "":
> +                        sib_list = siblings.split()
> +                        utilization = int(stats_percentage[l][1])
> +                        for i in range(0, len(sib_list)):
> +                            utilization += int(get_cpu_utilization("cpu%s" %sib_list[i])) 
> +                    else:
> +                        utilization = stats_percentage[l][1]
> +                    if utilization > 40:
>                          cpus_utilized.append(int(cpu_id[1]))
> +                        if siblings != "":
> +                            for i in range(0, len(sib_list)):
> +                                cpus_utilized.append(int(sib_list[i]))
>                  else:
> -                    if stats_percentage[l][1] > 70:
> -                        cpu_id = stats_percentage[l][0].split("cpu")
> -                        if cpu_id[1] != '':
> -                            cpus_utilized.append(int(cpu_id[1]))
> +                    # This threshold wuld be modified based on results
> +                    if stats_percentage[l][1] > 40:
> +                        cpus_utilized.append(int(cpu_id[1]))
>              else:
> -                if stats_percentage[l][1] > 70:
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" :
> +                    if stats_percentage[l][1] > 50:
>                          cpus_utilized.append(int(cpu_id[1]))
> -                    cpus_utilized.sort()
> +                else:
> +                    if stats_percentage[l][1] > 70:
> +                        cpus_utilized.append(int(cpu_id[1]))
> +            cpus_utilized.sort()
>          print "INFO: CPU's utilized ", cpus_utilized
> 
> +        # If length of CPU's utilized is not = number of jobs exit with 1
> +        if len(cpus_utilized) < threads:
> +            return 1
> +
>          status = validate_cpugrp_map(cpus_utilized, sched_mc_level, \
>              sched_smt_level)
>          if status == 1:
>              print "INFO: CPUs utilized is not in same package or core"
> +
>          return(status)
>      except Exception, details:
>          print "Exception in validate_cpu_consolidation: ", details
> @@ -645,7 +762,8 @@ def get_cpuid_max_intr_count():
>      try:
>          highest = 0
>          second_highest = 0
> -        global cpu1_max_intr, cpu2_max_intr
> +        cpus_utilized = []
> +        
>          #Skipping CPU0 as it is generally high
>          for i in range(1, cpu_count):
>              if int(intr_stop[i]) > int(highest):
> @@ -658,15 +776,19 @@ def get_cpuid_max_intr_count():
>                  if int(intr_stop[i]) > int(second_highest):
>                      second_highest = int(intr_stop[i])
>                      cpu2_max_intr = i
> +        cpus_utilized.append(cpu1_max_intr)
> +        cpus_utilized.append(cpu2_max_intr)
> +        
>          for i in range(1, cpu_count):
>              if i != cpu1_max_intr and i != cpu2_max_intr:
>                  diff = second_highest - intr_stop[i]
>                  ''' Threshold of difference has to be manipulated '''
>                  if diff < 10000:
>                      print "INFO: Diff in interrupt count is below threshold"
> -                    return 1
> +                    cpus_utilized = []
> +                    return cpus_utilized
>          print "INFO: Interrupt count in other CPU's low as expected"
> -        return 0 
> +        return cpus_utilized
>      except Exception, details:
>          print "Exception in get_cpuid_max_intr_count: ", details
>          sys.exit(1)
> @@ -675,14 +797,12 @@ def validate_ilb (sched_mc_level, sched_
>      ''' Validate if ilb is running in same package where work load is running
>      '''
>      try:
> -        status = get_cpuid_max_intr_count()
> -        if status == 1:
> +        cpus_utilized = get_cpuid_max_intr_count()
> +        if not cpus_utilized:
>              return 1
> -        for pkg in sorted(cpu_map.keys()):
> -            if cpu1_max_intr in cpu_map[pkg] and cpu2_max_intr in cpu_map[pkg]:
> -                return 0
> -        print "INFO: CPUs with higher interrupt count is not in same package"
> -        return 1
> +       
> +        status = validate_cpugrp_map(cpus_utilized, sched_mc_level, sched_smt_level)
> +        return status
>      except Exception, details:
>          print "Exception in validate_ilb: ", details
>          sys.exit(1)
> @@ -706,3 +826,14 @@ def reset_schedsmt():
>      except OSError, e:
>          print "Could not set sched_smt_power_savings to 0", e
>          sys.exit(1)
> +
> +def stop_wkld(work_ld):
> +    ''' Kill workload triggered in background
> +    '''
> +    try:
> +        os.system('pkill %s 2>/dev/null' %work_ld)
> +        if work_ld == "kernbench":
> +            os.system('pkill make 2>/dev/null')
> +    except OSError, e:
> +        print "Exception in stop_wkld", e
> +        sys.exit(1)
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions
  2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
                   ` (4 preceding siblings ...)
  2009-10-13  7:44 ` [LTP] [Patch 6/6] Patch to fix workload installation issue Poornima Nayak
@ 2009-10-13 10:12 ` Subrata Modak
  5 siblings, 0 replies; 17+ messages in thread
From: Subrata Modak @ 2009-10-13 10:12 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, arun, svaidy, ego

On Tue, 2009-10-13 at 13:13 +0530, Poornima Nayak wrote: 
> Arguments passed for cpu consolidation was not used appropriatly. Provided
> TINFO messages to indicate dependency test failures.
> 
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Thanks.

Regards--
Subrata

> 
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/pm_include.sh ltp-full-20090930_patched/testcases/kernel/power_management/pm_include.sh
> --- ltp-full-20090930/testcases/kernel/power_management/pm_include.sh	2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/pm_include.sh	2009-10-12 22:46:12.000000000 -0400
> @@ -71,7 +71,7 @@ get_supporting_govr() {
>  is_hyper_threaded() {
>  	siblings=`cat /proc/cpuinfo | grep siblings | uniq | cut -f2 -d':'`
>  	cpu_cores=`cat /proc/cpuinfo | grep "cpu cores" | uniq | cut -f2 -d':'`
> -	[ $siblings > $cpu_cores ]; return $?
> +	[ $siblings -gt $cpu_cores ]; return $?
>  }
> 
>  check_input() {
> @@ -148,8 +148,8 @@ get_valid_input() {
>  		
>  analyze_result_hyperthreaded() {
>  	sched_mc=$1
> -    pass_count=$3
> -    sched_smt=$4
> +    pass_count=$2
> +    sched_smt=$3
> 
>  	case "$sched_mc" in
>  	0)
> @@ -165,7 +165,7 @@ $sched_mc & sched_smt=$sched_smt"
>  			fi
>  			;;
>  		*)
> -           	if [ $pass_count -lt 5 ]; then
> +			if [ $pass_count -lt 5 ]; then
>                 	tst_resm TFAIL "cpu consolidation for sched_mc=\
>  $sched_mc & sched_smt=$sched_smt"
>             	else
> @@ -190,10 +190,16 @@ $sched_mc & sched_smt=$sched_smt"
> 
>  analyze_package_consolidation_result() {
>  	sched_mc=$1
> -    pass_count=$3
> -	sched_smt=$4
> +    pass_count=$2
> +
> +	if [ $# -gt 2 ]
> +	then
> +		sched_smt=$3
> +	else
> +		sched_smt=-1
> +	fi
> 
> -	if [ $hyper_threaded -eq $YES -a $sched_smt ]; then
> +	if [ $hyper_threaded -eq $YES -a $sched_smt -gt -1 ]; then
>  		analyze_result_hyperthreaded $sched_mc $pass_count $sched_smt
>  	else
>  		case "$sched_mc" in
> @@ -209,10 +215,10 @@ $sched_mc"
>      	*)
>  			if [ $pass_count -lt 5 ]; then
>  				tst_resm TFAIL "Consolidation at package level failed for \
> -sched_mc=$sched_mc & sched_smt=$sched_smt"
> +sched_mc=$sched_mc"
>  			else
>  				tst_resm TPASS "Consolidation at package level passed for \
> -sched_mc=$sched_mc & sched_smt=$sched_smt"
> +sched_mc=$sched_mc"
>  			fi	
>          	;;
>      	esac
> @@ -221,7 +227,7 @@ sched_mc=$sched_mc & sched_smt=$sched_sm
> 
>  analyze_core_consolidation_result() {
>  	sched_smt=$1
> -	pass_count=$3
> +	pass_count=$2
> 
>  	case "$sched_smt" in
>  	0)
> 
> ------------------------------------------------------------------------------
> Come build with us! The BlackBerry(R) Developer Conference in SF, CA
> is the only developer event you need to attend this year. Jumpstart your
> developing skills, take BlackBerry mobile applications to market and stay 
> ahead of the curve. Join us from November 9 - 12, 2009. Register now!
> http://p.sf.net/sfu/devconference
> _______________________________________________
> Ltp-list mailing list
> Ltp-list@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ltp-list


------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 5/6] Modified master script to pass appropriate arguments
  2009-10-13  7:44 ` [LTP] [Patch 5/6] Modified master script to pass appropriate arguments Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
@ 2009-10-14  1:38   ` Garrett Cooper
  1 sibling, 0 replies; 17+ messages in thread
From: Garrett Cooper @ 2009-10-14  1:38 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, arun, svaidy, ego

On Tue, Oct 13, 2009 at 12:44 AM, Poornima Nayak
<mpnayak@linux.vnet.ibm.com> wrote:
> Modified master script to pass appropriate arguments for cpu consolidation
> test cases.
>
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>

Sorry for not commenting sooner.

You could use a while loop with an initial value and increment in the
loop instead of `for i in $(seq {0,1} $max)'; that's guaranteed to
work better with systems that don't have seq(1) ;)...

Thanks!
-Garrett

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 6/6] Patch to fix workload installation issue
  2009-10-13 10:12   ` Subrata Modak
@ 2009-10-14  1:39     ` Garrett Cooper
  0 siblings, 0 replies; 17+ messages in thread
From: Garrett Cooper @ 2009-10-14  1:39 UTC (permalink / raw)
  To: subrata; +Cc: ltp-list, arun, svaidy, ego

On Tue, Oct 13, 2009 at 3:12 AM, Subrata Modak
<subrata@linux.vnet.ibm.com> wrote:
> On Tue, 2009-10-13 at 13:14 +0530, Poornima Nayak wrote:
>> Patch to fix workload installation issue
>>
>> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>
>
> This will not apply as all the LTP Makefiles has undergon huge changes.
> Checkout the latest LTP and re-create this patch:
>
> Hunk #1 FAILED at 27.
> 1 out of 1 hunk FAILED -- saving rejects to file utils/Makefile.rej
>
> Regards--
> Subrata
>
>>
>> diff -uprN ltp-full-20090930/utils/Makefile ltp-full-20090930_patched/utils/Makefile
>> --- ltp-full-20090930/utils/Makefile  2009-10-05 02:10:46.000000000 -0400
>> +++ ltp-full-20090930_patched/utils/Makefile  2009-10-12 22:48:30.000000000 -0400
>> @@ -27,6 +27,6 @@ all: configure
>>       @set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
>>
>>  install:
>> -
>> +     @set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;
>>  clean:
>>       @set -e; for i in $(SUBDIRS); do $(MAKE) -C $$i $@; done;

    That was fixed as a `side-effect' of the new Makefile changes :)...
Thanks!
-Garrett

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
@ 2009-10-14  2:17   ` Garrett Cooper
  2009-10-14  9:25   ` Gautham R Shenoy
  2 siblings, 0 replies; 17+ messages in thread
From: Garrett Cooper @ 2009-10-14  2:17 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, svaidy, ego, arun

On Tue, Oct 13, 2009 at 12:43 AM, Poornima Nayak
<mpnayak@linux.vnet.ibm.com> wrote:
> CPU Consolidation verification function is fixed to handle variations in
> CPU utilization. Threshold is selected based on test conducted on 2.6.31 on
> dual core, quad core & hyper threaded system.
> Developed new function to generate hyper threaded siblings list and get job count
> for hyper threaded system and multisocket system.
> Modified kernbench workload execution time for 5 min, hence test execution time
> will be reduced further. Developed new functions to stop workload.
>
> Signed-off-by: poornima nayak <mpnayak@linux.vnet.ibm.com>
>
> diff -uprN ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py
> --- ltp-full-20090930/testcases/kernel/power_management/lib/sched_mc.py 2009-10-05 02:10:56.000000000 -0400
> +++ ltp-full-20090930_patched/testcases/kernel/power_management/lib/sched_mc.py 2009-10-12 23:00:30.000000000 -0400
> @@ -22,6 +22,7 @@ socket_count = 0
>  cpu1_max_intr = 0
>  cpu2_max_intr = 0
>  intr_stat_timer_0 = []
> +siblings_list = []
>
>  def clear_dmesg():
>     '''
> @@ -96,6 +97,36 @@ def is_hyper_threaded():
>         print "Failed to check if system is hyper-threaded"
>         sys.exit(1)
>
> +def is_multi_core():
> +    ''' Return true if system has sockets has multiple cores
> +    '''
> +
> +    try:
> +        file_cpuinfo = open("/proc/cpuinfo", 'r')
> +        for line in file_cpuinfo:
> +            if line.startswith('siblings'):
> +                siblings = line.split(":")
> +            if line.startswith('cpu cores'):
> +                cpu_cores = line.split(":")
> +                break
> +
> +        if int( siblings[1] ) == int( cpu_cores[1] ):
> +            if int( cpu_cores[1] ) > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        else:
> +            num_of_cpus = int(siblings[1]) / int(cpu_cores[1])
> +            if num_of_cpus > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        file_cpuinfo.close()
> +        return multi_core
> +    except Exception:
> +        print "Failed to check if system is multi core system"
> +        sys.exit(1)

Here's another suggested method:

http://codeliberates.blogspot.com/2008/05/detecting-cpuscores-in-python.html

unless the `siblings' information has more data that the above method
cannot provide.

>  def get_hyper_thread_count():
>     ''' Return number of threads in CPU. For eg for x3950 this function
>         would return 2. In future if 4 threads are supported in CPU, this
> @@ -153,6 +184,40 @@ def map_cpuid_pkgid():
>                 sys.exit(1)
>
>
> +def generate_sibling_list():
> +    ''' Routine to generate siblings list
> +    '''
> +    try:
> +        for i in range(0, cpu_count):
> +            siblings_file = '/sys/devices/system/cpu/cpu%s' % i
> +            siblings_file += '/topology/thread_siblings_list'
> +            threads_sibs = open(siblings_file).read().rstrip()
> +            thread_ids = threads_sibs.split("-")
> +
> +            if not thread_ids in siblings_list:
> +                siblings_list.append(thread_ids)
> +    except Exception, details:
> +        print "Exception in generate_siblings_list", details
> +        sys.exit(1)

sys.exit with the above string might be better. Be sure to do
str(details) too if you follow that method.

> +def get_siblings(cpu_id):
> +    ''' Return siblings of cpu_id
> +    '''
> +    try:
> +        cpus = ""
> +        for i in range(0, len(siblings_list)):
> +            for cpu in siblings_list[i]:
> +                if cpu_id == cpu:
> +                    for j in siblings_list[i]:
> +                        # Exclude cpu_id in the list of siblings
> +                        if j != cpu_id:
> +                            cpus += j
> +                    return cpus
> +        return cpus
> +    except Exception, details:
> +        print "Exception in get_siblings", details
> +        sys.exit(1)

This for-loop logic could be reduced to the following:

       cpus = ""
       for i in range(len(siblings_list)):
           try:
               i = siblings_list.index(cpu_id)
               # XXX (garrcoop): Does a " ".join need to be here?
               cpus = [siblings_list[j] for j in
range(len(siblings_list)).remove(i)]
               break
           except ValueError:
               # .index(..) will return ValueError if the object isn't
found in the list.
               pass
       return cpus

This way there's only one exit point in the function and the overall
loop logic has been reduced. Please test before checkin though :).

>  def get_proc_data(stats_list):
>     ''' Read /proc/stat info and store in dictionary
>     '''
> @@ -168,18 +233,18 @@ def get_proc_data(stats_list):
>         sys.exit(1)
>
>  def get_proc_loc_count(loc_stats):
> -    ''' Read /proc/stat info and store in dictionary
> +    ''' Read /proc/interrupts info and store in list
>     '''
>     try:
>         file_procstat = open("/proc/interrupts", 'r')
>         for line in file_procstat:
> -            if line.startswith('LOC:'):
> +            if line.startswith(' LOC:') or line.startswith('LOC:'):

Why not just do lstrip()?

>                 data = line.split()
>                 for i in range(0, cpu_count):

range(0, ...) is unneeded. range(cpu_count) is sufficient as 0 as the
starting index for the range is implied in python.

>                     # To skip LOC
>                     loc_stats.append(data[i+1])
> -                    print data[i+1]
> -        file_procstat.close()
> +                file_procstat.close()
> +                return
>     except Exception, details:
>         print "Could not read interrupt statistics", details
>         sys.exit(1)
> @@ -192,6 +257,8 @@ def set_sched_mc_power(sched_mc_level):
>         os.system('echo %s > \
>             /sys/devices/system/cpu/sched_mc_power_savings 2>/dev/null'
>             % sched_mc_level)

^^^ This exit code needs to be checked... or you need to at least
check and make sure that it exists with os.access(, os.R_OK) -- this
is related to the issue I filed earlier ^^^

> +
> +        get_proc_data(stats_start)
>     except OSError, e:
>         print "Could not set sched_mc_power_savings to", sched_mc_level, e
>        sys.exit(1)
> @@ -203,6 +270,8 @@ def set_sched_smt_power(sched_smt_level)
>         os.system('echo %s > \
>             /sys/devices/system/cpu/sched_smt_power_savings 2>/dev/null'
>             % sched_smt_level)

^^^ This exit code needs to be checked... or you need to at least
check and make sure that it exists with os.access(, os.R_OK) -- this
is related to the issue I filed earlier ^^^

> +        get_proc_data(stats_start)
>     except OSError, e:
>         print "Could not set sched_smt_power_savings to", sched_smt_level, e
>        sys.exit(1)
> @@ -218,21 +287,36 @@ def set_timer_migration_interface(value)
>         print "Could not set timer_migration to ", value, e
>         sys.exit(1)
>
> -def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> -    ''' Triggers ebizzy workload for sched_mc=1
> -        testing
> +def get_job_count(stress, workload, sched_smt):
> +    ''' Returns number of jobs/threads to be triggered
>     '''
> +
>     try:
>         if stress == "thread":
>             threads = get_hyper_thread_count()
>         if stress == "partial":

elif would be better (less if logic -- these values are exclusive).

>             threads = cpu_count / socket_count
> +            if is_hyper_threaded():
> +                if workload == "ebizzy" and int(sched_smt) ==0:
> +                    threads = threads / get_hyper_thread_count()
> +                if workload == "kernbench" and int(sched_smt) < 2:
> +                    threads = threads / get_hyper_thread_count()

Same as above.

>         if stress == "full":
> -           threads = cpu_count
> +            threads = cpu_count
>         if stress == "single_job":
>             threads = 1
>             duration = 180
> +        return threads
> +    except Exception, details:
> +        print "get job count failed ", details
> +        sys.exit(1)

Same as above comment about sys.exit(...).

> +def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> +    ''' Triggers ebizzy workload for sched_mc=1
> +        testing
> +    '''
> +    try:
> +        threads = get_job_count(stress, "ebizzy", sched_smt)
>         olddir = os.getcwd()
>         path = '%s/utils/benchmark' % os.environ['LTPROOT']

This isn't the path anymore. Should be $LTPROOT/testcases/bin, or
os.path.basename(__file__) if you're concerned about portability.

>         os.chdir(path)
> @@ -282,23 +366,14 @@ def trigger_ebizzy (sched_smt, stress, d
>         print "Ebizzy workload trigger failed ", details
>         sys.exit(1)
>
> -def trigger_kernbench (sched_smt, stress, background, pinned):
> +def trigger_kernbench (sched_smt, stress, background, pinned, perf_test):
>     ''' Trigger load on system like kernbench.
>         Copys existing copy of LTP into as LTP2 and then builds it
>         with make -j
>     '''
>     olddir = os.getcwd()
>     try:
> -        if stress == "thread":
> -           threads = 2
> -        if stress == "partial":
> -           threads = cpu_count / socket_count
> -            if is_hyper_threaded() and int(sched_smt) !=2:
> -                threads = threads / get_hyper_thread_count()
> -        if stress == "full":
> -            threads = cpu_count
> -        if stress == "single_job":
> -            threads = 1
> +        threads = get_job_count(stress, "kernbench", sched_smt)
>
>         dst_path = "/root"
>         olddir = os.getcwd()
> @@ -335,24 +410,35 @@ def trigger_kernbench (sched_smt, stress
>         get_proc_loc_count(intr_start)
>         if pinned == "yes":
>             os.system ( 'taskset -c %s %s/kernbench -o %s -M -H -n 1 \
> -                >/dev/null 2>&1' % (cpu_count-1, benchmark_path, threads))
> +                >/dev/null 2>&1 &' % (cpu_count-1, benchmark_path, threads))
> +
> +            # We have to delete import in future

???? Why?

> +            import time
> +            time.sleep(240)
> +            stop_wkld("kernbench")
>         else:
>             if background == "yes":
>                 os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
>                     % (benchmark_path, threads))
>             else:
> -                os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> -                    % (benchmark_path, threads))
> +                if perf_test == "yes":
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> +                        % (benchmark_path, threads))
> +                else:
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
> +                        % (benchmark_path, threads))
> +                    # We have to delete import in future
> +                    import time
> +                    time.sleep(240)
> +                    stop_wkld("kernbench")
>
>         print "INFO: Workload kernbench triggerd"
>         os.chdir(olddir)
> -        #get_proc_data(stats_stop)
> -        #get_proc_loc_count(intr_stop)
>     except Exception, details:
>         print "Workload kernbench trigger failed ", details
>         sys.exit(1)
>
> -def trigger_workld(sched_smt, workload, stress, duration, background, pinned):
> +def trigger_workld(sched_smt, workload, stress, duration, background, pinned, perf_test):
>     ''' Triggers workload passed as argument. Number of threads
>         triggered is based on stress value.
>     '''
> @@ -360,7 +446,7 @@ def trigger_workld(sched_smt, workload,
>         if workload == "ebizzy":
>             trigger_ebizzy (sched_smt, stress, duration, background, pinned)
>         if workload == "kernbench":
> -            trigger_kernbench (sched_smt, stress, background, pinned)
> +            trigger_kernbench (sched_smt, stress, background, pinned, perf_test)
>     except Exception, details:
>         print "INFO: Trigger workload failed", details
>         sys.exit(1)
> @@ -434,7 +520,7 @@ def generate_report():
>             print >> keyvalfile, "package-%s=%3.4f" % \
>                (pkg, (float(total_idle)*100/total))
>     except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating utilization report failed: ", details
>         sys.exit(1)
>
>     #Add record delimiter '\n' before closing these files
> @@ -454,20 +540,18 @@ def generate_loc_intr_report():
>
>         get_proc_loc_count(intr_stop)
>
> -        print "Before substracting"
> -        for i in range(0, cpu_count):
> -            print "CPU",i, intr_start[i], intr_stop[i]
> -            reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> -            print >> reportfile, "=============================================="
> -            print >> reportfile, "     Local timer interrupt stats              "
> -            print >> reportfile, "=============================================="
> +        reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> +        print >> reportfile, "=============================================="
> +        print >> reportfile, "     Local timer interrupt stats              "
> +        print >> reportfile, "=============================================="
> +
>         for i in range(0, cpu_count):
>             intr_stop[i] =  int(intr_stop[i]) - int(intr_start[i])
>             print >> reportfile, "CPU%s: %s" %(i, intr_stop[i])
>         print >> reportfile
>         reportfile.close()
>     except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating interrupt report failed: ", details
>         sys.exit(1)
>
>  def record_loc_intr_count():
> @@ -542,25 +626,24 @@ def validate_cpugrp_map(cpu_group, sched
>                                 modi_cpu_grp.remove(core_cpus[i])
>                                 if len(modi_cpu_grp) == 0:
>                                     return 0
> -                            else:
> +                            #This code has to be deleted
> +                            #else:
>                                 # If sched_smt == 0 then its oky if threads run
>                                 # in different cores of same package
> -                                if sched_smt_level == 1:
> -                                    sys.exit(1)
> -                                else:
> -                                    if len(cpu_group) == 2 and \
> -                                        len(modi_cpu_grp) < len(cpu_group):
> -                                        print "INFO:CPUs utilized not in a core"
> -                                        return 1
> -            print "INFO: CPUs utilized is not in same package or core"
> -            return(1)
> +                                #if sched_smt_level > 0 :
> +                                    #return 1
>        else:
>             for pkg in sorted(cpu_map.keys()):
>                 pkg_cpus = cpu_map[pkg]
> -                if pkg_cpus == cpu_group:
> -                    return(0)
> -
> -            return(1)
> +                if len(cpu_group) == len(pkg_cpus):
> +                    if pkg_cpus == cpu_group:
> +                        return(0)
> +                else:
> +                    if int(cpus_utilized[0]) in cpu_map[pkg] or int(cpus_utilized[1]) in cpu_map[pkg]:
> +                        return(0)
> +
> +        return(1)
> +
>     except Exception, details:
>         print "Exception in validate_cpugrp_map: ", details
>         sys.exit(1)
> @@ -605,36 +688,70 @@ def verify_sched_domain_dmesg(sched_mc_l
>         print "Reading dmesg failed", details
>         sys.exit(1)
>
> -def validate_cpu_consolidation(work_ld, sched_mc_level, sched_smt_level):
> +def get_cpu_utilization(cpu):
> +    ''' Return cpu utilization of cpu_id
> +    '''
> +    try:
> +        for l in sorted(stats_percentage.keys()):
> +            if cpu == stats_percentage[l][0]:
> +                return stats_percentage[l][1]
> +        return -1
> +    except Exception, details:
> +        print "Exception in get_cpu_utilization", details
> +        sys.exit(1)
> +
> +def validate_cpu_consolidation(stress, work_ld, sched_mc_level, sched_smt_level):
>     ''' Verify if cpu's on which threads executed belong to same
>     package
>     '''
>     cpus_utilized = list()
> +    threads = get_job_count(stress, work_ld, sched_smt_level)
>     try:
>         for l in sorted(stats_percentage.keys()):
>             #modify threshold
> +            cpu_id = stats_percentage[l][0].split("cpu")
> +            if cpu_id[1] == '':
> +                continue
> +            if int(cpu_id[1]) in cpus_utilized:
> +                continue
>             if is_hyper_threaded():
> -                if stats_percentage[l][1] > 25 and work_ld == "kernbench":
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" and sched_smt_level < sched_mc_level:
> +                    siblings = get_siblings(cpu_id[1])
> +                    if siblings != "":
> +                        sib_list = siblings.split()
> +                        utilization = int(stats_percentage[l][1])
> +                        for i in range(0, len(sib_list)):
> +                            utilization += int(get_cpu_utilization("cpu%s" %sib_list[i]))

This could be done with map and sum, like so:

utilization = int(stats_percentage[l][1]) + sum( map ( int, [
get_cpu_utilization("cpu%s" % sib_list[i]) for i in
range(len(sib_list)) ] ) )

You may want to break this into 2 steps to make it more readable.

> +                    else:
> +                        utilization = stats_percentage[l][1]
> +                    if utilization > 40:
>                         cpus_utilized.append(int(cpu_id[1]))
> +                        if siblings != "":
> +                            for i in range(0, len(sib_list)):
> +                                cpus_utilized.append(int(sib_list[i]))
>                 else:
> -                    if stats_percentage[l][1] > 70:
> -                        cpu_id = stats_percentage[l][0].split("cpu")
> -                        if cpu_id[1] != '':
> -                            cpus_utilized.append(int(cpu_id[1]))
> +                    # This threshold wuld be modified based on results
> +                    if stats_percentage[l][1] > 40:
> +                        cpus_utilized.append(int(cpu_id[1]))
>             else:
> -                if stats_percentage[l][1] > 70:
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" :
> +                    if stats_percentage[l][1] > 50:
>                         cpus_utilized.append(int(cpu_id[1]))
> -                    cpus_utilized.sort()
> +                else:
> +                    if stats_percentage[l][1] > 70:
> +                        cpus_utilized.append(int(cpu_id[1]))
> +            cpus_utilized.sort()
>         print "INFO: CPU's utilized ", cpus_utilized
>
> +        # If length of CPU's utilized is not = number of jobs exit with 1
> +        if len(cpus_utilized) < threads:
> +            return 1
> +
>         status = validate_cpugrp_map(cpus_utilized, sched_mc_level, \
>             sched_smt_level)
>         if status == 1:
>             print "INFO: CPUs utilized is not in same package or core"
> +
>         return(status)
>     except Exception, details:
>         print "Exception in validate_cpu_consolidation: ", details
> @@ -645,7 +762,8 @@ def get_cpuid_max_intr_count():
>     try:
>         highest = 0
>         second_highest = 0
> -        global cpu1_max_intr, cpu2_max_intr
> +        cpus_utilized = []
> +
>         #Skipping CPU0 as it is generally high
>         for i in range(1, cpu_count):
>             if int(intr_stop[i]) > int(highest):
> @@ -658,15 +776,19 @@ def get_cpuid_max_intr_count():
>                 if int(intr_stop[i]) > int(second_highest):
>                     second_highest = int(intr_stop[i])
>                     cpu2_max_intr = i
> +        cpus_utilized.append(cpu1_max_intr)
> +        cpus_utilized.append(cpu2_max_intr)
> +

cpus_utilized.extend([cpu1_max_intr, cpu2_max_intr])

would be better.

>         for i in range(1, cpu_count):
>             if i != cpu1_max_intr and i != cpu2_max_intr:
>                 diff = second_highest - intr_stop[i]
>                 ''' Threshold of difference has to be manipulated '''
>                 if diff < 10000:
>                     print "INFO: Diff in interrupt count is below threshold"
> -                    return 1
> +                    cpus_utilized = []
> +                    return cpus_utilized

Why not just return [] (unless you need empty out the iterable
variable's value and there's another shallow copy somewhere else?)

>         print "INFO: Interrupt count in other CPU's low as expected"
> -        return 0
> +        return cpus_utilized
>     except Exception, details:
>         print "Exception in get_cpuid_max_intr_count: ", details
>         sys.exit(1)
> @@ -675,14 +797,12 @@ def validate_ilb (sched_mc_level, sched_
>     ''' Validate if ilb is running in same package where work load is running
>     '''
>     try:
> -        status = get_cpuid_max_intr_count()
> -        if status == 1:
> +        cpus_utilized = get_cpuid_max_intr_count()
> +        if not cpus_utilized:
>             return 1

True / False is more desirable with binary return values in python.

> -        for pkg in sorted(cpu_map.keys()):
> -            if cpu1_max_intr in cpu_map[pkg] and cpu2_max_intr in cpu_map[pkg]:
> -                return 0
> -        print "INFO: CPUs with higher interrupt count is not in same package"
> -        return 1
> +
> +        status = validate_cpugrp_map(cpus_utilized, sched_mc_level, sched_smt_level)
> +        return status
>     except Exception, details:
>         print "Exception in validate_ilb: ", details
>         sys.exit(1)
> @@ -706,3 +826,14 @@ def reset_schedsmt():
>     except OSError, e:
>         print "Could not set sched_smt_power_savings to 0", e
>         sys.exit(1)
> +
> +def stop_wkld(work_ld):
> +    ''' Kill workload triggered in background
> +    '''
> +    try:
> +        os.system('pkill %s 2>/dev/null' %work_ld)
> +        if work_ld == "kernbench":
> +            os.system('pkill make 2>/dev/null')
> +    except OSError, e:
> +        print "Exception in stop_wkld", e
> +        sys.exit(1)

Why pkill ... why not killall? I don't remember pkill existing on my system?

Plus, if you have the pids you can nuke 'em with os.kill instead of doing this.

Thanks!
-Garrett

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure
  2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
  2009-10-13 10:12   ` Subrata Modak
  2009-10-14  2:17   ` Garrett Cooper
@ 2009-10-14  9:25   ` Gautham R Shenoy
  2009-11-27 10:09     ` poornima nayak
  2 siblings, 1 reply; 17+ messages in thread
From: Gautham R Shenoy @ 2009-10-14  9:25 UTC (permalink / raw)
  To: Poornima Nayak; +Cc: ltp-list, svaidy, arun

On Tue, Oct 13, 2009 at 01:13:46PM +0530, Poornima Nayak wrote:
> +def is_multi_core():
> +    ''' Return true if system has sockets has multiple cores
> +    '''
> +  
> +    try:
> +        file_cpuinfo = open("/proc/cpuinfo", 'r')
> +        for line in file_cpuinfo:
> +            if line.startswith('siblings'):
> +                siblings = line.split(":")
> +            if line.startswith('cpu cores'):
> +                cpu_cores = line.split(":")
> +                break

I assume your code works only on x86 machine for now.

The corresponding kernel code found from arch/x86/kernel/cpu/proc.c is
as follows
========================================================================
/*
 *	Get CPU information for use by the procfs.
 */
static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
			      unsigned int cpu)
{
#ifdef CONFIG_SMP
	if (c->x86_max_cores * smp_num_siblings > 1) {
		seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
		seq_printf(m, "siblings\t: %d\n",
			   cpumask_weight(cpu_core_mask(cpu)));
		seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
		seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
		seq_printf(m, "apicid\t\t: %d\n", c->apicid);
		seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid);
	}
#endif
}
============================================================================
IIUC, 'siblings' tells us the number of hardware threads available
within a socket and 'cpu cores' tells us the number of cores available
within a socket.

So, if you are checking if the socket has multiple cores or not, 'cpu
cores > 1' would suffice, no ? Why do we need the following code below ?

> +       
> +        if int( siblings[1] ) == int( cpu_cores[1] ):
		^^^^^^^^^^^^^^^^^^^^^
		This only means that each core in the socket has only
		one processing unit or thread.
> +            if int( cpu_cores[1] ) > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        else:
> +            num_of_cpus = int(siblings[1]) / int(cpu_cores[1])
		^^^^^^^^^^
		This tells us the number of hardware threads per core.
> +            if num_of_cpus > 1:
> +                multi_core = 1
> +            else:
> +                multi_core = 0
> +        file_cpuinfo.close()
> +        return multi_core
> +    except Exception:
> +        print "Failed to check if system is multi core system"
> +        sys.exit(1)
> +
>  def get_hyper_thread_count():
>      ''' Return number of threads in CPU. For eg for x3950 this function
>          would return 2. In future if 4 threads are supported in CPU, this
> @@ -153,6 +184,40 @@ def map_cpuid_pkgid():
>                  sys.exit(1)
> 
> 
> +def generate_sibling_list():
> +    ''' Routine to generate siblings list
> +    '''
> +    try:
> +        for i in range(0, cpu_count):
> +            siblings_file = '/sys/devices/system/cpu/cpu%s' % i
> +            siblings_file += '/topology/thread_siblings_list'
> +            threads_sibs = open(siblings_file).read().rstrip()
> +            thread_ids = threads_sibs.split("-")
A string '0-3' means that 0,1,2,3 are the siblings and not just 0 and 3.
Also, you can have strings like '0,4' which means that only 0 and 4 are
siblings.

Thus I am not sure if the parsing that you're doing is sufficient
to generate the list.

> +    
> +            if not thread_ids in siblings_list:
> +                siblings_list.append(thread_ids)
> +    except Exception, details:
> +        print "Exception in generate_siblings_list", details
> +        sys.exit(1)
> +
> +def get_siblings(cpu_id):
> +    ''' Return siblings of cpu_id
> +    '''
> +    try:
> +        cpus = ""
> +        for i in range(0, len(siblings_list)):
> +            for cpu in siblings_list[i]:
> +                if cpu_id == cpu:
> +                    for j in siblings_list[i]:
> +                        # Exclude cpu_id in the list of siblings
> +                        if j != cpu_id:
> +                            cpus += j
> +                    return cpus
> +        return cpus
> +    except Exception, details:
> +        print "Exception in get_siblings", details
> +        sys.exit(1)
> +
>  def get_proc_data(stats_list):
>      ''' Read /proc/stat info and store in dictionary
>      '''
> @@ -168,18 +233,18 @@ def get_proc_data(stats_list):
>          sys.exit(1)
> 
>  def get_proc_loc_count(loc_stats):
> -    ''' Read /proc/stat info and store in dictionary
> +    ''' Read /proc/interrupts info and store in list
>      '''
>      try:
>          file_procstat = open("/proc/interrupts", 'r')
>          for line in file_procstat:
> -            if line.startswith('LOC:'):
> +            if line.startswith(' LOC:') or line.startswith('LOC:'):
>                  data = line.split()
>                  for i in range(0, cpu_count):
>                      # To skip LOC
>                      loc_stats.append(data[i+1])
> -                    print data[i+1]
> -        file_procstat.close()
> +                file_procstat.close()
> +                return
>      except Exception, details:
>          print "Could not read interrupt statistics", details
>          sys.exit(1)
> @@ -192,6 +257,8 @@ def set_sched_mc_power(sched_mc_level):
>          os.system('echo %s > \
>              /sys/devices/system/cpu/sched_mc_power_savings 2>/dev/null'
>              % sched_mc_level)
> +
> +        get_proc_data(stats_start)
>      except OSError, e:
>          print "Could not set sched_mc_power_savings to", sched_mc_level, e
>  	sys.exit(1)
> @@ -203,6 +270,8 @@ def set_sched_smt_power(sched_smt_level)
>          os.system('echo %s > \
>              /sys/devices/system/cpu/sched_smt_power_savings 2>/dev/null'
>              % sched_smt_level)
> +
> +        get_proc_data(stats_start)
>      except OSError, e:
>          print "Could not set sched_smt_power_savings to", sched_smt_level, e
>  	sys.exit(1)
> @@ -218,21 +287,36 @@ def set_timer_migration_interface(value)
>          print "Could not set timer_migration to ", value, e
>          sys.exit(1)
> 
> -def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> -    ''' Triggers ebizzy workload for sched_mc=1
> -        testing
> +def get_job_count(stress, workload, sched_smt):
> +    ''' Returns number of jobs/threads to be triggered
>      '''
> +    
>      try:
>          if stress == "thread":
>              threads = get_hyper_thread_count()

I am assuming get_hyper_thread_count() returns the number of
hyperthreads within a core.

So, if stress = "thread", the number of software threads we create is
equal to the numbef of h/w threads within a core ?


>          if stress == "partial":
>              threads = cpu_count / socket_count

If stress = "partial", the number of software threads we create is equal
to the number of h/w threads within a socket ?

> +            if is_hyper_threaded():
> +                if workload == "ebizzy" and int(sched_smt) ==0:
> +                    threads = threads / get_hyper_thread_count()

Thus if the workload is ebizzy and sched_smt is 0, the number of
software threads we create is equal to the number of cores within the
socket, right ?

> +                if workload == "kernbench" and int(sched_smt) < 2:
> +                    threads = threads / get_hyper_thread_count()    
>          if stress == "full":
> -	    threads = cpu_count
> +            threads = cpu_count
>          if stress == "single_job":
>              threads = 1
>              duration = 180
> +        return threads
> +    except Exception, details:
> +        print "get job count failed ", details
> +        sys.exit(1)
> 
> +def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> +    ''' Triggers ebizzy workload for sched_mc=1
> +        testing
> +    '''
> +    try:
> +        threads = get_job_count(stress, "ebizzy", sched_smt)
>          olddir = os.getcwd()
>          path = '%s/utils/benchmark' % os.environ['LTPROOT']
>          os.chdir(path)
> @@ -282,23 +366,14 @@ def trigger_ebizzy (sched_smt, stress, d
>          print "Ebizzy workload trigger failed ", details
>          sys.exit(1)   
> 
> -def trigger_kernbench (sched_smt, stress, background, pinned):
> +def trigger_kernbench (sched_smt, stress, background, pinned, perf_test):
>      ''' Trigger load on system like kernbench.
>          Copys existing copy of LTP into as LTP2 and then builds it
>          with make -j
>      '''
>      olddir = os.getcwd()
>      try:
> -        if stress == "thread":
> -	    threads = 2
> -        if stress == "partial":
> -	    threads = cpu_count / socket_count
> -            if is_hyper_threaded() and int(sched_smt) !=2:
> -                threads = threads / get_hyper_thread_count()
> -        if stress == "full":
> -            threads = cpu_count
> -        if stress == "single_job":
> -            threads = 1
> +        threads = get_job_count(stress, "kernbench", sched_smt)
> 
>          dst_path = "/root"
>          olddir = os.getcwd()      
> @@ -335,24 +410,35 @@ def trigger_kernbench (sched_smt, stress
>          get_proc_loc_count(intr_start)
>          if pinned == "yes":
>              os.system ( 'taskset -c %s %s/kernbench -o %s -M -H -n 1 \
> -                >/dev/null 2>&1' % (cpu_count-1, benchmark_path, threads))
> +                >/dev/null 2>&1 &' % (cpu_count-1, benchmark_path, threads))
> +
> +            # We have to delete import in future
> +            import time
> +            time.sleep(240)
> +            stop_wkld("kernbench")
>          else:
>              if background == "yes":
>                  os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
>                      % (benchmark_path, threads))
>              else:
> -                os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> -                    % (benchmark_path, threads))
> +                if perf_test == "yes":
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> +                        % (benchmark_path, threads))
> +                else:
> +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
> +                        % (benchmark_path, threads))
> +                    # We have to delete import in future
> +                    import time
> +                    time.sleep(240)
> +                    stop_wkld("kernbench")
>          
>          print "INFO: Workload kernbench triggerd"
>          os.chdir(olddir)
> -        #get_proc_data(stats_stop)
> -        #get_proc_loc_count(intr_stop)
>      except Exception, details:
>          print "Workload kernbench trigger failed ", details
>          sys.exit(1)
>     
> -def trigger_workld(sched_smt, workload, stress, duration, background, pinned):
> +def trigger_workld(sched_smt, workload, stress, duration, background, pinned, perf_test):
>      ''' Triggers workload passed as argument. Number of threads 
>          triggered is based on stress value.
>      '''
> @@ -360,7 +446,7 @@ def trigger_workld(sched_smt, workload, 
>          if workload == "ebizzy":
>              trigger_ebizzy (sched_smt, stress, duration, background, pinned)
>          if workload == "kernbench":
> -            trigger_kernbench (sched_smt, stress, background, pinned)
> +            trigger_kernbench (sched_smt, stress, background, pinned, perf_test)
>      except Exception, details:
>          print "INFO: Trigger workload failed", details
>          sys.exit(1)
> @@ -434,7 +520,7 @@ def generate_report():
>              print >> keyvalfile, "package-%s=%3.4f" % \
>  		(pkg, (float(total_idle)*100/total))
>      except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating utilization report failed: ", details
>          sys.exit(1)
> 
>      #Add record delimiter '\n' before closing these files
> @@ -454,20 +540,18 @@ def generate_loc_intr_report():
> 
>          get_proc_loc_count(intr_stop)
> 
> -        print "Before substracting"
> -        for i in range(0, cpu_count):
> -            print "CPU",i, intr_start[i], intr_stop[i]
> -            reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> -            print >> reportfile, "=============================================="
> -            print >> reportfile, "     Local timer interrupt stats              "
> -            print >> reportfile, "=============================================="
> +        reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> +        print >> reportfile, "=============================================="
> +        print >> reportfile, "     Local timer interrupt stats              "
> +        print >> reportfile, "=============================================="
> +
>          for i in range(0, cpu_count):
>              intr_stop[i] =  int(intr_stop[i]) - int(intr_start[i])
>              print >> reportfile, "CPU%s: %s" %(i, intr_stop[i])
>          print >> reportfile
>          reportfile.close()
>      except Exception, details:
> -        print "Generating reportfile failed: ", details
> +        print "Generating interrupt report failed: ", details
>          sys.exit(1)
> 
>  def record_loc_intr_count():
> @@ -542,25 +626,24 @@ def validate_cpugrp_map(cpu_group, sched
>                                  modi_cpu_grp.remove(core_cpus[i]) 
>                                  if len(modi_cpu_grp) == 0:
>                                      return 0
> -                            else:
> +                            #This code has to be deleted 
> +                            #else:
>                                  # If sched_smt == 0 then its oky if threads run
>                                  # in different cores of same package 
> -                                if sched_smt_level == 1:
> -                                    sys.exit(1)
> -                                else:
> -                                    if len(cpu_group) == 2 and \
> -                                        len(modi_cpu_grp) < len(cpu_group):
> -                                        print "INFO:CPUs utilized not in a core"
> -                                        return 1                                        
> -            print "INFO: CPUs utilized is not in same package or core"
> -            return(1)
> +                                #if sched_smt_level > 0 :
> +                                    #return 1
>  	else:
>              for pkg in sorted(cpu_map.keys()):
>                  pkg_cpus = cpu_map[pkg]
> -                if pkg_cpus == cpu_group:
> -                    return(0)
> -                 
> -            return(1) 
> +                if len(cpu_group) == len(pkg_cpus):
> +                    if pkg_cpus == cpu_group:
> +                        return(0)
> +                else:
> +                    if int(cpus_utilized[0]) in cpu_map[pkg] or int(cpus_utilized[1]) in cpu_map[pkg]:
> +                        return(0)
> +
> +        return(1) 
> +
>      except Exception, details:
>          print "Exception in validate_cpugrp_map: ", details
>          sys.exit(1)
> @@ -605,36 +688,70 @@ def verify_sched_domain_dmesg(sched_mc_l
>          print "Reading dmesg failed", details
>          sys.exit(1)
> 
> -def validate_cpu_consolidation(work_ld, sched_mc_level, sched_smt_level):
> +def get_cpu_utilization(cpu):
> +    ''' Return cpu utilization of cpu_id
> +    '''
> +    try:
> +        for l in sorted(stats_percentage.keys()):

What's the key used to index the elements of this hash table ?


> +            if cpu == stats_percentage[l][0]:
> +                return stats_percentage[l][1]
> +        return -1
> +    except Exception, details:
> +        print "Exception in get_cpu_utilization", details
> +        sys.exit(1)
> +
> +def validate_cpu_consolidation(stress, work_ld, sched_mc_level, sched_smt_level):
>      ''' Verify if cpu's on which threads executed belong to same
>      package
>      '''
>      cpus_utilized = list()
> +    threads = get_job_count(stress, work_ld, sched_smt_level)
>      try:
>          for l in sorted(stats_percentage.keys()):
>              #modify threshold
> +            cpu_id = stats_percentage[l][0].split("cpu")
> +            if cpu_id[1] == '':
> +                continue
> +            if int(cpu_id[1]) in cpus_utilized:
> +                continue
>              if is_hyper_threaded():
Does is_hyper_threaded() check the status of the system from procfs or
sysfs every time it's called ? An easier way could be to obtain that
once and save the state in a variable, if it's not done that way
already.

> -                if stats_percentage[l][1] > 25 and work_ld == "kernbench":
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" and sched_smt_level < sched_mc_level:
> +                    siblings = get_siblings(cpu_id[1])
> +                    if siblings != "":
> +                        sib_list = siblings.split()
> +                        utilization = int(stats_percentage[l][1])
> +                        for i in range(0, len(sib_list)):
> +                            utilization += int(get_cpu_utilization("cpu%s" %sib_list[i])) 
> +                    else:
> +                        utilization = stats_percentage[l][1]
> +                    if utilization > 40:
>                          cpus_utilized.append(int(cpu_id[1]))
> +                        if siblings != "":
> +                            for i in range(0, len(sib_list)):
> +                                cpus_utilized.append(int(sib_list[i]))
>                  else:
> -                    if stats_percentage[l][1] > 70:
> -                        cpu_id = stats_percentage[l][0].split("cpu")
> -                        if cpu_id[1] != '':
> -                            cpus_utilized.append(int(cpu_id[1]))
> +                    # This threshold wuld be modified based on results

It's easier to have these constants stored in some variables with
appropriate names. That would be much easier to edit and update later
instead of searching the code every time.

> +                    if stats_percentage[l][1] > 40:
> +                        cpus_utilized.append(int(cpu_id[1]))
>              else:
> -                if stats_percentage[l][1] > 70:
> -                    cpu_id = stats_percentage[l][0].split("cpu")
> -                    if cpu_id[1] != '':
> +                if work_ld == "kernbench" :
> +                    if stats_percentage[l][1] > 50:
>                          cpus_utilized.append(int(cpu_id[1]))
> -                    cpus_utilized.sort()
> +                else:
> +                    if stats_percentage[l][1] > 70:
> +                        cpus_utilized.append(int(cpu_id[1]))
> +            cpus_utilized.sort()
>          print "INFO: CPU's utilized ", cpus_utilized
> 
> +        # If length of CPU's utilized is not = number of jobs exit with 1
> +        if len(cpus_utilized) < threads:
> +            return 1
> +
>          status = validate_cpugrp_map(cpus_utilized, sched_mc_level, \
>              sched_smt_level)
>          if status == 1:
>              print "INFO: CPUs utilized is not in same package or core"
> +
>          return(status)
>      except Exception, details:
>          print "Exception in validate_cpu_consolidation: ", details
> @@ -645,7 +762,8 @@ def get_cpuid_max_intr_count():
>      try:
>          highest = 0
>          second_highest = 0
> -        global cpu1_max_intr, cpu2_max_intr
> +        cpus_utilized = []
> +        
>          #Skipping CPU0 as it is generally high
>          for i in range(1, cpu_count):
>              if int(intr_stop[i]) > int(highest):
> @@ -658,15 +776,19 @@ def get_cpuid_max_intr_count():
>                  if int(intr_stop[i]) > int(second_highest):
>                      second_highest = int(intr_stop[i])
>                      cpu2_max_intr = i
> +        cpus_utilized.append(cpu1_max_intr)
> +        cpus_utilized.append(cpu2_max_intr)
> +        
>          for i in range(1, cpu_count):
>              if i != cpu1_max_intr and i != cpu2_max_intr:
>                  diff = second_highest - intr_stop[i]
>                  ''' Threshold of difference has to be manipulated '''
>                  if diff < 10000:
>                      print "INFO: Diff in interrupt count is below threshold"
> -                    return 1
> +                    cpus_utilized = []
> +                    return cpus_utilized
>          print "INFO: Interrupt count in other CPU's low as expected"
> -        return 0 
> +        return cpus_utilized
>      except Exception, details:
>          print "Exception in get_cpuid_max_intr_count: ", details
>          sys.exit(1)
> @@ -675,14 +797,12 @@ def validate_ilb (sched_mc_level, sched_
>      ''' Validate if ilb is running in same package where work load is running
>      '''
>      try:
> -        status = get_cpuid_max_intr_count()
> -        if status == 1:
> +        cpus_utilized = get_cpuid_max_intr_count()
> +        if not cpus_utilized:
>              return 1
> -        for pkg in sorted(cpu_map.keys()):
> -            if cpu1_max_intr in cpu_map[pkg] and cpu2_max_intr in cpu_map[pkg]:
> -                return 0
> -        print "INFO: CPUs with higher interrupt count is not in same package"
> -        return 1
> +       
> +        status = validate_cpugrp_map(cpus_utilized, sched_mc_level, sched_smt_level)
> +        return status
>      except Exception, details:
>          print "Exception in validate_ilb: ", details
>          sys.exit(1)
> @@ -706,3 +826,14 @@ def reset_schedsmt():
>      except OSError, e:
>          print "Could not set sched_smt_power_savings to 0", e
>          sys.exit(1)
> +
> +def stop_wkld(work_ld):
> +    ''' Kill workload triggered in background
> +    '''
> +    try:
> +        os.system('pkill %s 2>/dev/null' %work_ld)
> +        if work_ld == "kernbench":
> +            os.system('pkill make 2>/dev/null')
> +    except OSError, e:
> +        print "Exception in stop_wkld", e
> +        sys.exit(1)

-- 
Thanks and Regards
gautham

------------------------------------------------------------------------------
Come build with us! The BlackBerry(R) Developer Conference in SF, CA
is the only developer event you need to attend this year. Jumpstart your
developing skills, take BlackBerry mobile applications to market and stay 
ahead of the curve. Join us from November 9 - 12, 2009. Register now!
http://p.sf.net/sfu/devconference
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure
  2009-10-14  9:25   ` Gautham R Shenoy
@ 2009-11-27 10:09     ` poornima nayak
  0 siblings, 0 replies; 17+ messages in thread
From: poornima nayak @ 2009-11-27 10:09 UTC (permalink / raw)
  To: ego; +Cc: ltp-list, svaidy, arun

On Wed, 2009-10-14 at 14:55 +0530, Gautham R Shenoy wrote:
> On Tue, Oct 13, 2009 at 01:13:46PM +0530, Poornima Nayak wrote:
> > +def is_multi_core():
> > +    ''' Return true if system has sockets has multiple cores
> > +    '''
> > +  
> > +    try:
> > +        file_cpuinfo = open("/proc/cpuinfo", 'r')
> > +        for line in file_cpuinfo:
> > +            if line.startswith('siblings'):
> > +                siblings = line.split(":")
> > +            if line.startswith('cpu cores'):
> > +                cpu_cores = line.split(":")
> > +                break
> 
> I assume your code works only on x86 machine for now.
This part of the code is modified to read the information from sys
topology. For this months LTP release patches will be mailed.
> 
> The corresponding kernel code found from arch/x86/kernel/cpu/proc.c is
> as follows
> ========================================================================
> /*
>  *	Get CPU information for use by the procfs.
>  */
> static void show_cpuinfo_core(struct seq_file *m, struct cpuinfo_x86 *c,
> 			      unsigned int cpu)
> {
> #ifdef CONFIG_SMP
> 	if (c->x86_max_cores * smp_num_siblings > 1) {
> 		seq_printf(m, "physical id\t: %d\n", c->phys_proc_id);
> 		seq_printf(m, "siblings\t: %d\n",
> 			   cpumask_weight(cpu_core_mask(cpu)));
> 		seq_printf(m, "core id\t\t: %d\n", c->cpu_core_id);
> 		seq_printf(m, "cpu cores\t: %d\n", c->booted_cores);
> 		seq_printf(m, "apicid\t\t: %d\n", c->apicid);
> 		seq_printf(m, "initial apicid\t: %d\n", c->initial_apicid);
> 	}
> #endif
> }
> ============================================================================
> IIUC, 'siblings' tells us the number of hardware threads available
> within a socket and 'cpu cores' tells us the number of cores available
> within a socket.
> 
> So, if you are checking if the socket has multiple cores or not, 'cpu
> cores > 1' would suffice, no ? Why do we need the following code below ?
> 
This code is also modified to read information from sys topology.
Testcase will lear wether system is hyper threaded by reading bit mask
of cpus in thread_siblings file.
> > +       
> > +        if int( siblings[1] ) == int( cpu_cores[1] ):
> 		^^^^^^^^^^^^^^^^^^^^^
> 		This only means that each core in the socket has only
> 		one processing unit or thread.
> > +            if int( cpu_cores[1] ) > 1:
> > +                multi_core = 1
> > +            else:
> > +                multi_core = 0
> > +        else:
> > +            num_of_cpus = int(siblings[1]) / int(cpu_cores[1])
> 		^^^^^^^^^^
> 		This tells us the number of hardware threads per core.
> > +            if num_of_cpus > 1:
> > +                multi_core = 1
> > +            else:
> > +                multi_core = 0
> > +        file_cpuinfo.close()
> > +        return multi_core
> > +    except Exception:
> > +        print "Failed to check if system is multi core system"
> > +        sys.exit(1)
> > +
> >  def get_hyper_thread_count():
> >      ''' Return number of threads in CPU. For eg for x3950 this function
> >          would return 2. In future if 4 threads are supported in CPU, this
> > @@ -153,6 +184,40 @@ def map_cpuid_pkgid():
> >                  sys.exit(1)
> > 
> > 
> > +def generate_sibling_list():
> > +    ''' Routine to generate siblings list
> > +    '''
> > +    try:
> > +        for i in range(0, cpu_count):
> > +            siblings_file = '/sys/devices/system/cpu/cpu%s' % i
> > +            siblings_file += '/topology/thread_siblings_list'
> > +            threads_sibs = open(siblings_file).read().rstrip()
> > +            thread_ids = threads_sibs.split("-")
> A string '0-3' means that 0,1,2,3 are the siblings and not just 0 and 3.
> Also, you can have strings like '0,4' which means that only 0 and 4 are
> siblings.
> 
Modified this function. Entire function is modified to read cpu bit mask
information from thread_siblings. 
> Thus I am not sure if the parsing that you're doing is sufficient
> to generate the list.
> 
> > +    
> > +            if not thread_ids in siblings_list:
> > +                siblings_list.append(thread_ids)
> > +    except Exception, details:
> > +        print "Exception in generate_siblings_list", details
> > +        sys.exit(1)
> > +
> > +def get_siblings(cpu_id):
> > +    ''' Return siblings of cpu_id
> > +    '''
> > +    try:
> > +        cpus = ""
> > +        for i in range(0, len(siblings_list)):
> > +            for cpu in siblings_list[i]:
> > +                if cpu_id == cpu:
> > +                    for j in siblings_list[i]:
> > +                        # Exclude cpu_id in the list of siblings
> > +                        if j != cpu_id:
> > +                            cpus += j
> > +                    return cpus
> > +        return cpus
> > +    except Exception, details:
> > +        print "Exception in get_siblings", details
> > +        sys.exit(1)
> > +
> >  def get_proc_data(stats_list):
> >      ''' Read /proc/stat info and store in dictionary
> >      '''
> > @@ -168,18 +233,18 @@ def get_proc_data(stats_list):
> >          sys.exit(1)
> > 
> >  def get_proc_loc_count(loc_stats):
> > -    ''' Read /proc/stat info and store in dictionary
> > +    ''' Read /proc/interrupts info and store in list
> >      '''
> >      try:
> >          file_procstat = open("/proc/interrupts", 'r')
> >          for line in file_procstat:
> > -            if line.startswith('LOC:'):
> > +            if line.startswith(' LOC:') or line.startswith('LOC:'):
> >                  data = line.split()
> >                  for i in range(0, cpu_count):
> >                      # To skip LOC
> >                      loc_stats.append(data[i+1])
> > -                    print data[i+1]
> > -        file_procstat.close()
> > +                file_procstat.close()
> > +                return
> >      except Exception, details:
> >          print "Could not read interrupt statistics", details
> >          sys.exit(1)
> > @@ -192,6 +257,8 @@ def set_sched_mc_power(sched_mc_level):
> >          os.system('echo %s > \
> >              /sys/devices/system/cpu/sched_mc_power_savings 2>/dev/null'
> >              % sched_mc_level)
> > +
> > +        get_proc_data(stats_start)
> >      except OSError, e:
> >          print "Could not set sched_mc_power_savings to", sched_mc_level, e
> >  	sys.exit(1)
> > @@ -203,6 +270,8 @@ def set_sched_smt_power(sched_smt_level)
> >          os.system('echo %s > \
> >              /sys/devices/system/cpu/sched_smt_power_savings 2>/dev/null'
> >              % sched_smt_level)
> > +
> > +        get_proc_data(stats_start)
> >      except OSError, e:
> >          print "Could not set sched_smt_power_savings to", sched_smt_level, e
> >  	sys.exit(1)
> > @@ -218,21 +287,36 @@ def set_timer_migration_interface(value)
> >          print "Could not set timer_migration to ", value, e
> >          sys.exit(1)
> > 
> > -def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> > -    ''' Triggers ebizzy workload for sched_mc=1
> > -        testing
> > +def get_job_count(stress, workload, sched_smt):
> > +    ''' Returns number of jobs/threads to be triggered
> >      '''
> > +    
> >      try:
> >          if stress == "thread":
> >              threads = get_hyper_thread_count()
> 
> I am assuming get_hyper_thread_count() returns the number of
> hyperthreads within a core.
Yes number of hyper threads within a core
> 
> So, if stress = "thread", the number of software threads we create is
> equal to the numbef of h/w threads within a core ?
yes number of software threads is equal to number of hyper threads
within a core. In a system like mx3950 where every core has two threads,
workload will be triggered with two threads.
> 
> 
> >          if stress == "partial":
> >              threads = cpu_count / socket_count
> 
> If stress = "partial", the number of software threads we create is equal
> to the number of h/w threads within a socket ?
if stress is partial, then software threads we create is equal number of
cores / 2. 
> 
> > +            if is_hyper_threaded():
> > +                if workload == "ebizzy" and int(sched_smt) ==0:
> > +                    threads = threads / get_hyper_thread_count()
> 
> Thus if the workload is ebizzy and sched_smt is 0, the number of
> software threads we create is equal to the number of cores within the
> socket, right ?
Yes number of cores within a socket. But is system is hyper threaded
then number of software threads is further divided by hyper thread
count. Else jobs will be triggered in other socket.
> 
> > +                if workload == "kernbench" and int(sched_smt) < 2:
> > +                    threads = threads / get_hyper_thread_count()    
> >          if stress == "full":
> > -	    threads = cpu_count
> > +            threads = cpu_count
> >          if stress == "single_job":
> >              threads = 1
> >              duration = 180
> > +        return threads
> > +    except Exception, details:
> > +        print "get job count failed ", details
> > +        sys.exit(1)
> > 
> > +def trigger_ebizzy (sched_smt, stress, duration, background, pinned):
> > +    ''' Triggers ebizzy workload for sched_mc=1
> > +        testing
> > +    '''
> > +    try:
> > +        threads = get_job_count(stress, "ebizzy", sched_smt)
> >          olddir = os.getcwd()
> >          path = '%s/utils/benchmark' % os.environ['LTPROOT']
> >          os.chdir(path)
> > @@ -282,23 +366,14 @@ def trigger_ebizzy (sched_smt, stress, d
> >          print "Ebizzy workload trigger failed ", details
> >          sys.exit(1)   
> > 
> > -def trigger_kernbench (sched_smt, stress, background, pinned):
> > +def trigger_kernbench (sched_smt, stress, background, pinned, perf_test):
> >      ''' Trigger load on system like kernbench.
> >          Copys existing copy of LTP into as LTP2 and then builds it
> >          with make -j
> >      '''
> >      olddir = os.getcwd()
> >      try:
> > -        if stress == "thread":
> > -	    threads = 2
> > -        if stress == "partial":
> > -	    threads = cpu_count / socket_count
> > -            if is_hyper_threaded() and int(sched_smt) !=2:
> > -                threads = threads / get_hyper_thread_count()
> > -        if stress == "full":
> > -            threads = cpu_count
> > -        if stress == "single_job":
> > -            threads = 1
> > +        threads = get_job_count(stress, "kernbench", sched_smt)
> > 
> >          dst_path = "/root"
> >          olddir = os.getcwd()      
> > @@ -335,24 +410,35 @@ def trigger_kernbench (sched_smt, stress
> >          get_proc_loc_count(intr_start)
> >          if pinned == "yes":
> >              os.system ( 'taskset -c %s %s/kernbench -o %s -M -H -n 1 \
> > -                >/dev/null 2>&1' % (cpu_count-1, benchmark_path, threads))
> > +                >/dev/null 2>&1 &' % (cpu_count-1, benchmark_path, threads))
> > +
> > +            # We have to delete import in future
> > +            import time
> > +            time.sleep(240)
> > +            stop_wkld("kernbench")
> >          else:
> >              if background == "yes":
> >                  os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
> >                      % (benchmark_path, threads))
> >              else:
> > -                os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> > -                    % (benchmark_path, threads))
> > +                if perf_test == "yes":
> > +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1' \
> > +                        % (benchmark_path, threads))
> > +                else:
> > +                    os.system ( '%s/kernbench -o %s -M -H -n 1 >/dev/null 2>&1 &' \
> > +                        % (benchmark_path, threads))
> > +                    # We have to delete import in future
> > +                    import time
> > +                    time.sleep(240)
> > +                    stop_wkld("kernbench")
> >          
> >          print "INFO: Workload kernbench triggerd"
> >          os.chdir(olddir)
> > -        #get_proc_data(stats_stop)
> > -        #get_proc_loc_count(intr_stop)
> >      except Exception, details:
> >          print "Workload kernbench trigger failed ", details
> >          sys.exit(1)
> >     
> > -def trigger_workld(sched_smt, workload, stress, duration, background, pinned):
> > +def trigger_workld(sched_smt, workload, stress, duration, background, pinned, perf_test):
> >      ''' Triggers workload passed as argument. Number of threads 
> >          triggered is based on stress value.
> >      '''
> > @@ -360,7 +446,7 @@ def trigger_workld(sched_smt, workload, 
> >          if workload == "ebizzy":
> >              trigger_ebizzy (sched_smt, stress, duration, background, pinned)
> >          if workload == "kernbench":
> > -            trigger_kernbench (sched_smt, stress, background, pinned)
> > +            trigger_kernbench (sched_smt, stress, background, pinned, perf_test)
> >      except Exception, details:
> >          print "INFO: Trigger workload failed", details
> >          sys.exit(1)
> > @@ -434,7 +520,7 @@ def generate_report():
> >              print >> keyvalfile, "package-%s=%3.4f" % \
> >  		(pkg, (float(total_idle)*100/total))
> >      except Exception, details:
> > -        print "Generating reportfile failed: ", details
> > +        print "Generating utilization report failed: ", details
> >          sys.exit(1)
> > 
> >      #Add record delimiter '\n' before closing these files
> > @@ -454,20 +540,18 @@ def generate_loc_intr_report():
> > 
> >          get_proc_loc_count(intr_stop)
> > 
> > -        print "Before substracting"
> > -        for i in range(0, cpu_count):
> > -            print "CPU",i, intr_start[i], intr_stop[i]
> > -            reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> > -            print >> reportfile, "=============================================="
> > -            print >> reportfile, "     Local timer interrupt stats              "
> > -            print >> reportfile, "=============================================="
> > +        reportfile = open('/procstat/cpu-loc_interrupts', 'a')
> > +        print >> reportfile, "=============================================="
> > +        print >> reportfile, "     Local timer interrupt stats              "
> > +        print >> reportfile, "=============================================="
> > +
> >          for i in range(0, cpu_count):
> >              intr_stop[i] =  int(intr_stop[i]) - int(intr_start[i])
> >              print >> reportfile, "CPU%s: %s" %(i, intr_stop[i])
> >          print >> reportfile
> >          reportfile.close()
> >      except Exception, details:
> > -        print "Generating reportfile failed: ", details
> > +        print "Generating interrupt report failed: ", details
> >          sys.exit(1)
> > 
> >  def record_loc_intr_count():
> > @@ -542,25 +626,24 @@ def validate_cpugrp_map(cpu_group, sched
> >                                  modi_cpu_grp.remove(core_cpus[i]) 
> >                                  if len(modi_cpu_grp) == 0:
> >                                      return 0
> > -                            else:
> > +                            #This code has to be deleted 
> > +                            #else:
> >                                  # If sched_smt == 0 then its oky if threads run
> >                                  # in different cores of same package 
> > -                                if sched_smt_level == 1:
> > -                                    sys.exit(1)
> > -                                else:
> > -                                    if len(cpu_group) == 2 and \
> > -                                        len(modi_cpu_grp) < len(cpu_group):
> > -                                        print "INFO:CPUs utilized not in a core"
> > -                                        return 1                                        
> > -            print "INFO: CPUs utilized is not in same package or core"
> > -            return(1)
> > +                                #if sched_smt_level > 0 :
> > +                                    #return 1
> >  	else:
> >              for pkg in sorted(cpu_map.keys()):
> >                  pkg_cpus = cpu_map[pkg]
> > -                if pkg_cpus == cpu_group:
> > -                    return(0)
> > -                 
> > -            return(1) 
> > +                if len(cpu_group) == len(pkg_cpus):
> > +                    if pkg_cpus == cpu_group:
> > +                        return(0)
> > +                else:
> > +                    if int(cpus_utilized[0]) in cpu_map[pkg] or int(cpus_utilized[1]) in cpu_map[pkg]:
> > +                        return(0)
> > +
> > +        return(1) 
> > +
> >      except Exception, details:
> >          print "Exception in validate_cpugrp_map: ", details
> >          sys.exit(1)
> > @@ -605,36 +688,70 @@ def verify_sched_domain_dmesg(sched_mc_l
> >          print "Reading dmesg failed", details
> >          sys.exit(1)
> > 
> > -def validate_cpu_consolidation(work_ld, sched_mc_level, sched_smt_level):
> > +def get_cpu_utilization(cpu):
> > +    ''' Return cpu utilization of cpu_id
> > +    '''
> > +    try:
> > +        for l in sorted(stats_percentage.keys()):
> 
> What's the key used to index the elements of this hash table ?
> 
> 
> > +            if cpu == stats_percentage[l][0]:
> > +                return stats_percentage[l][1]
> > +        return -1
> > +    except Exception, details:
> > +        print "Exception in get_cpu_utilization", details
> > +        sys.exit(1)
> > +
> > +def validate_cpu_consolidation(stress, work_ld, sched_mc_level, sched_smt_level):
> >      ''' Verify if cpu's on which threads executed belong to same
> >      package
> >      '''
> >      cpus_utilized = list()
> > +    threads = get_job_count(stress, work_ld, sched_smt_level)
> >      try:
> >          for l in sorted(stats_percentage.keys()):
> >              #modify threshold
> > +            cpu_id = stats_percentage[l][0].split("cpu")
> > +            if cpu_id[1] == '':
> > +                continue
> > +            if int(cpu_id[1]) in cpus_utilized:
> > +                continue
> >              if is_hyper_threaded():
> Does is_hyper_threaded() check the status of the system from procfs or
> sysfs every time it's called ? An easier way could be to obtain that
> once and save the state in a variable, if it's not done that way
> already.
It is read from sysfs hereafter. will look into the code and modify.
> 
> > -                if stats_percentage[l][1] > 25 and work_ld == "kernbench":
> > -                    cpu_id = stats_percentage[l][0].split("cpu")
> > -                    if cpu_id[1] != '':
> > +                if work_ld == "kernbench" and sched_smt_level < sched_mc_level:
> > +                    siblings = get_siblings(cpu_id[1])
> > +                    if siblings != "":
> > +                        sib_list = siblings.split()
> > +                        utilization = int(stats_percentage[l][1])
> > +                        for i in range(0, len(sib_list)):
> > +                            utilization += int(get_cpu_utilization("cpu%s" %sib_list[i])) 
> > +                    else:
> > +                        utilization = stats_percentage[l][1]
> > +                    if utilization > 40:
> >                          cpus_utilized.append(int(cpu_id[1]))
> > +                        if siblings != "":
> > +                            for i in range(0, len(sib_list)):
> > +                                cpus_utilized.append(int(sib_list[i]))
> >                  else:
> > -                    if stats_percentage[l][1] > 70:
> > -                        cpu_id = stats_percentage[l][0].split("cpu")
> > -                        if cpu_id[1] != '':
> > -                            cpus_utilized.append(int(cpu_id[1]))
> > +                    # This threshold wuld be modified based on results
> 
> It's easier to have these constants stored in some variables with
> appropriate names. That would be much easier to edit and update later
> instead of searching the code every time.
This modification is done.
> 
> > +                    if stats_percentage[l][1] > 40:
> > +                        cpus_utilized.append(int(cpu_id[1]))
> >              else:
> > -                if stats_percentage[l][1] > 70:
> > -                    cpu_id = stats_percentage[l][0].split("cpu")
> > -                    if cpu_id[1] != '':
> > +                if work_ld == "kernbench" :
> > +                    if stats_percentage[l][1] > 50:
> >                          cpus_utilized.append(int(cpu_id[1]))
> > -                    cpus_utilized.sort()
> > +                else:
> > +                    if stats_percentage[l][1] > 70:
> > +                        cpus_utilized.append(int(cpu_id[1]))
> > +            cpus_utilized.sort()
> >          print "INFO: CPU's utilized ", cpus_utilized
> > 
> > +        # If length of CPU's utilized is not = number of jobs exit with 1
> > +        if len(cpus_utilized) < threads:
> > +            return 1
> > +
> >          status = validate_cpugrp_map(cpus_utilized, sched_mc_level, \
> >              sched_smt_level)
> >          if status == 1:
> >              print "INFO: CPUs utilized is not in same package or core"
> > +
> >          return(status)
> >      except Exception, details:
> >          print "Exception in validate_cpu_consolidation: ", details
> > @@ -645,7 +762,8 @@ def get_cpuid_max_intr_count():
> >      try:
> >          highest = 0
> >          second_highest = 0
> > -        global cpu1_max_intr, cpu2_max_intr
> > +        cpus_utilized = []
> > +        
> >          #Skipping CPU0 as it is generally high
> >          for i in range(1, cpu_count):
> >              if int(intr_stop[i]) > int(highest):
> > @@ -658,15 +776,19 @@ def get_cpuid_max_intr_count():
> >                  if int(intr_stop[i]) > int(second_highest):
> >                      second_highest = int(intr_stop[i])
> >                      cpu2_max_intr = i
> > +        cpus_utilized.append(cpu1_max_intr)
> > +        cpus_utilized.append(cpu2_max_intr)
> > +        
> >          for i in range(1, cpu_count):
> >              if i != cpu1_max_intr and i != cpu2_max_intr:
> >                  diff = second_highest - intr_stop[i]
> >                  ''' Threshold of difference has to be manipulated '''
> >                  if diff < 10000:
> >                      print "INFO: Diff in interrupt count is below threshold"
> > -                    return 1
> > +                    cpus_utilized = []
> > +                    return cpus_utilized
> >          print "INFO: Interrupt count in other CPU's low as expected"
> > -        return 0 
> > +        return cpus_utilized
> >      except Exception, details:
> >          print "Exception in get_cpuid_max_intr_count: ", details
> >          sys.exit(1)
> > @@ -675,14 +797,12 @@ def validate_ilb (sched_mc_level, sched_
> >      ''' Validate if ilb is running in same package where work load is running
> >      '''
> >      try:
> > -        status = get_cpuid_max_intr_count()
> > -        if status == 1:
> > +        cpus_utilized = get_cpuid_max_intr_count()
> > +        if not cpus_utilized:
> >              return 1
> > -        for pkg in sorted(cpu_map.keys()):
> > -            if cpu1_max_intr in cpu_map[pkg] and cpu2_max_intr in cpu_map[pkg]:
> > -                return 0
> > -        print "INFO: CPUs with higher interrupt count is not in same package"
> > -        return 1
> > +       
> > +        status = validate_cpugrp_map(cpus_utilized, sched_mc_level, sched_smt_level)
> > +        return status
> >      except Exception, details:
> >          print "Exception in validate_ilb: ", details
> >          sys.exit(1)
> > @@ -706,3 +826,14 @@ def reset_schedsmt():
> >      except OSError, e:
> >          print "Could not set sched_smt_power_savings to 0", e
> >          sys.exit(1)
> > +
> > +def stop_wkld(work_ld):
> > +    ''' Kill workload triggered in background
> > +    '''
> > +    try:
> > +        os.system('pkill %s 2>/dev/null' %work_ld)
> > +        if work_ld == "kernbench":
> > +            os.system('pkill make 2>/dev/null')
> > +    except OSError, e:
> > +        print "Exception in stop_wkld", e
> > +        sys.exit(1)
> 


------------------------------------------------------------------------------
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2009-11-27 10:10 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2009-10-13  7:43 [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Poornima Nayak
2009-10-13  7:43 ` [LTP] [Patch 2/6] Developed new functions and fixed issues causing ilb test failure Poornima Nayak
2009-10-13 10:12   ` Subrata Modak
2009-10-14  2:17   ` Garrett Cooper
2009-10-14  9:25   ` Gautham R Shenoy
2009-11-27 10:09     ` poornima nayak
2009-10-13  7:43 ` [LTP] [Patch 3/6] Modified ilb test to run with ebizzy as default workload Poornima Nayak
2009-10-13 10:12   ` Subrata Modak
2009-10-13  7:44 ` [LTP] [Patch 4/6] Enhanced & Modified cpu_consolidation testcase Poornima Nayak
2009-10-13 10:12   ` Subrata Modak
2009-10-13  7:44 ` [LTP] [Patch 5/6] Modified master script to pass appropriate arguments Poornima Nayak
2009-10-13 10:12   ` Subrata Modak
2009-10-14  1:38   ` Garrett Cooper
2009-10-13  7:44 ` [LTP] [Patch 6/6] Patch to fix workload installation issue Poornima Nayak
2009-10-13 10:12   ` Subrata Modak
2009-10-14  1:39     ` Garrett Cooper
2009-10-13 10:12 ` [LTP] [Patch 1/6] Fix issues in cpu consolidation verification functions Subrata Modak

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox