From: kernel test robot
To: Ingo Molnar
CC: Mike Galbraith, Peter Zijlstra, Thomas Gleixner, Linus Torvalds, LKML, linux-kernel@vger.kernel.org, lkp@01.org
Subject: [lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression
Date: Tue, 31 May 2016 16:20:54 +0800
Message-ID: <87inxud4ex.fsf@yhuang-dev.intel.com>

FYI, we noticed a hackbench.throughput -32.9% regression due to commit:

commit 53d3bc773eaa7ab1cf63585e76af7ee869d5e709 ("Revert "sched/fair: Fix fairness issue on migration"")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

in testcase: hackbench
on test machine: ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
with following parameters: cpufreq_governor=performance/ipc=socket/mode=threads/nr_threads=50%

In addition, the commit also has a significant impact on the following tests:

unixbench: unixbench.score 25.9% improvement on test machine - ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
  with test parameters: cpufreq_governor=performance/nr_task=100%/test=context1

hackbench: hackbench.throughput -15.6% regression on test machine - lkp-hsw-ep4: 72
threads Haswell-EP with 128G memory
  with test parameters: cpufreq_governor=performance/ipc=pipe/iterations=12/mode=process/nr_threads=50%

Details are as below:
-------------------------------------------------------------------------------------------------->

==========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
  gcc-4.9/performance/socket/x86_64-rhel/threads/50%/debian-x86_64-2015-02-07.cgz/ivb42/hackbench

commit:
  c5114626f33b62fa7595e57d87f33d9d1f8298a2
  53d3bc773eaa7ab1cf63585e76af7ee869d5e709

c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \

    196590 ±  0%     -32.9%     131963 ±  2%  hackbench.throughput
    602.66 ±  0%      +2.8%     619.27 ±  2%  hackbench.time.elapsed_time
    602.66 ±  0%      +2.8%     619.27 ±  2%  hackbench.time.elapsed_time.max
  1.76e+08 ±  3%    +236.0%  5.914e+08 ±  2%  hackbench.time.involuntary_context_switches
    208664 ±  2%     +26.0%     262929 ±  3%  hackbench.time.minor_page_faults
      4401 ±  0%      +5.7%       4650 ±  0%  hackbench.time.percent_of_cpu_this_job_got
     25256 ±  0%     +10.2%      27842 ±  2%  hackbench.time.system_time
      1272 ±  0%     -24.5%     961.37 ±  2%  hackbench.time.user_time
  7.64e+08 ±  1%    +131.8%  1.771e+09 ±  2%  hackbench.time.voluntary_context_switches
    143370 ±  0%     -12.0%     126124 ±  1%  meminfo.SUnreclaim
   2462880 ±  0%     -35.6%    1585869 ±  5%  softirqs.SCHED
      4051 ±  0%     -39.9%       2434 ±  3%  uptime.idle
   1766752 ±  1%    +122.6%    3932589 ±  1%  vmstat.system.cs
    249718 ±  2%    +307.4%    1017398 ±  3%  vmstat.system.in
  1.76e+08 ±  3%    +236.0%  5.914e+08 ±  2%  time.involuntary_context_switches
    208664 ±  2%     +26.0%     262929 ±  3%  time.minor_page_faults
      1272 ±  0%     -24.5%     961.37 ±  2%  time.user_time
  7.64e+08 ±  1%    +131.8%  1.771e+09 ±  2%  time.voluntary_context_switches
      2228 ± 92%    +137.1%       5285 ± 15%  numa-meminfo.node0.AnonHugePages
     73589 ±  4%     -12.5%      64393 ±  2%  numa-meminfo.node0.SUnreclaim
     27438 ± 83%    +102.6%      55585 ±  6%  numa-meminfo.node0.Shmem
    101051 ±  3%     -10.9%      90044 ±  2%  numa-meminfo.node0.Slab
     69844 ±  4%     -11.8%      61579 ±  3%  numa-meminfo.node1.SUnreclaim
   1136461 ±  3%     +16.6%    1324662 ±  5%  numa-numastat.node0.local_node
   1140216 ±  3%     +16.2%    1324689 ±  5%  numa-numastat.node0.numa_hit
      3755 ± 68%     -99.3%      27.25 ± 94%  numa-numastat.node0.other_node
   1098889 ±  4%     +20.1%    1320211 ±  6%  numa-numastat.node1.local_node
   1101996 ±  4%     +20.5%    1327590 ±  6%  numa-numastat.node1.numa_hit
      7.18 ±  0%     -50.2%       3.57 ± 43%  perf-profile.cycles-pp.call_cpuidle
      8.09 ±  0%     -44.7%       4.47 ± 38%  perf-profile.cycles-pp.cpu_startup_entry
      7.17 ±  0%     -50.3%       3.56 ± 43%  perf-profile.cycles-pp.cpuidle_enter
      7.14 ±  0%     -50.3%       3.55 ± 43%  perf-profile.cycles-pp.cpuidle_enter_state
      7.11 ±  0%     -50.6%       3.52 ± 43%  perf-profile.cycles-pp.intel_idle
      8.00 ±  0%     -44.5%       4.44 ± 38%  perf-profile.cycles-pp.start_secondary
     92.32 ±  0%      +5.4%      97.32 ±  0%  turbostat.%Busy
      2763 ±  0%      +5.4%       2912 ±  0%  turbostat.Avg_MHz
      7.48 ±  0%     -66.5%       2.50 ±  7%  turbostat.CPU%c1
      0.20 ±  2%      -6.4%       0.18 ±  2%  turbostat.CPU%c6
    180.03 ±  0%      -1.3%     177.62 ±  0%  turbostat.CorWatt
      5.83 ±  0%     +38.9%       8.10 ±  3%  turbostat.RAMWatt
      6857 ± 83%    +102.8%      13905 ±  6%  numa-vmstat.node0.nr_shmem
     18395 ±  4%     -12.4%      16121 ±  2%  numa-vmstat.node0.nr_slab_unreclaimable
    675569 ±  3%     +12.7%     761135 ±  4%  numa-vmstat.node0.numa_local
     71537 ±  5%      -7.9%      65920 ±  2%  numa-vmstat.node0.numa_other
     17456 ±  4%     -11.7%      15405 ±  3%  numa-vmstat.node1.nr_slab_unreclaimable
    695848 ±  3%     +14.9%     799683 ±  5%  numa-vmstat.node1.numa_hit
    677405 ±  4%     +14.5%     775903 ±  6%  numa-vmstat.node1.numa_local
     18442 ± 19%     +28.9%      23779 ±  5%  numa-vmstat.node1.numa_other
 1.658e+09 ±  0%     -59.1%  6.784e+08 ±  7%  cpuidle.C1-IVT.time
 1.066e+08 ±  0%     -40.3%   63661563 ±  6%  cpuidle.C1-IVT.usage
  26348635 ±  0%     -86.8%    3471048 ± 15%  cpuidle.C1E-IVT.time
    291620 ±  0%     -85.1%      43352 ± 15%  cpuidle.C1E-IVT.usage
  54158643 ±  1%     -88.5%    6254009 ± 14%  cpuidle.C3-IVT.time
    482437 ±  1%     -87.0%      62620 ± 16%  cpuidle.C3-IVT.usage
 5.028e+08 ±  0%     -75.8%  1.219e+08 ±  8%  cpuidle.C6-IVT.time
   3805026 ±  0%     -85.5%     552326 ± 16%  cpuidle.C6-IVT.usage
      2766 ±  4%     -51.4%       1344 ±  6%  cpuidle.POLL.usage
     35841 ±  0%     -12.0%      31543 ±  0%  proc-vmstat.nr_slab_unreclaimable
    154090 ±  2%     +43.1%     220509 ±  3%  proc-vmstat.numa_hint_faults
    129240 ±  2%     +47.4%     190543 ±  3%  proc-vmstat.numa_hint_faults_local
   2238386 ±  1%     +18.4%    2649737 ±  2%  proc-vmstat.numa_hit
   2232163 ±  1%     +18.4%    2643105 ±  2%  proc-vmstat.numa_local
     22315 ±  1%     -21.0%      17625 ±  5%  proc-vmstat.numa_pages_migrated
    154533 ±  2%     +45.6%     225071 ±  3%  proc-vmstat.numa_pte_updates
    382980 ±  2%     +33.2%     510157 ±  4%  proc-vmstat.pgalloc_dma32
   7311738 ±  2%     +37.2%   10029060 ±  2%  proc-vmstat.pgalloc_normal
   7672040 ±  2%     +37.1%   10519738 ±  2%  proc-vmstat.pgfree
     22315 ±  1%     -21.0%      17625 ±  5%  proc-vmstat.pgmigrate_success
      5487 ±  6%     -12.6%       4797 ±  4%  slabinfo.UNIX.active_objs
      5609 ±  5%     -12.2%       4926 ±  4%  slabinfo.UNIX.num_objs
      4362 ±  4%     +14.6%       4998 ±  2%  slabinfo.cred_jar.active_objs
      4362 ±  4%     +14.6%       4998 ±  2%  slabinfo.cred_jar.num_objs
     42525 ±  0%     -41.6%      24824 ±  3%  slabinfo.kmalloc-256.active_objs
    845.50 ±  0%     -42.9%     482.50 ±  3%  slabinfo.kmalloc-256.active_slabs
     54124 ±  0%     -42.9%      30920 ±  3%  slabinfo.kmalloc-256.num_objs
    845.50 ±  0%     -42.9%     482.50 ±  3%  slabinfo.kmalloc-256.num_slabs
     47204 ±  0%     -37.9%      29335 ±  2%  slabinfo.kmalloc-512.active_objs
    915.25 ±  0%     -39.8%     551.00 ±  3%  slabinfo.kmalloc-512.active_slabs
     58599 ±  0%     -39.8%      35300 ±  3%  slabinfo.kmalloc-512.num_objs
    915.25 ±  0%     -39.8%     551.00 ±  3%  slabinfo.kmalloc-512.num_slabs
     12443 ±  2%     -20.1%       9944 ±  3%  slabinfo.pid.active_objs
     12443 ±  2%     -20.1%       9944 ±  3%  slabinfo.pid.num_objs
    440.00 ±  5%     -32.8%     295.75 ±  4%  slabinfo.taskstats.active_objs
    440.00 ±  5%     -32.8%     295.75 ±  4%  slabinfo.taskstats.num_objs
    312.45 ±157%     -94.8%      16.29 ± 33%  sched_debug.cfs_rq:/.load.stddev
      0.27 ±  5%     -56.3%       0.12 ± 30%  sched_debug.cfs_rq:/.nr_running.stddev
     16.51 ±  1%      +9.5%      18.08 ±  3%  sched_debug.cfs_rq:/.runnable_load_avg.avg
      0.05 ±100%   +7950.0%       3.66 ± 48%  sched_debug.cfs_rq:/.runnable_load_avg.min
   -740916 ±-28%    -158.5%     433310 ±120%  sched_debug.cfs_rq:/.spread0.avg
   1009940 ± 19%     +75.8%    1775442 ± 30%  sched_debug.cfs_rq:/.spread0.max
  -2384171 ± -7%     -65.7%    -818684 ±-76%  sched_debug.cfs_rq:/.spread0.min
    749.14 ±  1%     +13.0%     846.34 ±  1%  sched_debug.cfs_rq:/.util_avg.min
     51.66 ±  4%     -36.3%      32.92 ±  5%  sched_debug.cfs_rq:/.util_avg.stddev
    161202 ±  7%     -41.7%      93997 ±  4%  sched_debug.cpu.avg_idle.avg
    595158 ±  6%     -51.2%     290491 ± 22%  sched_debug.cpu.avg_idle.max
    132760 ±  8%     -58.8%      54718 ± 19%  sched_debug.cpu.avg_idle.stddev
     11.40 ± 11%    +111.0%      24.05 ± 16%  sched_debug.cpu.clock.stddev
     11.40 ± 11%    +111.0%      24.05 ± 16%  sched_debug.cpu.clock_task.stddev
     32.34 ±  2%     +23.9%      40.07 ± 19%  sched_debug.cpu.cpu_load[0].max
      0.34 ±103%    +520.0%       2.11 ± 67%  sched_debug.cpu.cpu_load[0].min
     32.18 ±  2%     +22.7%      39.50 ± 17%  sched_debug.cpu.cpu_load[1].max
      3.32 ±  8%     +84.9%       6.14 ± 12%  sched_debug.cpu.cpu_load[1].min
      5.39 ±  7%     +36.3%       7.34 ±  4%  sched_debug.cpu.cpu_load[2].min
     33.18 ±  3%     +14.0%      37.82 ±  5%  sched_debug.cpu.cpu_load[4].max
      5.56 ±  6%     +16.2%       6.45 ±  6%  sched_debug.cpu.cpu_load[4].stddev
     16741 ±  0%     -15.4%      14166 ±  2%  sched_debug.cpu.curr->pid.avg
     19196 ±  0%     -18.3%      15690 ±  1%  sched_debug.cpu.curr->pid.max
      5174 ±  5%     -55.4%       2305 ± 14%  sched_debug.cpu.curr->pid.stddev
      1410 ±  1%     -14.2%       1210 ±  6%  sched_debug.cpu.nr_load_updates.stddev
      9.95 ±  3%     -14.5%       8.51 ±  5%  sched_debug.cpu.nr_running.avg
     29.07 ±  2%     -15.0%      24.70 ±  4%  sched_debug.cpu.nr_running.max
      0.05 ±100%    +850.0%       0.43 ± 37%  sched_debug.cpu.nr_running.min
      7.64 ±  3%     -23.0%       5.88 ±  2%  sched_debug.cpu.nr_running.stddev
  10979930 ±  1%    +123.3%   24518490 ±  2%  sched_debug.cpu.nr_switches.avg
  12350130 ±  1%    +117.5%   26856375 ±  2%  sched_debug.cpu.nr_switches.max
   9594835 ±  2%    +132.6%   22314436 ±  2%  sched_debug.cpu.nr_switches.min
    769296 ±  1%     +56.8%    1206190 ±  3%  sched_debug.cpu.nr_switches.stddev
      8.30 ± 18%     +32.9%      11.02 ± 15%  sched_debug.cpu.nr_uninterruptible.max

[ASCII trend plots over the bisect samples omitted here; the panels covered:
 turbostat.Avg_MHz, turbostat._Busy, turbostat.CPU_c1, turbostat.PkgWatt,
 turbostat.CorWatt, turbostat.RAMWatt, hackbench.throughput, time.user_time,
 time.minor_page_faults, time.voluntary_context_switches,
 time.involuntary_context_switches, hackbench.time.user_time,
 hackbench.time.percent_of_cpu_this_job_got, hackbench.time.minor_page_faults,
 hackbench.time.voluntary_context_switches,
 hackbench.time.involuntary_context_switches, softirqs.SCHED, uptime.idle,
 cpuidle.POLL.usage, cpuidle.C1-IVT.time, cpuidle.C1-IVT.usage,
 cpuidle.C1E-IVT.time, cpuidle.C1E-IVT.usage, cpuidle.C3-IVT.time,
 cpuidle.C3-IVT.usage, cpuidle.C6-IVT.time, cpuidle.C6-IVT.usage,
 meminfo.Slab, meminfo.SUnreclaim, vmstat.system.in, vmstat.system.cs]

	[*] bisect-good sample
	[O] bisect-bad sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

****************************************************************************************************

ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
==========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/tbox_group/test/testcase:
  gcc-4.9/performance/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/ivb42/context1/unixbench

commit:
  c5114626f33b62fa7595e57d87f33d9d1f8298a2
  53d3bc773eaa7ab1cf63585e76af7ee869d5e709
c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76=20 ---------------- --------------------------=20 %stddev %change %stddev \ | \=20=20 18006 =B1 1% +25.9% 22672 =B1 0% unixbench.score 39774 =B1 33% +5.4e+05% 2.138e+08 =B1 4% unixbench.time.involuntar= y_context_switches 1717 =B1 0% +1.9% 1749 =B1 0% unixbench.time.percent_of= _cpu_this_job_got 152.51 =B1 0% +33.9% 204.18 =B1 1% unixbench.time.user_time 7.052e+08 =B1 1% -3.9% 6.78e+08 =B1 1% unixbench.time.voluntary_= context_switches 4.243e+08 =B1 3% -9.4% 3.845e+08 =B1 7% cpuidle.C1-IVT.time 1.544e+08 =B1 6% -37.5% 96475672 =B1 5% cpuidle.C1-IVT.usage 409626 =B1 4% +28.6% 526843 =B1 15% softirqs.RCU 274815 =B1 4% -27.5% 199184 =B1 9% softirqs.SCHED 39774 =B1 33% +5.4e+05% 2.138e+08 =B1 4% time.involuntary_context_= switches 152.51 =B1 0% +33.9% 204.18 =B1 1% time.user_time 45.25 =B1 0% +12.7% 51.00 =B1 0% vmstat.procs.r 11774346 =B1 0% +20.2% 14152328 =B1 0% vmstat.system.cs 1848728 =B1 0% +22.7% 2269123 =B1 0% sched_debug.cfs_rq:/.min_= vruntime.avg 2029277 =B1 0% +18.7% 2409509 =B1 0% sched_debug.cfs_rq:/.min_= vruntime.max 1561074 =B1 5% +29.9% 2027122 =B1 3% sched_debug.cfs_rq:/.min_= vruntime.min 103209 =B1 9% -17.8% 84792 =B1 10% sched_debug.cfs_rq:/.min_= vruntime.stddev 11.68 =B1 6% -35.9% 7.49 =B1 6% sched_debug.cfs_rq:/.runn= able_load_avg.avg 103208 =B1 9% -17.8% 84795 =B1 10% sched_debug.cfs_rq:/.spre= ad0.stddev 946393 =B1 5% -24.5% 714499 =B1 10% sched_debug.cpu.avg_idle.= max 234059 =B1 6% -36.5% 148728 =B1 37% sched_debug.cpu.avg_idle.= stddev 11.57 =B1 6% -31.2% 7.96 =B1 20% sched_debug.cpu.cpu_load[= 1].avg 11.61 =B1 7% -34.4% 7.61 =B1 12% sched_debug.cpu.cpu_load[= 2].avg 11.70 =B1 7% -35.4% 7.56 =B1 8% sched_debug.cpu.cpu_load[= 3].avg 11.86 =B1 7% -36.1% 7.58 =B1 6% sched_debug.cpu.cpu_load[= 4].avg 0.48 =B1 6% +13.9% 0.54 =B1 3% sched_debug.cpu.nr_runnin= g.avg 0.37 =B1 5% +10.5% 0.41 =B1 4% sched_debug.cpu.nr_runnin= g.stddev 14556348 =B1 0% +20.1% 17474921 =B1 0% sched_debug.cpu.nr_switch= 
es.avg 14764042 =B1 0% +24.5% 18380752 =B1 0% sched_debug.cpu.nr_switch= es.max 14296508 =B1 0% +14.9% 16430231 =B1 0% sched_debug.cpu.nr_switch= es.min 121577 =B1 25% +268.4% 447878 =B1 8% sched_debug.cpu.nr_switch= es.stddev -9.42 =B1 -3% +20.4% -11.33 =B1-12% sched_debug.cpu.nr_uninte= rruptible.min ***************************************************************************= ************************ lkp-hsw-ep4: 72 threads Haswell-EP with 128G memory =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbo= x_group/testcase: gcc-4.9/performance/pipe/12/x86_64-rhel/process/50%/debian-x86_64-2015-02= -07.cgz/lkp-hsw-ep4/hackbench commit:=20 c5114626f33b62fa7595e57d87f33d9d1f8298a2 53d3bc773eaa7ab1cf63585e76af7ee869d5e709 c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76=20 ---------------- --------------------------=20 %stddev %change %stddev \ | \=20=20 207412 =B1 0% -15.6% 175076 =B1 1% hackbench.throughput 489.41 =B1 0% +18.4% 579.66 =B1 1% hackbench.time.elapsed_ti= me 489.41 =B1 0% +18.4% 579.66 =B1 1% hackbench.time.elapsed_ti= me.max 1.005e+09 =B1 0% +113.2% 2.142e+09 =B1 4% hackbench.time.involuntar= y_context_switches 6966 =B1 0% +2.2% 7118 =B1 0% hackbench.time.percent_of= _cpu_this_job_got 32394 =B1 0% +19.3% 38635 =B1 1% hackbench.time.system_time 1700 =B1 0% +54.6% 2627 =B1 3% hackbench.time.user_time 3.164e+09 =B1 0% +64.2% 5.195e+09 =B1 3% hackbench.time.voluntary_= context_switches 536.44 =B1 0% +17.1% 627.97 =B1 1% uptime.boot 4496 =B1 1% -16.4% 3757 =B1 4% uptime.idle 720.75 =B1 0% +14.7% 826.75 =B1 0% vmstat.procs.r 8795090 =B1 0% +44.3% 12689850 =B1 2% vmstat.system.cs 2115904 =B1 1% -7.1% 1965559 =B1 3% vmstat.system.in 49651750 =B1 0% -34.1% 32710138 =B1 3% 
numa-numastat.node0.local_node
  49657590 ±  0%     -34.1%   32719401 ±  3%  numa-numastat.node0.numa_hit
  51230886 ±  1%     -37.1%   32238968 ±  4%  numa-numastat.node1.local_node
  51235497 ±  1%     -37.1%   32241201 ±  4%  numa-numastat.node1.numa_hit
     16114 ±  3%     +15.3%      18577 ±  2%  softirqs.NET_RX
   3907664 ±  1%     +44.4%    5643157 ±  1%  softirqs.RCU
   2029740 ±  1%     -67.7%     655775 ± 16%  softirqs.SCHED
  17332687 ±  0%     +21.1%   20995794 ±  1%  softirqs.TIMER
     97.19 ±  0%      +1.5%      98.70 ±  0%  turbostat.%Busy
      2694 ±  0%      +1.2%       2726 ±  0%  turbostat.Avg_MHz
      2.58 ±  2%     -56.9%       1.11 ±  7%  turbostat.CPU%c1
      0.22 ±  3%     -14.8%       0.19 ±  2%  turbostat.CPU%c6
    894518 ±  5%     -16.2%     749856 ±  5%  numa-meminfo.node0.MemUsed
     31304 ± 18%     -19.8%      25116 ± 13%  numa-meminfo.node0.PageTables
    137230 ± 14%     -13.2%     119062 ±  7%  numa-meminfo.node0.Slab
     77654 ± 43%     +53.9%     119507 ±  2%  numa-meminfo.node1.Active(anon)
    676863 ±  6%     +18.9%     804493 ±  5%  numa-meminfo.node1.MemUsed
     40040 ± 87%    +102.8%      81204 ±  3%  numa-meminfo.node1.Shmem
      2.29 ±  8%     -82.5%       0.40 ±112%  perf-profile.cycles-pp.call_cpuidle
      3.41 ±  8%     -84.8%       0.52 ±113%  perf-profile.cycles-pp.cpu_startup_entry
      2.29 ±  8%     -82.4%       0.40 ±112%  perf-profile.cycles-pp.cpuidle_enter
      2.26 ±  9%     -82.4%       0.40 ±112%  perf-profile.cycles-pp.cpuidle_enter_state
      2.24 ±  9%     -82.4%       0.40 ±112%  perf-profile.cycles-pp.intel_idle
      3.42 ±  7%     -84.9%       0.52 ±113%  perf-profile.cycles-pp.start_secondary
     86451 ±  1%      +9.1%      94357 ±  3%  proc-vmstat.numa_hint_faults_local
 1.009e+08 ±  0%     -35.6%   64951081 ±  3%  proc-vmstat.numa_hit
 1.009e+08 ±  0%     -35.6%   64941826 ±  3%  proc-vmstat.numa_local
   1744958 ±  0%     -36.7%    1105128 ±  3%  proc-vmstat.pgalloc_dma32
  99309681 ±  0%     -35.5%   64014721 ±  3%  proc-vmstat.pgalloc_normal
  1.01e+08 ±  0%     -35.6%   65068018 ±  3%  proc-vmstat.pgfree
    489.41 ±  0%     +18.4%     579.66 ±  1%  time.elapsed_time
    489.41 ±  0%     +18.4%     579.66 ±  1%  time.elapsed_time.max
 1.005e+09 ±  0%    +113.2%  2.142e+09 ±  4%  time.involuntary_context_switches
     32394 ±  0%     +19.3%      38635 ±  1%  time.system_time
      1700 ±  0%     +54.6%       2627 ±  3%  time.user_time
 3.164e+09 ±  0%     +64.2%  5.195e+09 ±  3%  time.voluntary_context_switches
      7826 ± 18%     -19.7%       6283 ± 13%  numa-vmstat.node0.nr_page_table_pages
  24938156 ±  0%     -34.5%   16344223 ±  2%  numa-vmstat.node0.numa_hit
  24865727 ±  0%     -34.6%   16268676 ±  2%  numa-vmstat.node0.numa_local
     19415 ± 43%     +53.9%      29872 ±  2%  numa-vmstat.node1.nr_active_anon
     10012 ± 87%    +102.5%      20273 ±  3%  numa-vmstat.node1.nr_shmem
  25578109 ±  2%     -35.3%   16544997 ±  3%  numa-vmstat.node1.numa_hit
  25542618 ±  2%     -35.4%   16513089 ±  3%  numa-vmstat.node1.numa_local
  7.39e+08 ±  1%     -63.6%  2.693e+08 ± 12%  cpuidle.C1-HSW.time
 1.279e+08 ±  2%     -75.4%   31468140 ± 20%  cpuidle.C1-HSW.usage
  97966635 ±  3%     -38.4%   60323848 ±  6%  cpuidle.C1E-HSW.time
   2424496 ±  2%     -54.3%    1108542 ± 10%  cpuidle.C1E-HSW.usage
   2168324 ±  5%     -38.4%    1335858 ±  6%  cpuidle.C3-HSW.time
     23824 ±  2%     -51.7%      11496 ± 10%  cpuidle.C3-HSW.usage
    133416 ±  1%     -41.7%      77729 ± 10%  cpuidle.C6-HSW.usage
     72278 ± 96%     -85.4%      10574 ± 13%  cpuidle.POLL.time
      7564 ±  0%     -64.3%       2699 ± 13%  cpuidle.POLL.usage
    447972 ± 12%     -77.1%     102749 ± 39%  sched_debug.cfs_rq:/.MIN_vruntime.avg
  23408331 ±  2%     -74.0%    6077779 ± 38%  sched_debug.cfs_rq:/.MIN_vruntime.max
   3133258 ±  5%     -75.3%     773710 ± 35%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
      0.17 ±173%   +1025.0%       1.88 ± 15%  sched_debug.cfs_rq:/.load.min
      4.72 ±  5%     +21.2%       5.72 ±  4%  sched_debug.cfs_rq:/.load_avg.min
    447972 ± 12%     -77.1%     102749 ± 39%  sched_debug.cfs_rq:/.max_vruntime.avg
  23408331 ±  2%     -74.0%    6077779 ± 38%  sched_debug.cfs_rq:/.max_vruntime.max
   3133258 ±  5%     -75.3%     773710 ± 35%  sched_debug.cfs_rq:/.max_vruntime.stddev
  34877232 ±  0%     -16.9%   28973299 ±  2%  sched_debug.cfs_rq:/.min_vruntime.avg
  36136568 ±  0%     -16.9%   30030834 ±  1%  sched_debug.cfs_rq:/.min_vruntime.max
  33553337 ±  0%     -16.4%   28050567 ±  2%  sched_debug.cfs_rq:/.min_vruntime.min
    580186 ±  2%     -26.0%     429600 ± 11%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.08 ±110%    +710.0%       0.67 ± 21%  sched_debug.cfs_rq:/.nr_running.min
      0.17 ± 12%     -59.6%       0.07 ± 31%  sched_debug.cfs_rq:/.nr_running.stddev
     25.39 ±  2%     -17.8%      20.88 ±  3%  sched_debug.cfs_rq:/.runnable_load_avg.max
      0.44 ±173%   +1002.5%       4.90 ± 13%  sched_debug.cfs_rq:/.runnable_load_avg.min
      4.84 ±  2%     -39.4%       2.93 ±  8%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
    952653 ± 15%     -51.5%     462372 ± 50%  sched_debug.cfs_rq:/.spread0.avg
   2206041 ± 10%     -31.4%    1514231 ±  8%  sched_debug.cfs_rq:/.spread0.max
    577122 ±  2%     -25.8%     428166 ± 11%  sched_debug.cfs_rq:/.spread0.stddev
     46.85 ±  3%     -34.0%      30.93 ± 24%  sched_debug.cfs_rq:/.util_avg.stddev
    115635 ±  1%    +107.7%     240214 ±  8%  sched_debug.cpu.avg_idle.avg
    506560 ± 15%     +83.7%     930497 ±  4%  sched_debug.cpu.avg_idle.max
      6833 ±131%    +168.7%      18362 ± 34%  sched_debug.cpu.avg_idle.min
     78999 ±  9%    +214.9%     248764 ±  8%  sched_debug.cpu.avg_idle.stddev
    290289 ±  0%     +10.7%     321362 ±  0%  sched_debug.cpu.clock.avg
    290345 ±  0%     +10.7%     321461 ±  0%  sched_debug.cpu.clock.max
    290230 ±  0%     +10.7%     321263 ±  0%  sched_debug.cpu.clock.min
     34.48 ± 26%     +74.7%      60.23 ±  5%  sched_debug.cpu.clock.stddev
    290289 ±  0%     +10.7%     321362 ±  0%  sched_debug.cpu.clock_task.avg
    290345 ±  0%     +10.7%     321461 ±  0%  sched_debug.cpu.clock_task.max
    290230 ±  0%     +10.7%     321263 ±  0%  sched_debug.cpu.clock_task.min
     34.48 ± 26%     +74.7%      60.23 ±  5%  sched_debug.cpu.clock_task.stddev
      0.50 ± 80%    +865.0%       4.82 ±  7%  sched_debug.cpu.cpu_load[0].min
      2.00 ± 33%    +155.0%       5.10 ±  6%  sched_debug.cpu.cpu_load[1].min
      3.31 ± 17%     +59.6%       5.28 ±  6%  sched_debug.cpu.cpu_load[2].min
      4.28 ±  5%     +28.0%       5.47 ±  4%  sched_debug.cpu.cpu_load[3].min
     29.69 ± 10%     -21.4%      23.35 ±  4%  sched_debug.cpu.cpu_load[4].max
      4.39 ±  5%     +24.7%       5.47 ±  4%  sched_debug.cpu.cpu_load[4].min
      4.99 ±  9%     -30.3%       3.47 ±  5%  sched_debug.cpu.cpu_load[4].stddev
      1275 ± 74%    +660.4%       9696 ± 35%  sched_debug.cpu.curr->pid.min
      2960 ± 11%     -54.8%       1338 ± 39%  sched_debug.cpu.curr->pid.stddev
      0.22 ± 70%    +935.0%       2.30 ± 30%  sched_debug.cpu.load.min
      0.00 ± 11%     +39.0%       0.00 ±  4%  sched_debug.cpu.next_balance.stddev
    245043 ±  0%     +12.4%     275488 ±  0%  sched_debug.cpu.nr_load_updates.avg
    253700 ±  0%     +11.3%     282470 ±  0%  sched_debug.cpu.nr_load_updates.max
    242515 ±  0%     +12.5%     272755 ±  0%  sched_debug.cpu.nr_load_updates.min
      8.93 ±  5%     +12.5%      10.05 ±  2%  sched_debug.cpu.nr_running.avg
     29.08 ±  4%     -23.2%      22.35 ±  2%  sched_debug.cpu.nr_running.max
      0.11 ± 70%   +1970.0%       2.30 ± 26%  sched_debug.cpu.nr_running.min
      6.52 ±  3%     -40.5%       3.88 ±  8%  sched_debug.cpu.nr_running.stddev
  29380032 ±  0%     +62.7%   47789650 ±  1%  sched_debug.cpu.nr_switches.avg
  32480191 ±  0%     +63.0%   52947357 ±  1%  sched_debug.cpu.nr_switches.max
  26568245 ±  0%     +64.3%   43639487 ±  2%  sched_debug.cpu.nr_switches.min
   1724177 ±  1%     +28.9%    2223172 ±  5%  sched_debug.cpu.nr_switches.stddev
    307.39 ±  7%     -42.6%     176.42 ± 14%  sched_debug.cpu.nr_uninterruptible.max
   -278.64 ±-10%     -41.9%    -162.00 ± -5%  sched_debug.cpu.nr_uninterruptible.min
    131.21 ±  6%     -45.4%      71.66 ±  3%  sched_debug.cpu.nr_uninterruptible.stddev
    290228 ±  0%     +10.7%     321261 ±  0%  sched_debug.cpu_clk
    286726 ±  0%     +11.2%     318853 ±  0%  sched_debug.ktime
    290228 ±  0%     +10.7%     321261 ±  0%  sched_debug.sched_clk

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
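For reference, the %change column is the plain percent difference between the patched and base means; a quick sketch of the arithmetic, using the hackbench.throughput row from the comparison table above:

```shell
# %change = (patched - base) / base * 100, as in the comparison table.
base=196590      # hackbench.throughput, base commit c5114626f33b62fa
patched=131963   # hackbench.throughput, revert commit 53d3bc773eaa7ab1
awk -v a="$base" -v b="$patched" \
    'BEGIN { printf "%+.1f%%\n", (b - a) / a * 100 }'
# prints -32.9%, matching the regression reported above
```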
Thanks,
Ying Huang

--=-=-=
Content-Type: text/plain; charset=ascii
Content-Disposition: attachment; filename=job.yaml

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: hackbench
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  heartbeat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
NFS_HANG_DF_TIMEOUT: 200
NFS_HANG_CHECK_INTERVAL: 900
default-watchdogs:
  oom-killer:
  watchdog:
  nfs-hang:
commit: 53d3bc773eaa7ab1cf63585e76af7ee869d5e709
model: Ivytown Ivy Bridge-EP
nr_cpu: 48
memory: 64G
nr_ssd_partitions: 1
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BB480G6_BTWA5444064C480FGN-part1"
swap_partitions:
rootfs_partition:
category: benchmark
nr_threads: 50%
perf-profile:
  freq: 800
hackbench:
  mode: threads
  ipc: socket
queue: bisect
testbox: ivb42
tbox_group: ivb42
kconfig: x86_64-rhel
enqueue_time: 2016-05-15 08:54:56.489267568 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 7da972d71df97b29fc3f810c0433c3b0b6c70992
user: lkp
head_commit: 65643e3abe71e970bef656ea0b125dace7c7a1b3
base_commit: 610603a520bdeb35bd838835f36cfd6b4a563995
branch: linus/master
result_root: "/result/hackbench/performance-50%-threads-socket/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/0"
job_file: "/lkp/scheduled/ivb42/bisect_hackbench-performance-50%-threads-socket-debian-x86_64-2015-02-07.cgz-x86_64-rhel-53d3bc773eaa7ab1cf63585e76af7ee869d5e709-20160515-61153-jw8y6w-0.yaml"
max_uptime: 2400
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/ivb42/bisect_hackbench-performance-50%-threads-socket-debian-x86_64-2015-02-07.cgz-x86_64-rhel-53d3bc773eaa7ab1cf63585e76af7ee869d5e709-20160515-61153-jw8y6w-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linus/master
- commit=53d3bc773eaa7ab1cf63585e76af7ee869d5e709
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/vmlinuz-4.6.0-rc7-00056-g53d3bc7
- max_uptime=2400
- RESULT_ROOT=/result/hackbench/performance-50%-threads-socket/ivb42/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/0
- LKP_SERVER=inn
- |2-
  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic
  load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0
  vga=normal rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/perf-profile-x86_64.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/53d3bc773eaa7ab1cf63585e76af7ee869d5e709/vmlinuz-4.6.0-rc7-00056-g53d3bc7"
dequeue_time: 2016-05-15 08:56:16.895546608 +08:00
job_state: finished
loadavg: 342.96 389.96 225.01 1/540 28966
start_time: '1463273823'
end_time: '1463274426'
version: "/lkp/lkp/.src-20160513-232343"

--=-=-=
Content-Type: text/plain; charset=ascii
Content-Disposition: attachment; filename=reproduce

2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-05-15 08:57:03 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:57:50 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:58:33 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:15 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:58 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:00:43 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:57 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:02:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:03:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:10 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:53 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:05:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:06:24 /usr/bin/hackbench -g 24 --threads -l 60000

--=-=-=--
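For anyone re-running this by hand, the per-CPU governor writes in the reproduce attachment can be collapsed into one loop; a minimal sketch, assuming the standard sysfs cpufreq layout and root privileges on the target machine:

```shell
#!/bin/sh
# Set the performance governor on every online CPU, equivalent to the
# 48 individual echo commands in the reproduce script.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    # skip silently if cpufreq is absent or we lack write permission
    [ -w "$g" ] && echo performance > "$g" || true
done

# Then run the benchmark as logged (repeated 14 times in the original run):
# /usr/bin/hackbench -g 24 --threads -l 60000
```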