From: kernel test robot
Subject: [lkp] [fsnotify] 8f2f3eb59d: -4.0% will-it-scale.per_thread_ops
To: Jan Kara
CC: lkp@01.org
CC: LKML
CC: Linus Torvalds
Date: Sun, 23 Aug 2015 07:33:16 +0800
Message-ID: <87k2smkif7.fsf@yhuang-dev.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="=-=-="
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

--=-=-=
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline

FYI, we noticed the below changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 8f2f3eb59dff4ec538de55f2e0592fec85966aab ("fsnotify: fix oops in fsnotify_clear_marks_by_group_flags()")

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  lkp-sbx04/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/read1

commit:
  447f6a95a9c80da7faaec3e66e656eab8f262640
  8f2f3eb59dff4ec538de55f2e0592fec85966aab

447f6a95a9c80da7 8f2f3eb59dff4ec538de55f2e0
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   1844687 ±  0%      -4.0%    1770899 ±  0%  will-it-scale.per_thread_ops
    283.69 ±  0%      +9.5%     310.64 ±  0%  will-it-scale.time.user_time
      4576 ±  3%      -7.3%       4242 ±  6%  will-it-scale.time.voluntary_context_switches
      7211 ± 10%     +54.0%      11101 ± 18%  cpuidle.C1E-SNB.usage
     10636 ± 36%     +69.3%      18003 ± 36%  numa-meminfo.node1.Shmem
      1.07 ±  4%     -13.1%       0.93 ±  9%  perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
      4576 ±  3%      -7.3%       4242 ±  6%  time.voluntary_context_switches
    526.75 ±104%     -94.2%      30.50 ± 98%  numa-numastat.node1.other_node
      1540 ± 35%     -74.2%     398.00 ± 90%  numa-numastat.node2.other_node
     32344 ±  5%      +7.4%      34722 ±  4%  numa-vmstat.node0.numa_other
      2658 ± 36%     +69.3%       4500 ± 36%  numa-vmstat.node1.nr_shmem
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    935792 ±136%   +4247.3%   40682138 ±141%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     12893 ±  2%      -9.1%      11716 ±  1%  slabinfo.kmalloc-192.active_objs
      1653 ±  9%     -10.3%       1483 ±  5%  slabinfo.mnt_cache.active_objs
      1653 ±  9%     -10.3%       1483 ±  5%  slabinfo.mnt_cache.num_objs
      1.75 ± 47%     -81.0%       0.33 ±141%  sched_debug.cfs_rq[10]:/.nr_spread_over
   -343206 ±-27%     -73.2%     -91995 ±-170%  sched_debug.cfs_rq[14]:/.spread0
    533.25 ± 82%     -81.5%      98.75 ± 42%  sched_debug.cfs_rq[18]:/.blocked_load_avg
    541.75 ± 82%     -81.3%     101.25 ± 41%  sched_debug.cfs_rq[18]:/.tg_load_contrib
  -1217705 ± -5%     -30.2%    -850080 ±-15%  sched_debug.cfs_rq[26]:/.spread0
     89722 ±  9%      +9.8%      98495 ± 10%  sched_debug.cfs_rq[32]:/.exec_clock
    101180 ±132%    +180.8%     284154 ± 30%  sched_debug.cfs_rq[35]:/.spread0
     37332 ±473%    +725.2%     308082 ± 59%  sched_debug.cfs_rq[38]:/.spread0
     32054 ±502%    +981.6%     346689 ± 39%  sched_debug.cfs_rq[39]:/.spread0
      1.00 ±100%    +100.0%       2.00 ± 50%  sched_debug.cfs_rq[42]:/.nr_spread_over
   -125980 ±-218%    -307.1%     260875 ± 46%  sched_debug.cfs_rq[42]:/.spread0
   -111501 ±-102%    -288.7%     210354 ± 94%  sched_debug.cfs_rq[45]:/.spread0
   -173363 ±-34%    -221.0%     209775 ± 94%  sched_debug.cfs_rq[47]:/.spread0
   -302090 ±-43%    -121.8%      65953 ±322%  sched_debug.cfs_rq[4]:/.spread0
   -490175 ±-18%     -41.1%    -288722 ±-31%  sched_debug.cfs_rq[50]:/.spread0
   -594948 ±-10%     -59.7%    -239840 ±-33%  sched_debug.cfs_rq[51]:/.spread0
      1.00 ±100%   +6050.0%      61.50 ±141%  sched_debug.cfs_rq[53]:/.blocked_load_avg
     10.50 ±  8%    +614.3%      75.00 ±122%  sched_debug.cfs_rq[53]:/.tg_load_contrib
   -596043 ±-10%     -49.0%    -304277 ±-36%  sched_debug.cfs_rq[54]:/.spread0
     10.00 ±  0%   +2062.5%     216.25 ± 40%  sched_debug.cfs_rq[56]:/.tg_load_contrib
     17.75 ±173%   +1302.8%     249.00 ± 26%  sched_debug.cfs_rq[60]:/.blocked_load_avg
   -809633 ± -9%     -36.2%    -516886 ±-23%  sched_debug.cfs_rq[60]:/.spread0
     28.00 ±109%    +828.6%     260.00 ± 25%  sched_debug.cfs_rq[60]:/.tg_load_contrib
    277.75 ± 95%     -86.3%      38.00 ±171%  sched_debug.cfs_rq[7]:/.blocked_load_avg
    293.25 ± 90%     -81.8%      53.50 ±121%  sched_debug.cfs_rq[7]:/.tg_load_contrib
     17.50 ±  2%     -28.6%      12.50 ± 34%  sched_debug.cpu#0.cpu_load[2]
     17.00 ±  4%     -25.0%      12.75 ± 35%  sched_debug.cpu#0.cpu_load[3]
      2907 ± 12%    +195.9%       8603 ± 63%  sched_debug.cpu#0.sched_goidle
     16.50 ±  3%      -9.1%      15.00 ±  0%  sched_debug.cpu#1.cpu_load[2]
     16.50 ±  3%      -7.6%      15.25 ±  2%  sched_debug.cpu#1.cpu_load[3]
      5595 ± 26%     -36.4%       3557 ± 11%  sched_debug.cpu#11.nr_switches
      6885 ± 92%     -76.2%       1639 ± 40%  sched_debug.cpu#11.ttwu_count
      1350 ± 34%     -55.0%     608.00 ± 14%  sched_debug.cpu#11.ttwu_local
     17892 ± 74%     -78.3%       3877 ± 18%  sched_debug.cpu#12.nr_switches
      1288 ± 27%     -49.8%     647.50 ± 37%  sched_debug.cpu#12.ttwu_local
      1405 ± 22%     -52.7%     664.50 ± 23%  sched_debug.cpu#13.ttwu_local
      1.25 ±182%    -440.0%      -4.25 ±-50%  sched_debug.cpu#17.nr_uninterruptible
      1976 ±  5%     -10.0%       1779 ±  0%  sched_debug.cpu#18.curr->pid
    983.75 ±  8%    +101.6%       1983 ± 32%  sched_debug.cpu#18.ttwu_local
     -0.25 ±-911%   +2300.0%      -6.00 ±-28%  sched_debug.cpu#21.nr_uninterruptible
      2979 ± 49%    +159.6%       7734 ± 75%  sched_debug.cpu#22.ttwu_count
      1111 ± 21%    +127.6%       2528 ± 32%  sched_debug.cpu#22.ttwu_local
      1.00 ±141%    -275.0%      -1.75 ±-84%  sched_debug.cpu#25.nr_uninterruptible
     14419 ± 54%     -58.2%       6022 ± 84%  sched_debug.cpu#25.ttwu_count
     14395 ± 70%    +252.4%      50729 ± 39%  sched_debug.cpu#28.nr_switches
     -4.75 ±-17%    -115.8%       0.75 ±218%  sched_debug.cpu#30.nr_uninterruptible
      2335 ±115%     -76.6%     547.25 ± 18%  sched_debug.cpu#34.ttwu_count
      1258 ± 25%     -43.3%     713.75 ± 11%  sched_debug.cpu#35.nr_switches
      1409 ± 23%     -39.6%     851.75 ±  9%  sched_debug.cpu#35.sched_count
    969.50 ± 69%     -68.8%     302.00 ± 38%  sched_debug.cpu#35.ttwu_count
    382.00 ± 37%     -66.0%     130.00 ± 14%  sched_debug.cpu#35.ttwu_local
    808.75 ± 18%     +28.3%       1037 ± 15%  sched_debug.cpu#38.nr_switches
    948.50 ± 16%     +23.2%       1168 ± 13%  sched_debug.cpu#38.sched_count
     70695 ±  2%      +6.2%      75047 ±  4%  sched_debug.cpu#41.nr_load_updates
      1269 ± 13%     +55.3%       1970 ± 25%  sched_debug.cpu#46.nr_switches
      3.25 ± 93%     -76.9%       0.75 ±197%  sched_debug.cpu#46.nr_uninterruptible
      1375 ± 12%     +51.1%       2078 ± 23%  sched_debug.cpu#46.sched_count
      3958 ± 97%    +462.9%      22281 ± 25%  sched_debug.cpu#50.ttwu_count
    457.25 ± 26%     +64.3%     751.25 ± 28%  sched_debug.cpu#53.ttwu_local
    753041 ±  3%     -11.1%     669815 ±  5%  sched_debug.cpu#58.avg_idle
     -1.75 ±-142%    -257.1%       2.75 ± 64%  sched_debug.cpu#59.nr_uninterruptible
      2581 ± 27%   +1426.4%      39408 ± 57%  sched_debug.cpu#60.nr_switches
      2632 ± 27%   +1400.2%      39495 ± 57%  sched_debug.cpu#60.sched_count
     34156 ± 94%     -94.8%       1776 ± 15%  sched_debug.cpu#61.nr_switches
     34250 ± 94%     -94.7%       1825 ± 15%  sched_debug.cpu#61.sched_count
     16821 ± 96%     -95.4%     768.50 ± 11%  sched_debug.cpu#61.sched_goidle
      8128 ±146%     -91.7%     676.00 ± 10%  sched_debug.cpu#61.ttwu_count

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  ivb42/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/readseek1

commit:
  447f6a95a9c80da7faaec3e66e656eab8f262640
  8f2f3eb59dff4ec538de55f2e0592fec85966aab

447f6a95a9c80da7 8f2f3eb59dff4ec538de55f2e0
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
   1915464 ±  0%      -2.4%    1869344 ±  0%  will-it-scale.per_thread_ops
    473.17 ±  0%      +6.9%     505.66 ±  0%  will-it-scale.time.user_time
      0.20 ±  5%     -49.4%       0.10 ± 35%  turbostat.Pkg%pc6
      3.38 ±  0%     +34.0%       4.53 ±  1%  perf-profile.cpu-cycles.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read
      7.42 ±  0%     +16.3%       8.62 ±  1%  perf-profile.cpu-cycles.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read
      0.57 ±  6%     +72.2%       0.99 ±  6%  perf-profile.cpu-cycles.radix_tree_lookup_slot.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter
     10.58 ±  0%     +11.4%      11.79 ±  1%  perf-profile.cpu-cycles.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read.sys_read
     30.50 ±150%   +1140.2%     378.25 ± 49%  sched_debug.cfs_rq[22]:/.blocked_load_avg
     44.75 ±103%    +788.3%     397.50 ± 46%  sched_debug.cfs_rq[22]:/.tg_load_contrib
     89.50 ±159%    +300.3%     358.25 ± 75%  sched_debug.cfs_rq[2]:/.blocked_load_avg
    115.75 ±123%    +231.7%     384.00 ± 70%  sched_debug.cfs_rq[2]:/.tg_load_contrib
      0.50 ±100%    +750.0%       4.25 ± 67%  sched_debug.cfs_rq[32]:/.nr_spread_over
    499.50 ± 44%     -98.2%       9.00 ±101%  sched_debug.cfs_rq[40]:/.blocked_load_avg
    505.50 ± 44%     -95.2%      24.50 ± 73%  sched_debug.cfs_rq[40]:/.tg_load_contrib
    421.00 ± 56%     -85.7%      60.25 ±109%  sched_debug.cfs_rq[42]:/.blocked_load_avg
    428.75 ± 56%     -80.4%      84.00 ± 86%  sched_debug.cfs_rq[42]:/.tg_load_contrib
      8053 ±  2%     +13.4%       9132 ±  5%  sched_debug.cfs_rq[47]:/.avg->runnable_avg_sum
    175.25 ±  2%     +12.7%     197.50 ±  5%  sched_debug.cfs_rq[47]:/.tg_runnable_contrib
      0.25 ±173%   +1500.0%       4.00 ± 77%  sched_debug.cfs_rq[8]:/.nr_spread_over
     90.75 ± 13%     -23.1%      69.75 ± 15%  sched_debug.cpu#0.cpu_load[2]
     97.00 ± 15%     -28.4%      69.50 ± 16%  sched_debug.cpu#0.cpu_load[3]
     99.50 ± 14%     -27.6%      72.00 ± 18%  sched_debug.cpu#0.cpu_load[4]
    -10.25 ±-14%     -73.2%      -2.75 ±-180%  sched_debug.cpu#1.nr_uninterruptible
      8173 ±106%     -78.9%       1722 ± 35%  sched_debug.cpu#10.nr_switches
      3896 ±112%     -81.3%     727.50 ± 36%  sched_debug.cpu#10.sched_goidle
    515.00 ± 40%     -47.2%     271.75 ± 49%  sched_debug.cpu#10.ttwu_local
      2.00 ± 81%    -325.0%      -4.50 ±-77%  sched_debug.cpu#11.nr_uninterruptible
      3818 ± 39%     -58.2%       1598 ± 68%  sched_debug.cpu#15.ttwu_local
      0.50 ±331%    -650.0%      -2.75 ±-74%  sched_debug.cpu#16.nr_uninterruptible
     12671 ± 30%     -58.4%       5270 ± 46%  sched_debug.cpu#20.ttwu_count
      2285 ± 70%     -57.0%     983.50 ± 25%  sched_debug.cpu#20.ttwu_local
      2722 ± 79%     -72.9%     738.75 ± 51%  sched_debug.cpu#21.ttwu_local
     -2.50 ±-72%    -200.0%       2.50 ± 82%  sched_debug.cpu#23.nr_uninterruptible
      1183 ± 31%    +188.4%       3413 ± 22%  sched_debug.cpu#24.nr_switches
      1384 ± 45%    +148.4%       3438 ± 22%  sched_debug.cpu#24.sched_count
    318.50 ± 54%    +347.5%       1425 ± 21%  sched_debug.cpu#24.ttwu_local
      5255 ± 46%     -60.2%       2090 ± 54%  sched_debug.cpu#25.nr_switches
      5276 ± 46%     -59.9%       2114 ± 54%  sched_debug.cpu#25.sched_count
      1893 ± 42%     -66.9%     627.00 ± 75%  sched_debug.cpu#25.ttwu_local
      1.25 ±142%    +240.0%       4.25 ± 45%  sched_debug.cpu#27.nr_uninterruptible
      0.75 ±272%    -322.2%      -1.67 ±-28%  sched_debug.cpu#31.nr_uninterruptible
      1977 ±140%     -86.5%     267.25 ± 10%  sched_debug.cpu#32.sched_goidle
      7.67 ± 78%    -122.8%      -1.75 ±-84%  sched_debug.cpu#34.nr_uninterruptible
      3642 ± 37%    +205.0%      11108 ± 53%  sched_debug.cpu#39.nr_switches
      1250 ± 51%    +292.0%       4902 ± 52%  sched_debug.cpu#39.sched_goidle
      3.00 ±  0%    +216.7%       9.50 ± 30%  sched_debug.cpu#45.cpu_load[0]
      3.50 ± 24%    +121.4%       7.75 ± 10%  sched_debug.cpu#45.cpu_load[1]
      3.25 ± 13%    +123.1%       7.25 ± 11%  sched_debug.cpu#45.cpu_load[2]
      3.25 ± 13%     +92.3%       6.25 ± 23%  sched_debug.cpu#45.cpu_load[3]
      3.00 ±  0%     +91.7%       5.75 ± 22%  sched_debug.cpu#45.cpu_load[4]
      1593 ± 19%     +63.6%       2605 ± 30%  sched_debug.cpu#47.curr->pid
    365.75 ± 39%    +254.6%       1297 ± 98%  sched_debug.cpu#6.ttwu_local
      8717 ± 80%     -78.7%       1856 ± 45%  sched_debug.cpu#8.nr_switches
      3992 ± 85%     -80.5%     778.50 ± 51%  sched_debug.cpu#8.sched_goidle
      6221 ±128%     -83.9%     998.75 ± 44%  sched_debug.cpu#8.ttwu_count
    722.00 ± 71%     -69.5%     220.25 ±  5%  sched_debug.cpu#8.ttwu_local
      0.25 ±173%    +321.4%       1.05 ±  5%  sched_debug.rt_rq[12]:/.rt_time
      0.04 ±173%    +311.0%       0.17 ±  8%  sched_debug.rt_rq[13]:/.rt_time

lkp-sbx04: Sandy Bridge-EX
Memory: 64G

ivb42: Ivytown Ivy Bridge-EP
Memory: 64G

will-it-scale.time.user_time

  [ASCII trend chart omitted: y-axis 275-325 (seconds of user time);
   bisect-bad (O) samples cluster around 305-320, bisect-good (*)
   samples around 280-290]
	[*] bisect-good sample  [O] bisect-bad sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Ying Huang

--=-=-=
Content-Type: text/plain; charset=ascii
Content-Disposition: attachment; filename=job.yaml

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg:
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
    interval: 10
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer:
  watchdog:
commit: dd2384a75d1c046faf068a6352732a204814b86d
model: Sandy Bridge-EX
nr_cpu: 64
memory: 64G
nr_ssd_partitions: 4
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV20430*-part1"
swap_partitions:
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: read1
queue: cyclic
testbox: lkp-sbx04
tbox_group: lkp-sbx04
kconfig: x86_64-rhel
enqueue_time: 2015-08-08 06:51:04.467682345 +08:00
id: 7543484f1eea88c654299222e83a89fb3f8fbd44
user: lkp
compiler: gcc-4.9
head_commit: dd2384a75d1c046faf068a6352732a204814b86d
base_commit: 733db573a6451681b60e7372d2862de09d6eb04e
branch: linus/master
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/dd2384a75d1c046faf068a6352732a204814b86d/vmlinuz-4.2.0-rc5-00156-gdd2384a"
rootfs: debian-x86_64-2015-02-07.cgz
result_root:
"/result/will-it-scale/performance-read1/lkp-sbx04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/dd2384a75d1c046faf068a6352732a204814b86d/0" job_file: "/lkp/scheduled/lkp-sbx04/cyclic_will-it-scale-performance-read1-x86_64-rhel-CYCLIC_HEAD-dd2384a75d1c046faf068a6352732a204814b86d-20150808-9771-1mploli-0.yaml" dequeue_time: 2015-08-08 18:18:22.936002643 +08:00 max_uptime: 1500 initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz" bootloader_append: - root=/dev/ram0 - user=lkp - job=/lkp/scheduled/lkp-sbx04/cyclic_will-it-scale-performance-read1-x86_64-rhel-CYCLIC_HEAD-dd2384a75d1c046faf068a6352732a204814b86d-20150808-9771-1mploli-0.yaml - ARCH=x86_64 - kconfig=x86_64-rhel - branch=linus/master - commit=dd2384a75d1c046faf068a6352732a204814b86d - BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/dd2384a75d1c046faf068a6352732a204814b86d/vmlinuz-4.2.0-rc5-00156-gdd2384a - max_uptime=1500 - RESULT_ROOT=/result/will-it-scale/performance-read1/lkp-sbx04/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/dd2384a75d1c046faf068a6352732a204814b86d/0 - LKP_SERVER=inn - |2- earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz" modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/dd2384a75d1c046faf068a6352732a204814b86d/modules.cgz" bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/will-it-scale.cgz" job_state: finished loadavg: 47.44 21.45 8.37 1/628 11393 start_time: '1439029153' end_time: '1439029462' version: "/lkp/lkp/.src-20150807-183152" --=-=-= Content-Type: text/plain; charset=ascii Content-Disposition: attachment; filename=reproduce echo performance > 
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu48/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu49/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu50/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu51/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu52/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu53/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu54/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu55/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu56/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu57/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu58/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu59/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu60/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu61/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu62/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu63/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor

./runtest.py read1 16 both 1 8 16 24 32 48 64

--=-=-=--
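[Editorial note: the attached reproduce script writes the "performance" governor to each of the 64 CPUs one command at a time. An equivalent loop form is sketched below; it is not part of the original report, and it assumes the standard Linux cpufreq sysfs layout on a 64-CPU box. The sketch only prints the commands (pipe its output to `sh` as root to actually apply them).]

```shell
#!/bin/sh
# Emit the same 64 governor writes as the attached reproduce script,
# one command per line. Printing rather than writing keeps the sketch
# safe to run on any machine; apply with:  sh thisscript | sudo sh
cpu=0
while [ "$cpu" -lt 64 ]; do
    printf 'echo performance > /sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor\n' "$cpu"
    cpu=$((cpu + 1))
done
```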