public inbox for linux-kernel@vger.kernel.org
From: kernel test robot <ying.huang@intel.com>
To: Eric Dumazet <edumazet@google.com>
Cc: lkp@01.org
Cc: LKML <linux-kernel@vger.kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Subject: [lkp] [fs/file.c] 8a81252b77: 14.2% will-it-scale.per_thread_ops
Date: Thu, 10 Sep 2015 10:30:30 +0800	[thread overview]
Message-ID: <87si6natvd.fsf@yhuang-dev.intel.com> (raw)

[-- Attachment #1: Type: text/plain, Size: 4707 bytes --]

FYI, we noticed the following changes (a +14.2% improvement of will-it-scale.per_thread_ops) on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 8a81252b774b53e628a8a0fe18e2b8fc236d92cc ("fs/file.c: don't acquire files->file_lock in fd_install()")
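
For context, the commit removes the files->file_lock acquisition from the
fd_install() fast path, publishing the file pointer under RCU and only
waiting when a descriptor-table resize is in flight. A condensed paraphrase
of the post-commit fd_install() (simplified from fs/file.c; not a drop-in
implementation):

```c
/* Paraphrased from fs/file.c after commit 8a81252b77 (simplified).
 * fd_install() publishes the file pointer without taking
 * files->file_lock; it only sleeps if a table resize is in flight. */
void fd_install(unsigned int fd, struct file *file)
{
	struct files_struct *files = current->files;
	struct fdtable *fdt;

	might_sleep();
	rcu_read_lock_sched();

	/* If expand_fdtable() is running, wait for it to finish
	 * instead of serializing every install on the spinlock. */
	while (unlikely(files->resize_in_progress)) {
		rcu_read_unlock_sched();
		wait_event(files->resize_wait, !files->resize_in_progress);
		rcu_read_lock_sched();
	}
	/* pairs with smp_wmb() in expand_fdtable() */
	smp_rmb();
	fdt = rcu_dereference_sched(files->fdt);
	BUG_ON(fdt->fd[fd] != NULL);
	rcu_assign_pointer(fdt->fd[fd], file);
	rcu_read_unlock_sched();
}
```

Taking the spinlock out of every open() is what the per_thread_ops numbers
below reflect, since each benchmark iteration goes through fd_install().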


=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  xps/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/open1

commit: 
  1af95de6f0119d5bde02d3a811a9f3a3661e954e
  8a81252b774b53e628a8a0fe18e2b8fc236d92cc

1af95de6f0119d5b 8a81252b774b53e628a8a0fe18 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    581483 ±  2%     +14.2%     663787 ±  2%  will-it-scale.per_thread_ops
    689.96 ±  0%      -2.7%     671.31 ±  0%  will-it-scale.time.system_time
     17.86 ±  1%     +59.8%      28.55 ±  0%  will-it-scale.time.user_time
      3521 ±  6%     -11.2%       3125 ±  3%  slabinfo.kmalloc-192.active_objs
     17.86 ±  1%     +59.8%      28.55 ±  0%  time.user_time
      8.75 ± 16%     -51.4%       4.25 ± 50%  sched_debug.cfs_rq[1]:/.nr_spread_over
      5.50 ± 20%     +95.5%      10.75 ± 35%  sched_debug.cfs_rq[3]:/.nr_spread_over
    473.25 ± 23%     +45.7%     689.50 ± 25%  sched_debug.cfs_rq[7]:/.utilization_load_avg
    811992 ± 10%     +11.6%     906272 ±  3%  sched_debug.cpu#0.avg_idle
     80.00 ± 17%     +61.6%     129.25 ± 33%  sched_debug.cpu#7.cpu_load[0]
      1372 ± 19%     +50.6%       2066 ± 13%  sched_debug.cpu#7.curr->pid
     20835 ± 26%     +40.7%      29308 ± 18%  sched_debug.cpu#7.ttwu_count
      2.15 ±  2%     +34.2%       2.88 ±  2%  perf-profile.cpu-cycles.__alloc_fd.get_unused_fd_flags.do_sys_open.sys_open.system_call_fastpath
      1.40 ±  3%      -8.7%       1.28 ±  3%  perf-profile.cpu-cycles.__slab_alloc.kmem_cache_alloc.get_empty_filp.path_openat.do_filp_open
      0.96 ±  4%      -8.4%       0.88 ±  2%  perf-profile.cpu-cycles.dput.__fput.____fput.task_work_run.do_notify_resume
      2.55 ±  4%     +42.7%       3.63 ±  2%  perf-profile.cpu-cycles.get_unused_fd_flags.do_sys_open.sys_open.system_call_fastpath
      3.67 ±  4%      -9.1%       3.34 ±  3%  perf-profile.cpu-cycles.getname.do_sys_open.sys_open.system_call_fastpath
      1.02 ±  7%     +16.4%       1.19 ±  5%  perf-profile.cpu-cycles.kmem_cache_free.putname.do_sys_open.sys_open.system_call_fastpath
      1.45 ±  6%     +22.8%       1.78 ±  5%  perf-profile.cpu-cycles.path_init.path_openat.do_filp_open.do_sys_open.sys_open
      1.19 ±  6%     +25.4%       1.49 ±  4%  perf-profile.cpu-cycles.putname.do_sys_open.sys_open.system_call_fastpath
      1.71 ±  7%     -14.0%       1.47 ±  4%  perf-profile.cpu-cycles.security_file_free.__fput.____fput.task_work_run.do_notify_resume


xps: Nehalem
Memory: 4G




                          will-it-scale.time.user_time

  35 ++---------------------------------------------------------------------+
     O                     O O  O O  O O                                    |
  30 ++O    O    O  O O  O                O O  O O  O                       |
     |                                                O O  O O  O           |
  25 ++                                                                     |
     |             .*.*..*.*.*..*.*..*.*..*.*..*                            |
  20 *+*..*.*..*.*.                             +                           |
     |                                           *..*.*.*..*.*..*.*..*.*..*.*
  15 ++                                                                     |
     |                                                                      |
  10 ++                                                                     |
     |                                                                      |
   5 ++                                                                     |
     |                                                                      |
   0 ++---O----O------------------------------------------------------------+


	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

[-- Attachment #2: job.yaml --]
[-- Type: text/plain, Size: 3265 bytes --]

---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: will-it-scale
default-monitors:
  wait: activate-monitor
  kmsg: 
  uptime: 
  iostat: 
  vmstat: 
  numa-numastat: 
  numa-vmstat: 
  numa-meminfo: 
  proc-vmstat: 
  proc-stat:
    interval: 10
  meminfo: 
  slabinfo: 
  interrupts: 
  lock_stat: 
  latency_stats: 
  softirqs: 
  bdi_dev_mapping: 
  diskstats: 
  nfsstat: 
  cpuidle: 
  cpufreq-stats: 
  turbostat: 
  pmeter: 
  sched_debug:
    interval: 60
cpufreq_governor: performance
default-watchdogs:
  oom-killer: 
  watchdog: 
commit: 64291f7db5bd8150a74ad2036f1037e6a0428df2
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: 
swap_partitions: "/dev/disk/by-id/ata-HDT722516DLA380_VDK91GTE0WMZBR-part2"
rootfs_partition: "/dev/disk/by-id/ata-HDT722516DLA380_VDK91GTE0WMZBR-part1"
netconsole_port: 6666
pxe_user: rli9
category: benchmark
perf-profile:
  freq: 800
will-it-scale:
  test: open1
queue: cyclic
testbox: xps
tbox_group: xps
kconfig: x86_64-rhel
enqueue_time: 2015-08-31 17:42:37.615577272 +08:00
id: ecb4938deaa81f720b37e838939cced483b783c3
user: rli9
compiler: gcc-4.9
head_commit: 2d11c675e2c328a1763d4fbad7b6684879f8102a
base_commit: 64291f7db5bd8150a74ad2036f1037e6a0428df2
branch: linux-devel/devel-hourly-2015083105
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/vmlinuz-4.2.0"
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/will-it-scale/performance-open1/xps/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/0"
job_file: "/lkp/scheduled/xps/cyclic_will-it-scale-performance-open1-x86_64-rhel-CYCLIC_BASE-64291f7db5bd8150a74ad2036f1037e6a0428df2-20150831-29509-yewucc-0.yaml"
dequeue_time: 2015-08-31 19:26:04.544308510 +08:00
max_uptime: 1500
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=rli9
- job=/lkp/scheduled/xps/cyclic_will-it-scale-performance-open1-x86_64-rhel-CYCLIC_BASE-64291f7db5bd8150a74ad2036f1037e6a0428df2-20150831-29509-yewucc-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2015083105
- commit=64291f7db5bd8150a74ad2036f1037e6a0428df2
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/vmlinuz-4.2.0
- max_uptime=1500
- RESULT_ROOT=/result/will-it-scale/performance-open1/xps/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/0
- LKP_SERVER=inn
- |2-


  earlyprintk=ttyS0,115200 systemd.log_level=err
  debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
  panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
  console=ttyS0,115200 console=tty0 vga=normal

  rw
lkp_initrd: "/lkp/rli9/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/64291f7db5bd8150a74ad2036f1037e6a0428df2/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/lkp/benchmarks/will-it-scale.cgz"
job_state: finished
loadavg: 6.89 3.58 1.44 1/148 4720
start_time: '1441020401'
end_time: '1441020705'
version: "/lkp/rli9/.src-20150831-174110"

[-- Attachment #3: reproduce --]
[-- Type: text/plain, Size: 619 bytes --]

echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
./runtest.py open1 32 both 1 4 6 8
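
For reference, the eight governor writes above can be collapsed into a
loop (assumes CPUs cpu0..cpu7 and a cpufreq driver exposing
scaling_governor; requires root):

```sh
# Equivalent to the eight echo lines above.
for g in /sys/devices/system/cpu/cpu[0-7]/cpufreq/scaling_governor; do
    echo performance > "$g"
done
```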
