From: Fengguang Wu <fengguang.wu@intel.com>
To: Dave Hansen <dave.hansen@intel.com>
Cc: LKML <linux-kernel@vger.kernel.org>, lkp@linux.intel.com
Subject: Re: [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs
Date: Fri, 17 Jan 2014 21:00:52 +0800
Message-ID: <20140117130051.GA2072@localhost>
In-Reply-To: <52D82F13.9070309@intel.com>
Hi Dave,

I retested the will-it-scale/read2 case with perf profile enabled;
here are the new comparison results. They show increased overhead in
shmem_getpage_gfp(). If you'd like me to collect more data, just let
me know.

9a0bb2966efbf30 0f6934bf1695682e7ced973f6
--------------- -------------------------
26460 ~95% +136.3% 62514 ~ 1% numa-vmstat.node2.numa_other
62927 ~ 0% -85.9% 8885 ~ 2% numa-vmstat.node1.numa_other
8363465 ~ 4% +81.9% 15210930 ~ 2% interrupts.RES
3.96 ~ 6% +42.8% 5.66 ~ 4% perf-profile.cpu-cycles.find_lock_page.shmem_getpage_gfp.shmem_file_aio_read.do_sync_read.vfs_read
209881 ~11% +35.2% 283704 ~ 9% numa-vmstat.node1.numa_local
1795727 ~ 7% +52.1% 2730750 ~17% interrupts.LOC
7 ~ 0% -33.3% 4 ~10% vmstat.procs.b
18461 ~12% -21.1% 14569 ~ 2% numa-meminfo.node1.SUnreclaim
4614 ~12% -21.1% 3641 ~ 2% numa-vmstat.node1.nr_slab_unreclaimable
491 ~ 2% -25.9% 363 ~ 6% proc-vmstat.nr_tlb_remote_flush
14595 ~ 8% -17.1% 12093 ~16% numa-meminfo.node2.AnonPages
3648 ~ 8% -17.1% 3025 ~16% numa-vmstat.node2.nr_anon_pages
277 ~12% -14.4% 237 ~ 8% numa-vmstat.node2.nr_page_table_pages
202594 ~ 8% -20.5% 161033 ~12% softirqs.SCHED
1104 ~11% -14.0% 950 ~ 8% numa-meminfo.node2.PageTables
5201 ~ 7% +21.0% 6292 ~ 3% numa-vmstat.node0.nr_slab_unreclaimable
20807 ~ 7% +21.0% 25171 ~ 3% numa-meminfo.node0.SUnreclaim
975 ~ 8% +16.7% 1138 ~ 5% numa-meminfo.node1.PageTables
245 ~ 7% +16.5% 285 ~ 5% numa-vmstat.node1.nr_page_table_pages
109964 ~ 4% -16.7% 91589 ~ 1% numa-numastat.node0.local_node
20433 ~ 4% -16.3% 17104 ~ 2% proc-vmstat.pgalloc_dma32
112051 ~ 4% -16.4% 93676 ~ 1% numa-numastat.node0.numa_hit
273320 ~ 8% -14.4% 234064 ~ 3% numa-vmstat.node2.numa_local
31480 ~ 4% +13.9% 35852 ~ 5% numa-meminfo.node0.Slab
917358 ~ 2% +12.5% 1031687 ~ 2% softirqs.TIMER
513 ~ 0% +37.7% 706 ~33% numa-meminfo.node2.Mlocked
8404395 ~13% +256.9% 29992039 ~ 9% time.voluntary_context_switches
157154 ~17% +201.7% 474102 ~ 8% vmstat.system.cs
36948 ~ 3% +67.7% 61963 ~ 2% vmstat.system.in
2274 ~ 0% +13.7% 2584 ~ 1% time.system_time
769 ~ 0% +13.5% 873 ~ 1% time.percent_of_cpu_this_job_got
4359 ~ 2% +13.6% 4951 ~ 3% time.involuntary_context_switches
104 ~ 3% +10.2% 115 ~ 2% time.user_time
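
For reference, below is a minimal user-space sketch (not the actual
will-it-scale/read2 source, which isn't reproduced here) of the kind of
read loop that would exercise the shmem_file_aio_read() ->
shmem_getpage_gfp() -> find_lock_page() path the profile above attributes
cycles to; the file path and iteration count are illustrative assumptions
only.

/*
 * Minimal sketch, NOT the actual will-it-scale/read2 source: a tight
 * pread() loop over a tmpfs-backed file, which is the path the profile
 * above attributes cycles to (shmem_file_aio_read -> shmem_getpage_gfp
 * -> find_lock_page).  File name and iteration count are illustrative.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BUFLEN 4096UL

int main(void)
{
	char buf[BUFLEN];
	long i;
	/* /dev/shm is tmpfs on a typical setup, so reads go through shmem */
	int fd = open("/dev/shm/read2-sketch", O_CREAT | O_RDWR, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 0, sizeof(buf));
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
		perror("write");
		return 1;
	}

	/* the real test runs one such loop per CPU for a fixed interval */
	for (i = 0; i < 10 * 1000 * 1000; i++) {
		if (pread(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
			perror("pread");
			return 1;
		}
	}
	close(fd);
	return 0;
}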
Thanks,
Fengguang

Thread overview: 5+ messages
2014-01-16 3:07 [slub shrink] 0f6934bf16: +191.9% vmstat.system.cs kernel test robot
2014-01-16 19:12 ` Dave Hansen
2014-01-17 0:26 ` Fengguang Wu
2014-01-17 13:00 ` Fengguang Wu [this message]
2014-01-29 8:26 ` Fengguang Wu