From: Ravikiran G Thirumalai <kiran@in.ibm.com>
To: Andrew Morton <akpm@digeo.com>
Cc: Hugh Dickins <hugh@veritas.com>, linux-kernel@vger.kernel.org
Subject: Re: [patch] Inline vm_acct_memory
Date: Thu, 29 May 2003 10:13:12 +0530 [thread overview]
Message-ID: <20030529044312.GG5604@in.ibm.com> (raw)
In-Reply-To: <20030528192330.77d3d9e9.akpm@digeo.com>
On Wed, May 28, 2003 at 07:23:30PM -0700, Andrew Morton wrote:
>...
> Maybe I'm wrong. kernbench is not some silly tight-loop microbenchmark.
> It is a "real" workload: gcc.
>
> The thing about pagetable setup and teardown is that it tends to be called
> in big lumps: for a while the process is establishing thousands of pages
> and then later it is tearing down thousands. So the cache-thrashing impact
> of having those eighteen instances sprinkled around the place is less
> than it would be if they were all being called randomly. If you can believe
> such waffle.
>
> Kiran, do you still have the raw data available? profiles and runtimes?
Yeah, I had overlooked vm_unacct_memory until I wondered why I had put
the inlines in the header file earlier :). But I do have the profiles,
and checking all the callers of vm_unacct_memory, I find no regression
at the significant callsites.

I can provide the detailed profile if required, but here is the summary
for 4 runs of kernbench -- the numbers against each routine are
oprofile ticks for the 4 runs.
Vanilla
-------
vm_enough_memory        786     778     780     735
vm_acct_memory          870     952     884     898
---------------------------------------------------
tot of above           1656    1730    1664    1633

do_mmap_pgoff          3559    3673    3669    3807
expand_stack             27      34      33      42
unmap_region            236     267     280     280
do_brk                  594     596     615     615
exit_mmap                51      52      44      52
mprotect_fixup           47      55      55      57
do_mremap                 0       2       1       1
shmem_notify_change       0       0       0       0
shmem_notify_change       0       0       0       0
shmem_delete_inode        0       0       0       0
shmem_file_write          0       0       0       0
shmem_symlink             0       0       0       0
shmem_file_setup          0       0       0       0
sys_swapoff               0       0       0       0
poll_idle            101553  108687   89281   86251
Inline
------
vm_enough_memory       1539    1488    1488    1472

do_mmap_pgoff          3510    3523    3436    3475
expand_stack             27      33      32      27
unmap_region            295     340     311     349
do_brk                  641     583     659     640
exit_mmap                33      52      44      42
mprotect_fixup           54      65      73      64
do_mremap                 2       0       0       0
shmem_notify_change       0       0       0       0
shmem_notify_change       0       0       0       0
shmem_delete_inode        0       0       0       0
shmem_file_write          0       0       0       0
shmem_symlink             0       0       0       0
shmem_file_setup          0       0       0       0
sys_swapoff               0       0       0       0
poll_idle             98733   85659  104994  103096
Thanks,
Kiran
Thread overview: 7+ messages
2003-05-28 11:05 [patch] Inline vm_acct_memory Ravikiran G Thirumalai
2003-05-28 11:04 ` Andrew Morton
2003-05-28 15:45 ` Hugh Dickins
2003-05-29 2:23 ` Andrew Morton
2003-05-29 4:43 ` Ravikiran G Thirumalai [this message]
2003-05-29 5:59 ` Hugh Dickins
2003-05-29 9:52 ` Ravikiran G Thirumalai