From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>
To: <linux-kernel@vger.kernel.org>
Cc: "'Andrew Morton'" <akpm@osdl.org>
Subject: re-inline sched functions
Date: Thu, 10 Mar 2005 16:24:15 -0800 [thread overview]
Message-ID: <200503110024.j2B0OFg06087@unix-os.sc.intel.com> (raw)
This could account for part of the unknown 2% performance regression
seen in the db transaction processing benchmark.
The four functions in the following patch used to be inline; they
have been un-inlined since 2.6.7.
We measured that re-inlining them on 2.6.9 improves performance for
the db transaction processing benchmark by +0.2% (on real hardware :-)
The cost is a larger kernel image: 928 bytes of text on x86 and
2728 bytes on ia64. But that is money well spent for enterprise
customers, since it improves performance on enterprise workloads.
# size vmlinux.*
   text    data    bss     dec    hex filename
3261844  717184 262020 4241048 40b698 vmlinux.x86.orig
3262772  717488 262020 4242280 40bb68 vmlinux.x86.inline

   text    data    bss     dec    hex filename
5836933  903828 201940 6942701 69efed vmlinux.ia64.orig
5839661  903460 201940 6945061 69f925 vmlinux.ia64.inline
Can we possibly bring them back?
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
--- linux-2.6.11/kernel/sched.c.orig 2005-03-10 15:31:10.000000000 -0800
+++ linux-2.6.11/kernel/sched.c 2005-03-10 15:36:32.000000000 -0800
@@ -164,7 +164,7 @@
#define SCALE_PRIO(x, prio) \
max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)
-static unsigned int task_timeslice(task_t *p)
+static inline unsigned int task_timeslice(task_t *p)
{
if (p->static_prio < NICE_TO_PRIO(0))
return SCALE_PRIO(DEF_TIMESLICE*4, p->static_prio);
@@ -302,7 +302,7 @@ static DEFINE_PER_CPU(struct runqueue, r
* interrupts. Note the ordering: we can safely lookup the task_rq without
* explicitly disabling preemption.
*/
-static runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
+static inline runqueue_t *task_rq_lock(task_t *p, unsigned long *flags)
__acquires(rq->lock)
{
struct runqueue *rq;
@@ -426,7 +426,7 @@ struct file_operations proc_schedstat_op
/*
* rq_lock - lock a given runqueue and disable interrupts.
*/
-static runqueue_t *this_rq_lock(void)
+static inline runqueue_t *this_rq_lock(void)
__acquires(rq->lock)
{
runqueue_t *rq;
@@ -1323,7 +1323,7 @@ void fastcall sched_exit(task_t * p)
* with the lock held can cause deadlocks; see schedule() for
* details.)
*/
-static void finish_task_switch(task_t *prev)
+static inline void finish_task_switch(task_t *prev)
__releases(rq->lock)
{
runqueue_t *rq = this_rq();
Thread overview: 7+ messages
2005-03-11 0:24 Chen, Kenneth W [this message]
2005-03-11 0:30 ` re-inline sched functions Andrew Morton
2005-03-11 13:08 ` Nick Piggin
2005-03-11 9:31 ` Ingo Molnar
2005-03-11 18:39 ` Chen, Kenneth W
-- strict thread matches above, loose matches on Subject: below --
2005-03-24 21:16 Chen, Kenneth W
2005-03-24 22:22 ` Ingo Molnar