From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: dmitry.adamushko@gmail.com, a.p.zijlstra@chello.nl,
dhaval@linux.vnet.ibm.com, linux-kernel@vger.kernel.org,
efault@gmx.de, skumar@linux.vnet.ibm.com,
Balbir Singh <balbir@in.ibm.com>, Dipankar <dipankar@in.ibm.com>
Subject: [Patch 3/4 v1] sched: change how cpu load is calculated
Date: Mon, 26 Nov 2007 10:35:11 +0530
Message-ID: <20071126050511.GD5304@linux.vnet.ibm.com>
In-Reply-To: <20071126050044.GA5304@linux.vnet.ibm.com>

This patch changes how the cpu load exerted by fair_sched_class tasks
is calculated. The load that fair_sched_class tasks exert on a cpu is
now the sum of the weights of the runnable task groups on that cpu,
rather than the sum of the individual task weights. The weight a group
exerts on a cpu depends on the shares allocated to the group.

This version of the patch (v1 of Patch 3/4) has zero impact for the
!CONFIG_FAIR_GROUP_SCHED case.
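
As a toy illustration (numbers invented for this example, not taken
from the patch): consider two groups whose per-cpu weights, derived
from their shares, are 2048 and 1024, running 5 and 2 nice-0 tasks
(weight 1024 each) respectively. A minimal user-space sketch of the
arithmetic, under those assumed numbers:

	#include <stdio.h>

	#define NICE_0_LOAD 1024UL	/* default nice-0 task weight */

	int main(void)
	{
		/* hypothetical per-cpu group weights, derived from shares */
		unsigned long group_weight[] = { 2048, 1024 };
		unsigned long nr_tasks[]     = { 5, 2 };
		unsigned long old_load = 0, new_load = 0;
		int i;

		for (i = 0; i < 2; i++) {
			old_load += nr_tasks[i] * NICE_0_LOAD; /* task weights */
			new_load += group_weight[i];	/* group weights */
		}
		/* prints: old rq->load = 7168, new rq->load = 3072 */
		printf("old rq->load = %lu, new rq->load = %lu\n",
		       old_load, new_load);
		return 0;
	}
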
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
---
kernel/sched.c | 38 ++++++++++++++++++++++++++++++--------
kernel/sched_fair.c | 31 +++++++++++++++++++++++++++----
kernel/sched_rt.c | 2 ++
3 files changed, 59 insertions(+), 12 deletions(-)
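
For review, here is a stand-alone user-space model of the
enqueue/dequeue accounting introduced below (my sketch; 'entity',
'cfs_rq' and the helpers are simplified stand-ins for the kernel
structures, not actual kernel code). It demonstrates that rq load only
changes when the highest-level entity ('topse') goes from empty to
non-empty or back:

	#include <stdio.h>

	struct cfs_rq {
		unsigned long load;	/* sum of queued entity weights */
	};

	struct entity {
		struct entity *parent;	/* NULL for a top-level entity */
		struct cfs_rq *cfs_rq;	/* runqueue this entity queues on */
		unsigned long weight;	/* task weight or group weight */
		int on_rq;
	};

	static unsigned long rq_load;	/* models rq->load.weight */

	static void enqueue(struct entity *se)
	{
		struct entity *topse = NULL;
		int incload = 1;

		for (; se; se = se->parent) {
			topse = se;
			if (se->on_rq) {	/* group already queued */
				incload = 0;
				break;
			}
			se->cfs_rq->load += se->weight;	/* enqueue_entity() */
			se->on_rq = 1;
		}
		if (incload)	/* first task of the group on this cpu */
			rq_load += topse->weight;
	}

	static void dequeue(struct entity *se)
	{
		struct entity *topse = NULL;
		int decload = 1;

		for (; se; se = se->parent) {
			topse = se;
			se->cfs_rq->load -= se->weight;	/* dequeue_entity() */
			se->on_rq = 0;
			/* don't dequeue the parent if others remain */
			if (se->cfs_rq->load) {
				if (se->parent)
					decload = 0;
				break;
			}
		}
		if (decload)	/* last task of the group on this cpu */
			rq_load -= topse->weight;
	}

	int main(void)
	{
		struct cfs_rq root = { 0 }, groupq = { 0 };
		struct entity g  = { NULL, &root,   2048, 0 };
		struct entity t1 = { &g,   &groupq, 1024, 0 };
		struct entity t2 = { &g,   &groupq, 1024, 0 };

		enqueue(&t1);	/* group becomes non-empty: rq_load += 2048 */
		enqueue(&t2);	/* group already on rq: rq_load unchanged */
		printf("after enqueues: %lu\n", rq_load);	/* 2048 */

		dequeue(&t1);	/* t2 still queued in the group: unchanged */
		dequeue(&t2);	/* group empties: rq_load -= 2048 */
		printf("after dequeues: %lu\n", rq_load);	/* 0 */
		return 0;
	}
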
Index: current/kernel/sched.c
===================================================================
--- current.orig/kernel/sched.c
+++ current/kernel/sched.c
@@ -869,15 +869,25 @@
struct rq_iterator *iterator);
#endif
-#include "sched_stats.h"
-#include "sched_idletask.c"
-#include "sched_fair.c"
-#include "sched_rt.c"
-#ifdef CONFIG_SCHED_DEBUG
-# include "sched_debug.c"
-#endif
+#ifdef CONFIG_FAIR_GROUP_SCHED
-#define sched_class_highest (&rt_sched_class)
+static inline void inc_cpu_load(struct rq *rq, unsigned long load)
+{
+ update_load_add(&rq->load, load);
+}
+
+static inline void dec_cpu_load(struct rq *rq, unsigned long load)
+{
+ update_load_sub(&rq->load, load);
+}
+
+static inline void inc_load(struct rq *rq, const struct task_struct *p) { }
+static inline void dec_load(struct rq *rq, const struct task_struct *p) { }
+
+#else /* CONFIG_FAIR_GROUP_SCHED */
+
+static inline void inc_cpu_load(struct rq *rq, unsigned long load) { }
+static inline void dec_cpu_load(struct rq *rq, unsigned long load) { }
static inline void inc_load(struct rq *rq, const struct task_struct *p)
{
@@ -889,6 +899,18 @@
update_load_sub(&rq->load, p->se.load.weight);
}
+#endif /* CONFIG_FAIR_GROUP_SCHED */
+
+#include "sched_stats.h"
+#include "sched_idletask.c"
+#include "sched_fair.c"
+#include "sched_rt.c"
+#ifdef CONFIG_SCHED_DEBUG
+# include "sched_debug.c"
+#endif
+
+#define sched_class_highest (&rt_sched_class)
+
static void inc_nr_running(struct task_struct *p, struct rq *rq)
{
rq->nr_running++;
Index: current/kernel/sched_fair.c
===================================================================
--- current.orig/kernel/sched_fair.c
+++ current/kernel/sched_fair.c
@@ -755,15 +755,26 @@
static void enqueue_task_fair(struct rq *rq, struct task_struct *p, int wakeup)
{
struct cfs_rq *cfs_rq;
- struct sched_entity *se = &p->se;
+ struct sched_entity *se = &p->se,
+ *topse = NULL; /* Highest schedulable entity */
+ int incload = 1;
for_each_sched_entity(se) {
- if (se->on_rq)
+ topse = se;
+ if (se->on_rq) {
+ incload = 0;
break;
+ }
cfs_rq = cfs_rq_of(se);
enqueue_entity(cfs_rq, se, wakeup);
wakeup = 1;
}
+	/*
+	 * Increment cpu load if we just enqueued the first task of a group
+	 * on 'rq->cpu'. 'topse' represents the group to which task 'p'
+	 * belongs at the highest grouping level.
+	 */
+ if (incload)
+ inc_cpu_load(rq, topse->load.weight);
}
/*
@@ -774,16 +785,28 @@
static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int sleep)
{
struct cfs_rq *cfs_rq;
- struct sched_entity *se = &p->se;
+ struct sched_entity *se = &p->se,
+ *topse = NULL; /* Highest schedulable entity */
+ int decload = 1;
for_each_sched_entity(se) {
+ topse = se;
cfs_rq = cfs_rq_of(se);
dequeue_entity(cfs_rq, se, sleep);
/* Don't dequeue parent if it has other entities besides us */
- if (cfs_rq->load.weight)
+ if (cfs_rq->load.weight) {
+ if (parent_entity(se))
+ decload = 0;
break;
+ }
sleep = 1;
}
+	/*
+	 * Decrement cpu load if we just dequeued the last task of a group
+	 * on 'rq->cpu'. 'topse' represents the group to which task 'p'
+	 * belongs at the highest grouping level.
+	 */
+ if (decload)
+ dec_cpu_load(rq, topse->load.weight);
}
/*
Index: current/kernel/sched_rt.c
===================================================================
--- current.orig/kernel/sched_rt.c
+++ current/kernel/sched_rt.c
@@ -31,6 +31,7 @@
list_add_tail(&p->run_list, array->queue + p->prio);
__set_bit(p->prio, array->bitmap);
+ inc_cpu_load(rq, p->se.load.weight);
}
/*
@@ -45,6 +46,7 @@
list_del(&p->run_list);
if (list_empty(array->queue + p->prio))
__clear_bit(p->prio, array->bitmap);
+ dec_cpu_load(rq, p->se.load.weight);
}
/*
--
Regards,
vatsa