public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/9] sched: Shrink include/linux/sched.h
@ 2013-03-05  8:05 Li Zefan
  2013-03-05  8:05 ` [PATCH 1/9] sched: Remove some dummy functions Li Zefan
                   ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:05 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

While working on a cgroup patch that also touched include/linux/sched.h,
I found that some function/macro/structure declarations can be moved to
kernel/sched/sched.h, and some can even be removed entirely, so here's
the patchset.

The result is a reduction of ~200 LOC from include/linux/sched.h.

0001-sched-Remove-some-dummpy-functions.patch
0002-sched-Remove-test_sd_parent.patch
0003-sched-Move-SCHED_LOAD_SHIFT-macros-to-kernel-sched-s.patch
0004-sched-Move-struct-sched_group-to-kernel-sched-sched..patch
0005-sched-Move-wake-flags-to-kernel-sched-sched.h.patch
0006-sched-Move-struct-sched_class-to-kernel-sched-sched..patch
0007-sched-Make-default_scale_freq_power-static.patch
0008-sched-Move-group-scheduling-functions-out-of-include.patch
0009-sched-Remove-double-declaration-of-root_task_group.patch

--
 include/linux/sched.h | 194 +-------------------------------------------------
 kernel/sched/core.c   |  14 ++--
 kernel/sched/fair.c   |   6 +-
 kernel/sched/sched.h  | 159 +++++++++++++++++++++++++++++++++++++++--
 4 files changed, 168 insertions(+), 205 deletions(-)


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/9] sched: Remove some dummy functions
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
@ 2013-03-05  8:05 ` Li Zefan
  2013-03-06 14:32   ` [tip:sched/core] sched: Remove some dummy functions tip-bot for Li Zefan
  2013-03-05  8:05 ` [PATCH 2/9] sched: Remove test_sd_parent() Li Zefan
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:05 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

No one will call those functions if CONFIG_SCHED_DEBUG=n.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d35d2b6..2715fbb 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -127,18 +127,6 @@ extern void proc_sched_show_task(struct task_struct *p, struct seq_file *m);
 extern void proc_sched_set_task(struct task_struct *p);
 extern void
 print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);
-#else
-static inline void
-proc_sched_show_task(struct task_struct *p, struct seq_file *m)
-{
-}
-static inline void proc_sched_set_task(struct task_struct *p)
-{
-}
-static inline void
-print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
-{
-}
 #endif
 
 /*
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/9] sched: Remove test_sd_parent()
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
  2013-03-05  8:05 ` [PATCH 1/9] sched: Remove some dummy functions Li Zefan
@ 2013-03-05  8:05 ` Li Zefan
  2013-03-06 14:34   ` [tip:sched/core] " tip-bot for Li Zefan
  2013-03-05  8:06 ` [PATCH 3/9] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h Li Zefan
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:05 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

It's unused.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2715fbb..e880d7d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -959,15 +959,6 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-/* Test a flag in parent sched domain */
-static inline int test_sd_parent(struct sched_domain *sd, int flag)
-{
-	if (sd->parent && (sd->parent->flags & flag))
-		return 1;
-
-	return 0;
-}
-
 unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu);
 unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu);
 
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/9] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
  2013-03-05  8:05 ` [PATCH 1/9] sched: Remove some dummy functions Li Zefan
  2013-03-05  8:05 ` [PATCH 2/9] sched: Remove test_sd_parent() Li Zefan
@ 2013-03-05  8:06 ` Li Zefan
  2013-03-06 14:35   ` [tip:sched/core] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h tip-bot for Li Zefan
  2013-03-05  8:06 ` [PATCH 4/9] sched: Move struct sched_group to kernel/sched/sched.h Li Zefan
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

They are used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 25 -------------------------
 kernel/sched/sched.h  | 26 +++++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e880d7d..f8826d0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -756,31 +756,6 @@ enum cpu_idle_type {
 };
 
 /*
- * Increase resolution of nice-level calculations for 64-bit architectures.
- * The extra resolution improves shares distribution and load balancing of
- * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
- * hierarchies, especially on larger systems. This is not a user-visible change
- * and does not change the user-interface for setting shares/weights.
- *
- * We increase resolution only if we have enough bits to allow this increased
- * resolution (i.e. BITS_PER_LONG > 32). The costs for increasing resolution
- * when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
- * increased costs.
- */
-#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load  */
-# define SCHED_LOAD_RESOLUTION	10
-# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
-# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
-#else
-# define SCHED_LOAD_RESOLUTION	0
-# define scale_load(w)		(w)
-# define scale_load_down(w)	(w)
-#endif
-
-#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
-#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
-
-/*
  * Increase resolution of cpu_power calculations
  */
 #define SCHED_POWER_SHIFT	10
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cc03cfd..709a30c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -33,6 +33,31 @@ extern __read_mostly int scheduler_running;
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/*
+ * Increase resolution of nice-level calculations for 64-bit architectures.
+ * The extra resolution improves shares distribution and load balancing of
+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
+ * hierarchies, especially on larger systems. This is not a user-visible change
+ * and does not change the user-interface for setting shares/weights.
+ *
+ * We increase resolution only if we have enough bits to allow this increased
+ * resolution (i.e. BITS_PER_LONG > 32). The costs for increasing resolution
+ * when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
+ * increased costs.
+ */
+#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load  */
+# define SCHED_LOAD_RESOLUTION	10
+# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
+# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
+#else
+# define SCHED_LOAD_RESOLUTION	0
+# define scale_load(w)		(w)
+# define scale_load_down(w)	(w)
+#endif
+
+#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
+#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
+
 #define NICE_0_LOAD		SCHED_LOAD_SCALE
 #define NICE_0_SHIFT		SCHED_LOAD_SHIFT
 
@@ -784,7 +809,6 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 
-
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/9] sched: Move struct sched_group to kernel/sched/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (2 preceding siblings ...)
  2013-03-05  8:06 ` [PATCH 3/9] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h Li Zefan
@ 2013-03-05  8:06 ` Li Zefan
  2013-03-06 14:36   ` [tip:sched/core] sched: Move struct sched_group to kernel/sched/sched.h tip-bot for Li Zefan
  2013-03-05  8:06 ` [PATCH 5/9] sched: Move wake flags to kernel/sched/sched.h Li Zefan
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

Move struct sched_group_power and sched_group and related inline
functions to kernel/sched/sched.h, as they are used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 58 ++-------------------------------------------------
 kernel/sched/sched.h  | 56 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+), 56 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index f8826d0..0d64130 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -780,62 +780,6 @@ enum cpu_idle_type {
 
 extern int __weak arch_sd_sibiling_asym_packing(void);
 
-struct sched_group_power {
-	atomic_t ref;
-	/*
-	 * CPU power of this group, SCHED_LOAD_SCALE being max power for a
-	 * single CPU.
-	 */
-	unsigned int power, power_orig;
-	unsigned long next_update;
-	/*
-	 * Number of busy cpus in this group.
-	 */
-	atomic_t nr_busy_cpus;
-
-	unsigned long cpumask[0]; /* iteration mask */
-};
-
-struct sched_group {
-	struct sched_group *next;	/* Must be a circular list */
-	atomic_t ref;
-
-	unsigned int group_weight;
-	struct sched_group_power *sgp;
-
-	/*
-	 * The CPUs this group covers.
-	 *
-	 * NOTE: this field is variable length. (Allocated dynamically
-	 * by attaching extra space to the end of the structure,
-	 * depending on how many CPUs the kernel has booted up with)
-	 */
-	unsigned long cpumask[0];
-};
-
-static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
-{
-	return to_cpumask(sg->cpumask);
-}
-
-/*
- * cpumask masking which cpus in the group are allowed to iterate up the domain
- * tree.
- */
-static inline struct cpumask *sched_group_mask(struct sched_group *sg)
-{
-	return to_cpumask(sg->sgp->cpumask);
-}
-
-/**
- * group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
- * @group: The group whose first cpu is to be returned.
- */
-static inline unsigned int group_first_cpu(struct sched_group *group)
-{
-	return cpumask_first(sched_group_cpus(group));
-}
-
 struct sched_domain_attr {
 	int relax_domain_level;
 };
@@ -846,6 +790,8 @@ struct sched_domain_attr {
 
 extern int sched_domain_level_max;
 
+struct sched_group;
+
 struct sched_domain {
 	/* These fields must be setup */
 	struct sched_domain *parent;	/* top domain must be null terminated */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 709a30c..1a4a2b1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -572,6 +572,62 @@ static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_id);
 
+struct sched_group_power {
+	atomic_t ref;
+	/*
+	 * CPU power of this group, SCHED_LOAD_SCALE being max power for a
+	 * single CPU.
+	 */
+	unsigned int power, power_orig;
+	unsigned long next_update;
+	/*
+	 * Number of busy cpus in this group.
+	 */
+	atomic_t nr_busy_cpus;
+
+	unsigned long cpumask[0]; /* iteration mask */
+};
+
+struct sched_group {
+	struct sched_group *next;	/* Must be a circular list */
+	atomic_t ref;
+
+	unsigned int group_weight;
+	struct sched_group_power *sgp;
+
+	/*
+	 * The CPUs this group covers.
+	 *
+	 * NOTE: this field is variable length. (Allocated dynamically
+	 * by attaching extra space to the end of the structure,
+	 * depending on how many CPUs the kernel has booted up with)
+	 */
+	unsigned long cpumask[0];
+};
+
+static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
+{
+	return to_cpumask(sg->cpumask);
+}
+
+/*
+ * cpumask masking which cpus in the group are allowed to iterate up the domain
+ * tree.
+ */
+static inline struct cpumask *sched_group_mask(struct sched_group *sg)
+{
+	return to_cpumask(sg->sgp->cpumask);
+}
+
+/**
+ * group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
+ * @group: The group whose first cpu is to be returned.
+ */
+static inline unsigned int group_first_cpu(struct sched_group *group)
+{
+	return cpumask_first(sched_group_cpus(group));
+}
+
 extern int group_balance_cpu(struct sched_group *sg);
 
 #endif /* CONFIG_SMP */
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/9] sched: Move wake flags to kernel/sched/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (3 preceding siblings ...)
  2013-03-05  8:06 ` [PATCH 4/9] sched: Move struct sched_group to kernel/sched/sched.h Li Zefan
@ 2013-03-05  8:06 ` Li Zefan
  2013-03-06 14:37   ` [tip:sched/core] " tip-bot for Li Zefan
  2013-03-05  8:06 ` [PATCH 6/9] sched: Move struct sched_class " Li Zefan
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

They are used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 7 -------
 kernel/sched/sched.h  | 7 +++++++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0d64130..863b505 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -920,13 +920,6 @@ struct uts_namespace;
 struct rq;
 struct sched_domain;
 
-/*
- * wake flags
- */
-#define WF_SYNC		0x01		/* waker goes to sleep after wakup */
-#define WF_FORK		0x02		/* child wakeup after fork */
-#define WF_MIGRATED	0x04		/* internal use, task got migrated */
-
 #define ENQUEUE_WAKEUP		1
 #define ENQUEUE_HEAD		2
 #ifdef CONFIG_SMP
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1a4a2b1..4e5c2af 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -865,6 +865,13 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 
+/*
+ * wake flags
+ */
+#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
+#define WF_FORK		0x02		/* child wakeup after fork */
+#define WF_MIGRATED	0x4		/* internal use, task got migrated */
+
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 6/9] sched: Move struct sched_class to kernel/sched/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (4 preceding siblings ...)
  2013-03-05  8:06 ` [PATCH 5/9] sched: Move wake flags to kernel/sched/sched.h Li Zefan
@ 2013-03-05  8:06 ` Li Zefan
  2013-03-06 14:38   ` [tip:sched/core] sched: Move struct sched_class to kernel/sched/sched.h tip-bot for Li Zefan
  2013-03-05  8:07 ` [PATCH 7/9] sched: Make default_scale_freq_power() static Li Zefan
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

It's used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 59 ---------------------------------------------------
 kernel/sched/sched.h  | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 59 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 863b505..04b834f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -917,65 +917,6 @@ struct mempolicy;
 struct pipe_inode_info;
 struct uts_namespace;
 
-struct rq;
-struct sched_domain;
-
-#define ENQUEUE_WAKEUP		1
-#define ENQUEUE_HEAD		2
-#ifdef CONFIG_SMP
-#define ENQUEUE_WAKING		4	/* sched_class::task_waking was called */
-#else
-#define ENQUEUE_WAKING		0
-#endif
-
-#define DEQUEUE_SLEEP		1
-
-struct sched_class {
-	const struct sched_class *next;
-
-	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
-	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
-	void (*yield_task) (struct rq *rq);
-	bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
-
-	void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
-
-	struct task_struct * (*pick_next_task) (struct rq *rq);
-	void (*put_prev_task) (struct rq *rq, struct task_struct *p);
-
-#ifdef CONFIG_SMP
-	int  (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
-	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
-
-	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
-	void (*post_schedule) (struct rq *this_rq);
-	void (*task_waking) (struct task_struct *task);
-	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
-
-	void (*set_cpus_allowed)(struct task_struct *p,
-				 const struct cpumask *newmask);
-
-	void (*rq_online)(struct rq *rq);
-	void (*rq_offline)(struct rq *rq);
-#endif
-
-	void (*set_curr_task) (struct rq *rq);
-	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
-	void (*task_fork) (struct task_struct *p);
-
-	void (*switched_from) (struct rq *this_rq, struct task_struct *task);
-	void (*switched_to) (struct rq *this_rq, struct task_struct *task);
-	void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
-			     int oldprio);
-
-	unsigned int (*get_rr_interval) (struct rq *rq,
-					 struct task_struct *task);
-
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_move_group) (struct task_struct *p, int on_rq);
-#endif
-};
-
 struct load_weight {
 	unsigned long weight, inv_weight;
 };
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4e5c2af..eca526d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -951,6 +951,61 @@ enum cpuacct_stat_index {
 	CPUACCT_STAT_NSTATS,
 };
 
+#define ENQUEUE_WAKEUP		1
+#define ENQUEUE_HEAD		2
+#ifdef CONFIG_SMP
+#define ENQUEUE_WAKING		4	/* sched_class::task_waking was called */
+#else
+#define ENQUEUE_WAKING		0
+#endif
+
+#define DEQUEUE_SLEEP		1
+
+struct sched_class {
+	const struct sched_class *next;
+
+	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
+	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
+	void (*yield_task) (struct rq *rq);
+	bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
+
+	void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
+
+	struct task_struct * (*pick_next_task) (struct rq *rq);
+	void (*put_prev_task) (struct rq *rq, struct task_struct *p);
+
+#ifdef CONFIG_SMP
+	int  (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
+	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
+
+	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
+	void (*post_schedule) (struct rq *this_rq);
+	void (*task_waking) (struct task_struct *task);
+	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
+
+	void (*set_cpus_allowed)(struct task_struct *p,
+				 const struct cpumask *newmask);
+
+	void (*rq_online)(struct rq *rq);
+	void (*rq_offline)(struct rq *rq);
+#endif
+
+	void (*set_curr_task) (struct rq *rq);
+	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
+	void (*task_fork) (struct task_struct *p);
+
+	void (*switched_from) (struct rq *this_rq, struct task_struct *task);
+	void (*switched_to) (struct rq *this_rq, struct task_struct *task);
+	void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
+			     int oldprio);
+
+	unsigned int (*get_rr_interval) (struct rq *rq,
+					 struct task_struct *task);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	void (*task_move_group) (struct task_struct *p, int on_rq);
+#endif
+};
 
 #define sched_class_highest (&stop_sched_class)
 #define for_each_class(class) \
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 7/9] sched: Make default_scale_freq_power() static
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (5 preceding siblings ...)
  2013-03-05  8:06 ` [PATCH 6/9] sched: Move struct sched_class " Li Zefan
@ 2013-03-05  8:07 ` Li Zefan
  2013-03-06 14:40   ` [tip:sched/core] " tip-bot for Li Zefan
  2013-03-05  8:07 ` [PATCH 8/9] sched: Move group scheduling functions out of include/linux/sched.h Li Zefan
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:07 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

As default_scale_{freq,smt}_power() and scale_rt_power() are used
only in kernel/sched/fair.c, annotate them as static functions.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 3 ---
 kernel/sched/fair.c   | 6 +++---
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 04b834f..eadd113 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -880,9 +880,6 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu);
-unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu);
-
 bool cpus_share_cache(int this_cpu, int that_cpu);
 
 #else /* CONFIG_SMP */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a33e59..9f23112 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4245,7 +4245,7 @@ static inline int get_sd_load_idx(struct sched_domain *sd,
 	return load_idx;
 }
 
-unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
+static unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
 {
 	return SCHED_POWER_SCALE;
 }
@@ -4255,7 +4255,7 @@ unsigned long __weak arch_scale_freq_power(struct sched_domain *sd, int cpu)
 	return default_scale_freq_power(sd, cpu);
 }
 
-unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
+static unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
 {
 	unsigned long weight = sd->span_weight;
 	unsigned long smt_gain = sd->smt_gain;
@@ -4270,7 +4270,7 @@ unsigned long __weak arch_scale_smt_power(struct sched_domain *sd, int cpu)
 	return default_scale_smt_power(sd, cpu);
 }
 
-unsigned long scale_rt_power(int cpu)
+static unsigned long scale_rt_power(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	u64 total, available, age_stamp, avg;
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 8/9] sched: Move group scheduling functions out of include/linux/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (6 preceding siblings ...)
  2013-03-05  8:07 ` [PATCH 7/9] sched: Make default_scale_freq_power() static Li Zefan
@ 2013-03-05  8:07 ` Li Zefan
  2013-03-06 14:41   ` [tip:sched/core] " tip-bot for Li Zefan
  2013-03-05  8:07 ` [PATCH 9/9] sched: Remove double declaration of root_task_group Li Zefan
  2013-03-05 16:43 ` [PATCH 0/9] sched: Shrink include/linux/sched.h Lai Jiangshan
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:07 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

- Make sched_group_{set_,}rt_runtime(), sched_group_{set_,}rt_period() and
sched_rt_can_attach() static.

- Move sched_{create,destroy,online,offline}_group() to kernel/sched/sched.h.

- Remove declaration of sched_group_shares().

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 include/linux/sched.h | 21 ---------------------
 kernel/sched/core.c   | 10 +++++-----
 kernel/sched/sched.h  | 12 ++++++++++++
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index eadd113..fc039ce 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2512,28 +2512,7 @@ extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 
 #ifdef CONFIG_CGROUP_SCHED
-
 extern struct task_group root_task_group;
-
-extern struct task_group *sched_create_group(struct task_group *parent);
-extern void sched_online_group(struct task_group *tg,
-			       struct task_group *parent);
-extern void sched_destroy_group(struct task_group *tg);
-extern void sched_offline_group(struct task_group *tg);
-extern void sched_move_task(struct task_struct *tsk);
-#ifdef CONFIG_FAIR_GROUP_SCHED
-extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
-extern unsigned long sched_group_shares(struct task_group *tg);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-extern int sched_group_set_rt_runtime(struct task_group *tg,
-				      long rt_runtime_us);
-extern long sched_group_rt_runtime(struct task_group *tg);
-extern int sched_group_set_rt_period(struct task_group *tg,
-				      long rt_period_us);
-extern long sched_group_rt_period(struct task_group *tg);
-extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
-#endif
 #endif /* CONFIG_CGROUP_SCHED */
 
 extern int task_can_switch_user(struct user_struct *up,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624..9ad26c9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7455,7 +7455,7 @@ unlock:
 	return err;
 }
 
-int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
+static int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
 {
 	u64 rt_runtime, rt_period;
 
@@ -7467,7 +7467,7 @@ int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
 }
 
-long sched_group_rt_runtime(struct task_group *tg)
+static long sched_group_rt_runtime(struct task_group *tg)
 {
 	u64 rt_runtime_us;
 
@@ -7479,7 +7479,7 @@ long sched_group_rt_runtime(struct task_group *tg)
 	return rt_runtime_us;
 }
 
-int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
+static int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
 {
 	u64 rt_runtime, rt_period;
 
@@ -7492,7 +7492,7 @@ int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
 }
 
-long sched_group_rt_period(struct task_group *tg)
+static long sched_group_rt_period(struct task_group *tg)
 {
 	u64 rt_period_us;
 
@@ -7527,7 +7527,7 @@ static int sched_rt_global_constraints(void)
 	return ret;
 }
 
-int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+static int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
 {
 	/* Don't accept realtime tasks when there is no way for them to run */
 	if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eca526d..304fc1c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -221,6 +221,18 @@ extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
 		struct sched_rt_entity *rt_se, int cpu,
 		struct sched_rt_entity *parent);
 
+extern struct task_group *sched_create_group(struct task_group *parent);
+extern void sched_online_group(struct task_group *tg,
+			       struct task_group *parent);
+extern void sched_destroy_group(struct task_group *tg);
+extern void sched_offline_group(struct task_group *tg);
+
+extern void sched_move_task(struct task_struct *tsk);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
+#endif
+
 #else /* CONFIG_CGROUP_SCHED */
 
 struct cfs_bandwidth { };
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 9/9] sched: Remove double declaration of root_task_group
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (7 preceding siblings ...)
  2013-03-05  8:07 ` [PATCH 8/9] sched: Move group scheduling functions out of include/linux/sched.h Li Zefan
@ 2013-03-05  8:07 ` Li Zefan
  2013-03-06 14:42   ` [tip:sched/core] " tip-bot for Li Zefan
  2013-03-05 16:43 ` [PATCH 0/9] sched: Shrink include/linux/sched.h Lai Jiangshan
  9 siblings, 1 reply; 20+ messages in thread
From: Li Zefan @ 2013-03-05  8:07 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, LKML

It's already declared in include/linux/sched.h.

Signed-off-by: Li Zefan <lizefan@huawei.com>
---
 kernel/sched/core.c  | 4 ++++
 kernel/sched/sched.h | 5 -----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9ad26c9..42ecbcb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6861,6 +6861,10 @@ int in_sched_functions(unsigned long addr)
 }
 
 #ifdef CONFIG_CGROUP_SCHED
+/*
+ * Default task group.
+ * Every task in system belongs to this group at bootup.
+ */
 struct task_group root_task_group;
 LIST_HEAD(task_groups);
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 304fc1c..30bebb9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -179,11 +179,6 @@ struct task_group {
 #define MAX_SHARES	(1UL << 18)
 #endif
 
-/* Default task group.
- *	Every task in system belong to this group at bootup.
- */
-extern struct task_group root_task_group;
-
 typedef int (*tg_visitor)(struct task_group *, void *);
 
 extern int walk_tg_tree_from(struct task_group *from,
-- 
1.8.0.2

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 0/9] sched: Shrink include/linux/sched.h
  2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
                   ` (8 preceding siblings ...)
  2013-03-05  8:07 ` [PATCH 9/9] sched: Remove double declaration of root_task_group Li Zefan
@ 2013-03-05 16:43 ` Lai Jiangshan
  9 siblings, 0 replies; 20+ messages in thread
From: Lai Jiangshan @ 2013-03-05 16:43 UTC (permalink / raw)
  To: Li Zefan; +Cc: Ingo Molnar, Peter Zijlstra, LKML

On Tue, Mar 5, 2013 at 4:05 PM, Li Zefan <lizefan@huawei.com> wrote:
> While working of a cgroup patch which also touched include/linux/sched.h,
> I found some function/macro/structure declarations can be moved to
> kernel/sched/sched.h, and some can even be total removed, so here's
> the patchset.
>
> The result is a reduction of ~200 LOC from include/linux/sched.h.

It looks good to me.
Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>

>
> 0001-sched-Remove-some-dummpy-functions.patch
> 0002-sched-Remove-test_sd_parent.patch
> 0003-sched-Move-SCHED_LOAD_SHIFT-macros-to-kernel-sched-s.patch
> 0004-sched-Move-struct-sched_group-to-kernel-sched-sched..patch
> 0005-sched-Move-wake-flags-to-kernel-sched-sched.h.patch
> 0006-sched-Move-struct-sched_class-to-kernel-sched-sched..patch
> 0007-sched-Make-default_scale_freq_power-static.patch
> 0008-sched-Move-group-scheduling-functions-out-of-include.patch
> 0009-sched-Remove-double-declaration-of-root_task_group.patch
>
> --
>  include/linux/sched.h | 194 +-------------------------------------------------
>  kernel/sched/core.c   |  14 ++--
>  kernel/sched/fair.c   |   6 +-
>  kernel/sched/sched.h  | 159 +++++++++++++++++++++++++++++++++++++++--
>  4 files changed, 168 insertions(+), 205 deletions(-)
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [tip:sched/core] sched: Remove some dummy functions
  2013-03-05  8:05 ` [PATCH 1/9] sched: Remove some dummpy functions Li Zefan
@ 2013-03-06 14:32   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:32 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  19a37d1cd5465c10d669a296a2ea24b4c985363b
Gitweb:     http://git.kernel.org/tip/19a37d1cd5465c10d669a296a2ea24b4c985363b
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:05:28 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:28 +0100

sched: Remove some dummy functions

No one will call those functions if CONFIG_SCHED_DEBUG=n.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A748.3050206@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index d35d2b6..2715fbb 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -127,18 +127,6 @@ extern void proc_sched_show_task(struct task_struct *p, struct seq_file *m);
 extern void proc_sched_set_task(struct task_struct *p);
 extern void
 print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq);
-#else
-static inline void
-proc_sched_show_task(struct task_struct *p, struct seq_file *m)
-{
-}
-static inline void proc_sched_set_task(struct task_struct *p)
-{
-}
-static inline void
-print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
-{
-}
 #endif
 
 /*

^ permalink raw reply related	[flat|nested] 20+ messages in thread
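For readers outside the kernel tree, a standalone sketch of the #ifdef-stub
pattern this patch removes (all names here are hypothetical, not from
sched.h): empty static inline bodies normally let callers compile when a
config option is off, but the patch drops them because nothing calls these
functions when CONFIG_SCHED_DEBUG=n, so the stubs were dead code rather than
a needed fallback.

```c
#include <assert.h>

/* Toy stand-in for a CONFIG_* option; in the kernel this would come
 * from Kconfig via autoconf.h. */
#define CONFIG_FEATURE_DEBUG 1

#if CONFIG_FEATURE_DEBUG
/* real implementation, available when the feature is enabled */
static int feature_report(int v)
{
	return v * 2;
}
#else
/* empty stub: only justified if some caller still exists in the =n
 * build -- exactly what was no longer true for proc_sched_show_task()
 * and friends */
static inline int feature_report(int v)
{
	(void)v;
	return 0;
}
#endif
```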

* [tip:sched/core] sched: Remove test_sd_parent()
  2013-03-05  8:05 ` [PATCH 2/9] sched: Remove test_sd_parent() Li Zefan
@ 2013-03-06 14:34   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:34 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  090b582f27ac7b6714661020033160130e5297bd
Gitweb:     http://git.kernel.org/tip/090b582f27ac7b6714661020033160130e5297bd
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:05:51 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:29 +0100

sched: Remove test_sd_parent()

It's unused.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A75F.4070202@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2715fbb..e880d7d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -959,15 +959,6 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-/* Test a flag in parent sched domain */
-static inline int test_sd_parent(struct sched_domain *sd, int flag)
-{
-	if (sd->parent && (sd->parent->flags & flag))
-		return 1;
-
-	return 0;
-}
-
 unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu);
 unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu);
 

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [tip:sched/core] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h
  2013-03-05  8:06 ` [PATCH 3/9] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h Li Zefan
@ 2013-03-06 14:35   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:35 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  cc1f4b1f3faed9f2040eff2a75f510b424b3cf18
Gitweb:     http://git.kernel.org/tip/cc1f4b1f3faed9f2040eff2a75f510b424b3cf18
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:06:09 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:30 +0100

sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h

They are used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A771.4070104@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 25 -------------------------
 kernel/sched/sched.h  | 26 +++++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e880d7d..f8826d0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -756,31 +756,6 @@ enum cpu_idle_type {
 };
 
 /*
- * Increase resolution of nice-level calculations for 64-bit architectures.
- * The extra resolution improves shares distribution and load balancing of
- * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
- * hierarchies, especially on larger systems. This is not a user-visible change
- * and does not change the user-interface for setting shares/weights.
- *
- * We increase resolution only if we have enough bits to allow this increased
- * resolution (i.e. BITS_PER_LONG > 32). The costs for increasing resolution
- * when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
- * increased costs.
- */
-#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load  */
-# define SCHED_LOAD_RESOLUTION	10
-# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
-# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
-#else
-# define SCHED_LOAD_RESOLUTION	0
-# define scale_load(w)		(w)
-# define scale_load_down(w)	(w)
-#endif
-
-#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
-#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
-
-/*
  * Increase resolution of cpu_power calculations
  */
 #define SCHED_POWER_SHIFT	10
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cc03cfd..709a30c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -33,6 +33,31 @@ extern __read_mostly int scheduler_running;
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/*
+ * Increase resolution of nice-level calculations for 64-bit architectures.
+ * The extra resolution improves shares distribution and load balancing of
+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
+ * hierarchies, especially on larger systems. This is not a user-visible change
+ * and does not change the user-interface for setting shares/weights.
+ *
+ * We increase resolution only if we have enough bits to allow this increased
+ * resolution (i.e. BITS_PER_LONG > 32). The costs for increasing resolution
+ * when BITS_PER_LONG <= 32 are pretty high and the returns do not justify the
+ * increased costs.
+ */
+#if 0 /* BITS_PER_LONG > 32 -- currently broken: it increases power usage under light load  */
+# define SCHED_LOAD_RESOLUTION	10
+# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
+# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
+#else
+# define SCHED_LOAD_RESOLUTION	0
+# define scale_load(w)		(w)
+# define scale_load_down(w)	(w)
+#endif
+
+#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
+#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
+
 #define NICE_0_LOAD		SCHED_LOAD_SCALE
 #define NICE_0_SHIFT		SCHED_LOAD_SHIFT
 
@@ -784,7 +809,6 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 
-
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;

^ permalink raw reply related	[flat|nested] 20+ messages in thread
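The SCHED_LOAD_* macros moved above implement a simple fixed-point scheme:
weights are shifted up by SCHED_LOAD_RESOLUTION bits before load sums are
accumulated and shifted back down for user-visible values. A minimal
user-space sketch of that arithmetic, assuming the extra-resolution branch
(the one currently under #if 0) were enabled:

```c
#include <assert.h>

/* Same shape as the macros in the patch, reproduced standalone so the
 * arithmetic can be checked outside the kernel. */
#define SCHED_LOAD_RESOLUTION	10
#define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
#define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)

#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)

/* nice-0 weight is 1024; scaled up it equals SCHED_LOAD_SCALE, and
 * scaling back down recovers the original weight exactly. */
```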

* [tip:sched/core] sched: Move struct sched_group to kernel/sched/sched.h
  2013-03-05  8:06 ` [PATCH 4/9] sched: Move struct sched_group to kernel/sched/sched.h Li Zefan
@ 2013-03-06 14:36   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:36 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  5e6521eaa1ee581a13b904f35b80c5efeb2baccb
Gitweb:     http://git.kernel.org/tip/5e6521eaa1ee581a13b904f35b80c5efeb2baccb
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:06:23 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:31 +0100

sched: Move struct sched_group to kernel/sched/sched.h

Move struct sched_group_power and sched_group and related inline
functions to kernel/sched/sched.h, as they are used internally
only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A77F.2010705@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 58 ++-------------------------------------------------
 kernel/sched/sched.h  | 56 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+), 56 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index f8826d0..0d64130 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -780,62 +780,6 @@ enum cpu_idle_type {
 
 extern int __weak arch_sd_sibiling_asym_packing(void);
 
-struct sched_group_power {
-	atomic_t ref;
-	/*
-	 * CPU power of this group, SCHED_LOAD_SCALE being max power for a
-	 * single CPU.
-	 */
-	unsigned int power, power_orig;
-	unsigned long next_update;
-	/*
-	 * Number of busy cpus in this group.
-	 */
-	atomic_t nr_busy_cpus;
-
-	unsigned long cpumask[0]; /* iteration mask */
-};
-
-struct sched_group {
-	struct sched_group *next;	/* Must be a circular list */
-	atomic_t ref;
-
-	unsigned int group_weight;
-	struct sched_group_power *sgp;
-
-	/*
-	 * The CPUs this group covers.
-	 *
-	 * NOTE: this field is variable length. (Allocated dynamically
-	 * by attaching extra space to the end of the structure,
-	 * depending on how many CPUs the kernel has booted up with)
-	 */
-	unsigned long cpumask[0];
-};
-
-static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
-{
-	return to_cpumask(sg->cpumask);
-}
-
-/*
- * cpumask masking which cpus in the group are allowed to iterate up the domain
- * tree.
- */
-static inline struct cpumask *sched_group_mask(struct sched_group *sg)
-{
-	return to_cpumask(sg->sgp->cpumask);
-}
-
-/**
- * group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
- * @group: The group whose first cpu is to be returned.
- */
-static inline unsigned int group_first_cpu(struct sched_group *group)
-{
-	return cpumask_first(sched_group_cpus(group));
-}
-
 struct sched_domain_attr {
 	int relax_domain_level;
 };
@@ -846,6 +790,8 @@ struct sched_domain_attr {
 
 extern int sched_domain_level_max;
 
+struct sched_group;
+
 struct sched_domain {
 	/* These fields must be setup */
 	struct sched_domain *parent;	/* top domain must be null terminated */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 709a30c..1a4a2b1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -572,6 +572,62 @@ static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
 DECLARE_PER_CPU(struct sched_domain *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_id);
 
+struct sched_group_power {
+	atomic_t ref;
+	/*
+	 * CPU power of this group, SCHED_LOAD_SCALE being max power for a
+	 * single CPU.
+	 */
+	unsigned int power, power_orig;
+	unsigned long next_update;
+	/*
+	 * Number of busy cpus in this group.
+	 */
+	atomic_t nr_busy_cpus;
+
+	unsigned long cpumask[0]; /* iteration mask */
+};
+
+struct sched_group {
+	struct sched_group *next;	/* Must be a circular list */
+	atomic_t ref;
+
+	unsigned int group_weight;
+	struct sched_group_power *sgp;
+
+	/*
+	 * The CPUs this group covers.
+	 *
+	 * NOTE: this field is variable length. (Allocated dynamically
+	 * by attaching extra space to the end of the structure,
+	 * depending on how many CPUs the kernel has booted up with)
+	 */
+	unsigned long cpumask[0];
+};
+
+static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
+{
+	return to_cpumask(sg->cpumask);
+}
+
+/*
+ * cpumask masking which cpus in the group are allowed to iterate up the domain
+ * tree.
+ */
+static inline struct cpumask *sched_group_mask(struct sched_group *sg)
+{
+	return to_cpumask(sg->sgp->cpumask);
+}
+
+/**
+ * group_first_cpu - Returns the first cpu in the cpumask of a sched_group.
+ * @group: The group whose first cpu is to be returned.
+ */
+static inline unsigned int group_first_cpu(struct sched_group *group)
+{
+	return cpumask_first(sched_group_cpus(group));
+}
+
 extern int group_balance_cpu(struct sched_group *sg);
 
 #endif /* CONFIG_SMP */

^ permalink raw reply related	[flat|nested] 20+ messages in thread
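The trailing cpumask[0] in struct sched_group and sched_group_power above is
the variable-length-member idiom: one allocation carries a fixed header plus
a bitmap sized at boot for however many CPUs exist. A hedged user-space
illustration of the same idiom (names are invented, and C99's flexible array
member is used in place of the kernel's [0] extension):

```c
#include <stdlib.h>

/* Hypothetical header-plus-bitmap layout, analogous to sched_group */
struct group_hdr {
	unsigned int weight;
	unsigned long mask[];	/* C99 flexible array member */
};

/* One calloc() covers the fixed fields and nr_words bitmap words, so
 * the bitmap sits contiguously after the header with no second
 * allocation or pointer chase. */
static struct group_hdr *group_alloc(size_t nr_words)
{
	return calloc(1, sizeof(struct group_hdr) +
			 nr_words * sizeof(unsigned long));
}
```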

* [tip:sched/core] sched: Move wake flags to kernel/sched/sched.h
  2013-03-05  8:06 ` [PATCH 5/9] sched: Move wake flags to kernel/sched/sched.h Li Zefan
@ 2013-03-06 14:37   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:37 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  b13095f07f25464de65f5ce5ea94e16813d67488
Gitweb:     http://git.kernel.org/tip/b13095f07f25464de65f5ce5ea94e16813d67488
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:06:38 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:32 +0100

sched: Move wake flags to kernel/sched/sched.h

They are used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A78E.7040609@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 7 -------
 kernel/sched/sched.h  | 7 +++++++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0d64130..863b505 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -920,13 +920,6 @@ struct uts_namespace;
 struct rq;
 struct sched_domain;
 
-/*
- * wake flags
- */
-#define WF_SYNC		0x01		/* waker goes to sleep after wakup */
-#define WF_FORK		0x02		/* child wakeup after fork */
-#define WF_MIGRATED	0x04		/* internal use, task got migrated */
-
 #define ENQUEUE_WAKEUP		1
 #define ENQUEUE_HEAD		2
 #ifdef CONFIG_SMP
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1a4a2b1..4e5c2af 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -865,6 +865,13 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
 }
 #endif /* __ARCH_WANT_UNLOCKED_CTXSW */
 
+/*
+ * wake flags
+ */
+#define WF_SYNC		0x01		/* waker goes to sleep after wakeup */
+#define WF_FORK		0x02		/* child wakeup after fork */
+#define WF_MIGRATED	0x4		/* internal use, task got migrated */
+
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
 	lw->weight += inc;

^ permalink raw reply related	[flat|nested] 20+ messages in thread
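The wake flags moved above are distinct bits so a waker can OR several
together and the scheduler can test each with a bitwise AND. A minimal
sketch (the helper name is hypothetical; the kernel open-codes these tests):

```c
/* Same values as the patch's new kernel/sched/sched.h copy */
#define WF_SYNC		0x01	/* waker goes to sleep after wakeup */
#define WF_FORK		0x02	/* child wakeup after fork */
#define WF_MIGRATED	0x04	/* internal use, task got migrated */

/* A flag test is a single AND against the combined flag word */
static int is_sync_wakeup(int wake_flags)
{
	return (wake_flags & WF_SYNC) != 0;
}
```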

* [tip:sched/core] sched: Move struct sched_class to kernel/sched/sched.h
  2013-03-05  8:06 ` [PATCH 6/9] sched: Move struct sched_class " Li Zefan
@ 2013-03-06 14:38   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  c82ba9fa7588dfd02d4dc99ad1af486304bc424c
Gitweb:     http://git.kernel.org/tip/c82ba9fa7588dfd02d4dc99ad1af486304bc424c
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:06:55 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:33 +0100

sched: Move struct sched_class to kernel/sched/sched.h

It's used internally only.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A79F.8090502@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 59 ---------------------------------------------------
 kernel/sched/sched.h  | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 59 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 863b505..04b834f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -917,65 +917,6 @@ struct mempolicy;
 struct pipe_inode_info;
 struct uts_namespace;
 
-struct rq;
-struct sched_domain;
-
-#define ENQUEUE_WAKEUP		1
-#define ENQUEUE_HEAD		2
-#ifdef CONFIG_SMP
-#define ENQUEUE_WAKING		4	/* sched_class::task_waking was called */
-#else
-#define ENQUEUE_WAKING		0
-#endif
-
-#define DEQUEUE_SLEEP		1
-
-struct sched_class {
-	const struct sched_class *next;
-
-	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
-	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
-	void (*yield_task) (struct rq *rq);
-	bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
-
-	void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
-
-	struct task_struct * (*pick_next_task) (struct rq *rq);
-	void (*put_prev_task) (struct rq *rq, struct task_struct *p);
-
-#ifdef CONFIG_SMP
-	int  (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
-	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
-
-	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
-	void (*post_schedule) (struct rq *this_rq);
-	void (*task_waking) (struct task_struct *task);
-	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
-
-	void (*set_cpus_allowed)(struct task_struct *p,
-				 const struct cpumask *newmask);
-
-	void (*rq_online)(struct rq *rq);
-	void (*rq_offline)(struct rq *rq);
-#endif
-
-	void (*set_curr_task) (struct rq *rq);
-	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
-	void (*task_fork) (struct task_struct *p);
-
-	void (*switched_from) (struct rq *this_rq, struct task_struct *task);
-	void (*switched_to) (struct rq *this_rq, struct task_struct *task);
-	void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
-			     int oldprio);
-
-	unsigned int (*get_rr_interval) (struct rq *rq,
-					 struct task_struct *task);
-
-#ifdef CONFIG_FAIR_GROUP_SCHED
-	void (*task_move_group) (struct task_struct *p, int on_rq);
-#endif
-};
-
 struct load_weight {
 	unsigned long weight, inv_weight;
 };
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4e5c2af..eca526d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -951,6 +951,61 @@ enum cpuacct_stat_index {
 	CPUACCT_STAT_NSTATS,
 };
 
+#define ENQUEUE_WAKEUP		1
+#define ENQUEUE_HEAD		2
+#ifdef CONFIG_SMP
+#define ENQUEUE_WAKING		4	/* sched_class::task_waking was called */
+#else
+#define ENQUEUE_WAKING		0
+#endif
+
+#define DEQUEUE_SLEEP		1
+
+struct sched_class {
+	const struct sched_class *next;
+
+	void (*enqueue_task) (struct rq *rq, struct task_struct *p, int flags);
+	void (*dequeue_task) (struct rq *rq, struct task_struct *p, int flags);
+	void (*yield_task) (struct rq *rq);
+	bool (*yield_to_task) (struct rq *rq, struct task_struct *p, bool preempt);
+
+	void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int flags);
+
+	struct task_struct * (*pick_next_task) (struct rq *rq);
+	void (*put_prev_task) (struct rq *rq, struct task_struct *p);
+
+#ifdef CONFIG_SMP
+	int  (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
+	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
+
+	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
+	void (*post_schedule) (struct rq *this_rq);
+	void (*task_waking) (struct task_struct *task);
+	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
+
+	void (*set_cpus_allowed)(struct task_struct *p,
+				 const struct cpumask *newmask);
+
+	void (*rq_online)(struct rq *rq);
+	void (*rq_offline)(struct rq *rq);
+#endif
+
+	void (*set_curr_task) (struct rq *rq);
+	void (*task_tick) (struct rq *rq, struct task_struct *p, int queued);
+	void (*task_fork) (struct task_struct *p);
+
+	void (*switched_from) (struct rq *this_rq, struct task_struct *task);
+	void (*switched_to) (struct rq *this_rq, struct task_struct *task);
+	void (*prio_changed) (struct rq *this_rq, struct task_struct *task,
+			     int oldprio);
+
+	unsigned int (*get_rr_interval) (struct rq *rq,
+					 struct task_struct *task);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	void (*task_move_group) (struct task_struct *p, int on_rq);
+#endif
+};
 
 #define sched_class_highest (&stop_sched_class)
 #define for_each_class(class) \

^ permalink raw reply related	[flat|nested] 20+ messages in thread
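struct sched_class above is an ops-table: a const struct of function
pointers per policy, chained through ->next so generic code can walk the
classes in priority order and take the first task offered. A toy version of
that dispatch pattern (all names are illustrative, not kernel API):

```c
#include <stddef.h>

/* Cut-down analogue of sched_class: one method plus the ->next chain */
struct toy_class {
	const struct toy_class *next;
	const char *name;
	int (*pick_next)(void);	/* returns a task id, or -1 for none */
};

static int pick_idle(void) { return -1; }	/* never has work */
static int pick_fair(void) { return 42; }	/* always offers task 42 */

/* Chained highest-priority first, like for_each_class() */
static const struct toy_class idle_class = { NULL, "idle", pick_idle };
static const struct toy_class fair_class = { &idle_class, "fair", pick_fair };

/* Walk the chain and take the first class that offers a task */
static int toy_schedule(const struct toy_class *highest)
{
	const struct toy_class *c;

	for (c = highest; c; c = c->next) {
		int task = c->pick_next();
		if (task >= 0)
			return task;
	}
	return -1;
}
```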

* [tip:sched/core] sched: Make default_scale_freq_power() static
  2013-03-05  8:07 ` [PATCH 7/9] sched: Make default_scale_freq_power() static Li Zefan
@ 2013-03-06 14:40   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:40 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  15f803c94bd92b17708aad9e74226fd0b2c9130c
Gitweb:     http://git.kernel.org/tip/15f803c94bd92b17708aad9e74226fd0b2c9130c
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:07:11 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:34 +0100

sched: Make default_scale_freq_power() static

As default_scale_{freq,smt}_power() and update_rt_power() are
used in kernel/sched/fair.c only, annotate them as static
functions.

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A7AF.8010900@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 3 ---
 kernel/sched/fair.c   | 6 +++---
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 04b834f..eadd113 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -880,9 +880,6 @@ extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
 void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
 
-unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu);
-unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu);
-
 bool cpus_share_cache(int this_cpu, int that_cpu);
 
 #else /* CONFIG_SMP */
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a33e59..9f23112 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4245,7 +4245,7 @@ static inline int get_sd_load_idx(struct sched_domain *sd,
 	return load_idx;
 }
 
-unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
+static unsigned long default_scale_freq_power(struct sched_domain *sd, int cpu)
 {
 	return SCHED_POWER_SCALE;
 }
@@ -4255,7 +4255,7 @@ unsigned long __weak arch_scale_freq_power(struct sched_domain *sd, int cpu)
 	return default_scale_freq_power(sd, cpu);
 }
 
-unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
+static unsigned long default_scale_smt_power(struct sched_domain *sd, int cpu)
 {
 	unsigned long weight = sd->span_weight;
 	unsigned long smt_gain = sd->smt_gain;
@@ -4270,7 +4270,7 @@ unsigned long __weak arch_scale_smt_power(struct sched_domain *sd, int cpu)
 	return default_scale_smt_power(sd, cpu);
 }
 
-unsigned long scale_rt_power(int cpu)
+static unsigned long scale_rt_power(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	u64 total, available, age_stamp, avg;

^ permalink raw reply related	[flat|nested] 20+ messages in thread
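The point of the `static` annotations above is linkage: without `static`,
default_scale_freq_power() had external linkage and so needed a declaration
in include/linux/sched.h even though only kernel/sched/fair.c called it.
A small sketch of the distinction, with hypothetical names:

```c
/* Internal linkage: visible only inside this translation unit, so no
 * header declaration is needed and the symbol cannot clash with other
 * files' helpers. */
static unsigned long default_power(void)
{
	return 1024;	/* stand-in for SCHED_POWER_SCALE */
}

/* External linkage: the one entry point other files link against;
 * only this would need declaring in a shared header. */
unsigned long effective_power(void)
{
	return default_power();
}
```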

* [tip:sched/core] sched: Move group scheduling functions out of include/linux/sched.h
  2013-03-05  8:07 ` [PATCH 8/9] sched: Move group scheduling functions out of include/linux/sched.h Li Zefan
@ 2013-03-06 14:41   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:41 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  25cc7da7e6336d3bb6a5bad3d3fa96fce9a81d5b
Gitweb:     http://git.kernel.org/tip/25cc7da7e6336d3bb6a5bad3d3fa96fce9a81d5b
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:07:33 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:34 +0100

sched: Move group scheduling functions out of include/linux/sched.h

- Make sched_group_{set_,}runtime(), sched_group_{set_,}period()
and sched_rt_can_attach() static.

- Move sched_{create,destroy,online,offline}_group() to
kernel/sched/sched.h.

- Remove declaration of sched_group_shares().

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A7C5.3000708@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/sched.h | 21 ---------------------
 kernel/sched/core.c   | 10 +++++-----
 kernel/sched/sched.h  | 12 ++++++++++++
 3 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index eadd113..fc039ce 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2512,28 +2512,7 @@ extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 
 #ifdef CONFIG_CGROUP_SCHED
-
 extern struct task_group root_task_group;
-
-extern struct task_group *sched_create_group(struct task_group *parent);
-extern void sched_online_group(struct task_group *tg,
-			       struct task_group *parent);
-extern void sched_destroy_group(struct task_group *tg);
-extern void sched_offline_group(struct task_group *tg);
-extern void sched_move_task(struct task_struct *tsk);
-#ifdef CONFIG_FAIR_GROUP_SCHED
-extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
-extern unsigned long sched_group_shares(struct task_group *tg);
-#endif
-#ifdef CONFIG_RT_GROUP_SCHED
-extern int sched_group_set_rt_runtime(struct task_group *tg,
-				      long rt_runtime_us);
-extern long sched_group_rt_runtime(struct task_group *tg);
-extern int sched_group_set_rt_period(struct task_group *tg,
-				      long rt_period_us);
-extern long sched_group_rt_period(struct task_group *tg);
-extern int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk);
-#endif
 #endif /* CONFIG_CGROUP_SCHED */
 
 extern int task_can_switch_user(struct user_struct *up,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624..9ad26c9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7455,7 +7455,7 @@ unlock:
 	return err;
 }
 
-int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
+static int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
 {
 	u64 rt_runtime, rt_period;
 
@@ -7467,7 +7467,7 @@ int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
 }
 
-long sched_group_rt_runtime(struct task_group *tg)
+static long sched_group_rt_runtime(struct task_group *tg)
 {
 	u64 rt_runtime_us;
 
@@ -7479,7 +7479,7 @@ long sched_group_rt_runtime(struct task_group *tg)
 	return rt_runtime_us;
 }
 
-int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
+static int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
 {
 	u64 rt_runtime, rt_period;
 
@@ -7492,7 +7492,7 @@ int sched_group_set_rt_period(struct task_group *tg, long rt_period_us)
 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
 }
 
-long sched_group_rt_period(struct task_group *tg)
+static long sched_group_rt_period(struct task_group *tg)
 {
 	u64 rt_period_us;
 
@@ -7527,7 +7527,7 @@ static int sched_rt_global_constraints(void)
 	return ret;
 }
 
-int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
+static int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
 {
 	/* Don't accept realtime tasks when there is no way for them to run */
 	if (rt_task(tsk) && tg->rt_bandwidth.rt_runtime == 0)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index eca526d..304fc1c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -221,6 +221,18 @@ extern void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
 		struct sched_rt_entity *rt_se, int cpu,
 		struct sched_rt_entity *parent);
 
+extern struct task_group *sched_create_group(struct task_group *parent);
+extern void sched_online_group(struct task_group *tg,
+			       struct task_group *parent);
+extern void sched_destroy_group(struct task_group *tg);
+extern void sched_offline_group(struct task_group *tg);
+
+extern void sched_move_task(struct task_struct *tsk);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
+#endif
+
 #else /* CONFIG_CGROUP_SCHED */
 
 struct cfs_bandwidth { };

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [tip:sched/core] sched: Remove double declaration of root_task_group
  2013-03-05  8:07 ` [PATCH 9/9] sched: Remove double declaration of root_task_group Li Zefan
@ 2013-03-06 14:42   ` tip-bot for Li Zefan
  0 siblings, 0 replies; 20+ messages in thread
From: tip-bot for Li Zefan @ 2013-03-06 14:42 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, tglx, lizefan

Commit-ID:  27b4b9319a3c2e8654d45df99ce584c7c2cfe100
Gitweb:     http://git.kernel.org/tip/27b4b9319a3c2e8654d45df99ce584c7c2cfe100
Author:     Li Zefan <lizefan@huawei.com>
AuthorDate: Tue, 5 Mar 2013 16:07:52 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 6 Mar 2013 11:24:35 +0100

sched: Remove double declaration of root_task_group

It's already declared in include/linux/sched.h

Signed-off-by: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/5135A7D8.7000107@huawei.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c  | 4 ++++
 kernel/sched/sched.h | 5 -----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9ad26c9..42ecbcb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6861,6 +6861,10 @@ int in_sched_functions(unsigned long addr)
 }
 
 #ifdef CONFIG_CGROUP_SCHED
+/*
+ * Default task group.
+ * Every task in system belongs to this group at bootup.
+ */
 struct task_group root_task_group;
 LIST_HEAD(task_groups);
 #endif
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 304fc1c..30bebb9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -179,11 +179,6 @@ struct task_group {
 #define MAX_SHARES	(1UL << 18)
 #endif
 
-/* Default task group.
- *	Every task in system belong to this group at bootup.
- */
-extern struct task_group root_task_group;
-
 typedef int (*tg_visitor)(struct task_group *, void *);
 
 extern int walk_tg_tree_from(struct task_group *from,


end of thread, other threads:[~2013-03-06 14:43 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-05  8:05 [PATCH 0/9] sched: Shrink include/linux/sched.h Li Zefan
2013-03-05  8:05 ` [PATCH 1/9] sched: Remove some dummpy functions Li Zefan
2013-03-06 14:32   ` [tip:sched/core] sched: Remove some dummy functions tip-bot for Li Zefan
2013-03-05  8:05 ` [PATCH 2/9] sched: Remove test_sd_parent() Li Zefan
2013-03-06 14:34   ` [tip:sched/core] " tip-bot for Li Zefan
2013-03-05  8:06 ` [PATCH 3/9] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h Li Zefan
2013-03-06 14:35   ` [tip:sched/core] sched: Move SCHED_LOAD_SHIFT macros to kernel/sched/sched.h tip-bot for Li Zefan
2013-03-05  8:06 ` [PATCH 4/9] sched: Move struct sched_group to kernel/sched/sched.h Li Zefan
2013-03-06 14:36   ` [tip:sched/core] sched: Move struct sched_group to kernel/sched/sched.h tip-bot for Li Zefan
2013-03-05  8:06 ` [PATCH 5/9] sched: Move wake flags to kernel/sched/sched.h Li Zefan
2013-03-06 14:37   ` [tip:sched/core] " tip-bot for Li Zefan
2013-03-05  8:06 ` [PATCH 6/9] sched: Move struct sched_class " Li Zefan
2013-03-06 14:38   ` [tip:sched/core] sched: Move struct sched_class to kernel/sched/sched.h tip-bot for Li Zefan
2013-03-05  8:07 ` [PATCH 7/9] sched: Make default_scale_freq_power() static Li Zefan
2013-03-06 14:40   ` [tip:sched/core] " tip-bot for Li Zefan
2013-03-05  8:07 ` [PATCH 8/9] sched: Move group scheduling functions out of include/linux/sched.h Li Zefan
2013-03-06 14:41   ` [tip:sched/core] " tip-bot for Li Zefan
2013-03-05  8:07 ` [PATCH 9/9] sched: Remove double declaration of root_task_group Li Zefan
2013-03-06 14:42   ` [tip:sched/core] " tip-bot for Li Zefan
2013-03-05 16:43 ` [PATCH 0/9] sched: Shrink include/linux/sched.h Lai Jiangshan

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox