public inbox for linux-kernel@vger.kernel.org
* [git] CFS-devel, latest code
@ 2007-09-25 14:44 Ingo Molnar
  2007-09-25 16:04 ` Srivatsa Vaddagiri
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
  0 siblings, 2 replies; 11+ messages in thread
From: Ingo Molnar @ 2007-09-25 14:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Mike Galbraith, Srivatsa Vaddagiri, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton


The latest sched-devel.git tree can be pulled from:
  
   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
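For anyone wanting to try it, one way to fetch and merge the tree on top of a 2.6.23-rc8 checkout (the remote and branch names here are arbitrary assumptions; the URL is the one announced above):

```shell
# add the announced tree as a remote and merge it
git remote add sched-devel \
    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
git fetch sched-devel
git merge sched-devel/master    # branch name assumed
```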
 
This is a quick iteration after yesterday's: a couple of group 
scheduling bugs were found/debugged and fixed by Srivatsa Vaddagiri and 
Mike Galbraith. There's also a yield fix from Dmitry Adamushko, a build 
fix from S.Çağlar Onur and Andrew Morton, a cleanup from Hiroshi 
Shimamoto and the usual stream of goodies from Peter Zijlstra. Rebased 
it to -rc8 as well.

there are no known regressions at the moment in the sched-devel.git 
codebase. (yay :)

	Ingo

----------------------------------------->
the shortlog relative to 2.6.23-rc8:
 
Dmitry Adamushko (9):
      sched: clean up struct load_stat
      sched: clean up schedstat block in dequeue_entity()
      sched: sched_setscheduler() fix
      sched: add set_curr_task() calls
      sched: do not keep current in the tree and get rid of sched_entity::fair_key
      sched: optimize task_new_fair()
      sched: simplify sched_class::yield_task()
      sched: rework enqueue/dequeue_entity() to get rid of set_curr_task()
      sched: yield fix

Hiroshi Shimamoto (1):
      sched: clean up sched_fork()

Ingo Molnar (44):
      sched: fix new-task method
      sched: resched task in task_new_fair()
      sched: small sched_debug cleanup
      sched: debug: track maximum 'slice'
      sched: uniform tunings
      sched: use constants if !CONFIG_SCHED_DEBUG
      sched: remove stat_gran
      sched: remove precise CPU load
      sched: remove precise CPU load calculations #2
      sched: track cfs_rq->curr on !group-scheduling too
      sched: cleanup: simplify cfs_rq_curr() methods
      sched: uninline __enqueue_entity()/__dequeue_entity()
      sched: speed up update_load_add/_sub()
      sched: clean up calc_weighted()
      sched: introduce se->vruntime
      sched: move sched_feat() definitions
      sched: optimize vruntime based scheduling
      sched: simplify check_preempt() methods
      sched: wakeup granularity fix
      sched: add se->vruntime debugging
      sched: add more vruntime statistics
      sched: debug: update exec_clock only when SCHED_DEBUG
      sched: remove wait_runtime limit
      sched: remove wait_runtime fields and features
      sched: x86: allow single-depth wchan output
      sched: fix delay accounting performance regression
      sched: prettify /proc/sched_debug output
      sched: enhance debug output
      sched: kernel/sched_fair.c whitespace cleanups
      sched: fair-group sched, cleanups
      sched: enable CONFIG_FAIR_GROUP_SCHED=y by default
      sched debug: BKL usage statistics
      sched: remove unneeded tunables
      sched debug: print settings
      sched debug: more width for parameter printouts
      sched: entity_key() fix
      sched: remove condition from set_task_cpu()
      sched: remove last_min_vruntime effect
      sched: undo some of the recent changes
      sched: fix place_entity()
      sched: fix sched_fork()
      sched: remove set_leftmost()
      sched: clean up schedstats, cnt -> count
      sched: cleanup, remove stale comment

Matthias Kaehlcke (1):
      sched: use list_for_each_entry_safe() in __wake_up_common()

Mike Galbraith (2):
      sched: fix SMP migration latencies
      sched: fix formatting of /proc/sched_debug

Peter Zijlstra (12):
      sched: simplify SCHED_FEAT_* code
      sched: new task placement for vruntime
      sched: simplify adaptive latency
      sched: clean up new task placement
      sched: add tree based averages
      sched: handle vruntime overflow
      sched: better min_vruntime tracking
      sched: add vslice
      sched debug: check spread
      sched: max_vruntime() simplification
      sched: clean up min_vruntime use
      sched: speed up and simplify vslice calculations

S.Çağlar Onur (1):
      sched debug: BKL usage statistics, fix

Srivatsa Vaddagiri (9):
      sched: group-scheduler core
      sched: revert recent removal of set_curr_task()
      sched: fix minor bug in yield
      sched: print nr_running and load in /proc/sched_debug
      sched: print &rq->cfs stats
      sched: clean up code under CONFIG_FAIR_GROUP_SCHED
      sched: add fair-user scheduler
      sched: group scheduler wakeup latency fix
      sched: group scheduler SMP migration fix

 arch/i386/Kconfig       |   11 
 fs/proc/base.c          |    2 
 include/linux/sched.h   |   55 ++-
 init/Kconfig            |   21 +
 kernel/delayacct.c      |    2 
 kernel/sched.c          |  577 +++++++++++++++++++++++++-------------
 kernel/sched_debug.c    |  250 +++++++++++-----
 kernel/sched_fair.c     |  718 +++++++++++++++++-------------------------------
 kernel/sched_idletask.c |    5 
 kernel/sched_rt.c       |   12 
 kernel/sched_stats.h    |   28 -
 kernel/sysctl.c         |   31 --
 kernel/user.c           |   43 ++
 13 files changed, 954 insertions(+), 801 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [git] CFS-devel, latest code
  2007-09-25 14:44 [git] CFS-devel, latest code Ingo Molnar
@ 2007-09-25 16:04 ` Srivatsa Vaddagiri
  2007-09-25 16:08   ` Srivatsa Vaddagiri
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
  1 sibling, 1 reply; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:04 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton

On Tue, Sep 25, 2007 at 04:44:43PM +0200, Ingo Molnar wrote:
> 
> The latest sched-devel.git tree can be pulled from:
>   
>    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
> 

This is required for it to compile.

---
 include/linux/sched.h |    1 +
 1 files changed, 1 insertion(+)

Index: current/include/linux/sched.h
===================================================================
--- current.orig/include/linux/sched.h
+++ current/include/linux/sched.h
@@ -1404,6 +1404,7 @@ extern unsigned int sysctl_sched_wakeup_
 extern unsigned int sysctl_sched_batch_wakeup_granularity;
 extern unsigned int sysctl_sched_child_runs_first;
 extern unsigned int sysctl_sched_features;
+extern unsigned int sysctl_sched_nr_latency;
 #endif
 
 extern unsigned int sysctl_sched_compat_yield;

-- 
Regards,
vatsa


* Re: [git] CFS-devel, latest code
  2007-09-25 16:04 ` Srivatsa Vaddagiri
@ 2007-09-25 16:08   ` Srivatsa Vaddagiri
  0 siblings, 0 replies; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton

On Tue, Sep 25, 2007 at 09:34:20PM +0530, Srivatsa Vaddagiri wrote:
> On Tue, Sep 25, 2007 at 04:44:43PM +0200, Ingo Molnar wrote:
> > 
> > The latest sched-devel.git tree can be pulled from:
> >   
> >    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
> > 
> 
> This is required for it to compile.
> 
> ---
>  include/linux/sched.h |    1 +
>  1 files changed, 1 insertion(+)
> 
> Index: current/include/linux/sched.h
> ===================================================================
> --- current.orig/include/linux/sched.h
> +++ current/include/linux/sched.h
> @@ -1404,6 +1404,7 @@ extern unsigned int sysctl_sched_wakeup_
>  extern unsigned int sysctl_sched_batch_wakeup_granularity;
>  extern unsigned int sysctl_sched_child_runs_first;
>  extern unsigned int sysctl_sched_features;
> +extern unsigned int sysctl_sched_nr_latency;
>  #endif
>  
>  extern unsigned int sysctl_sched_compat_yield;

and this:

---
 kernel/sched_debug.c |    1 -
 1 files changed, 1 deletion(-)

Index: current/kernel/sched_debug.c
===================================================================
--- current.orig/kernel/sched_debug.c
+++ current/kernel/sched_debug.c
@@ -210,7 +210,6 @@ static int sched_debug_show(struct seq_f
 #define PN(x) \
 	SEQ_printf(m, "  .%-40s: %Ld.%06ld\n", #x, SPLIT_NS(x))
 	PN(sysctl_sched_latency);
-	PN(sysctl_sched_min_granularity);
 	PN(sysctl_sched_wakeup_granularity);
 	PN(sysctl_sched_batch_wakeup_granularity);
 	PN(sysctl_sched_child_runs_first);

-- 
Regards,
vatsa


* [PATCH 0/3] More group scheduler related fixes
  2007-09-25 14:44 [git] CFS-devel, latest code Ingo Molnar
  2007-09-25 16:04 ` Srivatsa Vaddagiri
@ 2007-09-25 16:25 ` Srivatsa Vaddagiri
  2007-09-25 16:28   ` [PATCH 1/3] Fix coding style Srivatsa Vaddagiri
                     ` (3 more replies)
  1 sibling, 4 replies; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:25 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton

On Tue, Sep 25, 2007 at 04:44:43PM +0200, Ingo Molnar wrote:
> 
> The latest sched-devel.git tree can be pulled from:
>   
>    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
> 

Ingo,
	Few more patches follow on top of this latest sched-devel tree.

Pls consider for inclusion.

--
Regards,
vatsa


* [PATCH 1/3] Fix coding style
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
@ 2007-09-25 16:28   ` Srivatsa Vaddagiri
  2007-09-25 19:16     ` Ingo Oeser
  2007-09-25 16:33   ` [PATCH 2/3] Fix size bloat for !CONFIG_FAIR_GROUP_SCHED Srivatsa Vaddagiri
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:28 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton, randy.dunlap

Fix coding style issues reported by Randy Dunlap and others

Signed-off-by: Dhaval Giani <dhaval@linux.vnet.ibm.com>
Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

---
 init/Kconfig         |   14 +++++++-------
 kernel/sched_debug.c |    8 ++------
 2 files changed, 9 insertions(+), 13 deletions(-)

Index: current/init/Kconfig
===================================================================
--- current.orig/init/Kconfig
+++ current/init/Kconfig
@@ -282,11 +282,11 @@ config CPUSETS
 	  Say N if unsure.
 
 config FAIR_GROUP_SCHED
-	bool "Fair group cpu scheduler"
+	bool "Fair group CPU scheduler"
 	default y
 	depends on EXPERIMENTAL
 	help
-	  This feature lets cpu scheduler recognize task groups and control cpu
+	  This feature lets CPU scheduler recognize task groups and control CPU
 	  bandwidth allocation to such task groups.
 
 choice
@@ -294,11 +294,11 @@ choice
 	prompt "Basis for grouping tasks"
 	default FAIR_USER_SCHED
 
-	config FAIR_USER_SCHED
-		bool "user id"
-		help
-		  This option will choose userid as the basis for grouping
-		  tasks, thus providing equal cpu bandwidth to each user.
+config FAIR_USER_SCHED
+	bool "user id"
+	help
+	  This option will choose userid as the basis for grouping
+	  tasks, thus providing equal CPU bandwidth to each user.
 
 endchoice
 
Index: current/kernel/sched_debug.c
===================================================================
--- current.orig/kernel/sched_debug.c
+++ current/kernel/sched_debug.c
@@ -239,11 +239,7 @@ static int
 root_user_share_read_proc(char *page, char **start, off_t off, int count,
 				 int *eof, void *data)
 {
-	int len;
-
-	len = sprintf(page, "%d\n", init_task_grp_load);
-
-	return len;
+	return sprintf(page, "%d\n", init_task_grp_load);
 }
 
 static int
@@ -297,7 +293,7 @@ static int __init init_sched_debug_procf
 	pe->proc_fops = &sched_debug_fops;
 
 #ifdef CONFIG_FAIR_USER_SCHED
-	pe = create_proc_entry("root_user_share", 0644, NULL);
+	pe = create_proc_entry("root_user_cpu_share", 0644, NULL);
 	if (!pe)
 		return -ENOMEM;
 


-- 
Regards,
vatsa


* [PATCH 2/3] Fix size bloat for !CONFIG_FAIR_GROUP_SCHED
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
  2007-09-25 16:28   ` [PATCH 1/3] Fix coding style Srivatsa Vaddagiri
@ 2007-09-25 16:33   ` Srivatsa Vaddagiri
  2007-09-25 16:37   ` [PATCH 3/3] Fix other possible sources of latency issues Srivatsa Vaddagiri
  2007-09-25 18:32   ` [PATCH 0/3] More group scheduler related fixes Ingo Molnar
  3 siblings, 0 replies; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:33 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton

A recent fix to check_preempt_wakeup(), making it check for preemption
at higher levels of the group hierarchy, caused a size bloat for
!CONFIG_FAIR_GROUP_SCHED.

Fix the problem.

    text    data     bss     dec     hex filename
   42277   10598     320   53195    cfcb kernel/sched.o-before_this_patch
   42216   10598     320   53134    cf8e kernel/sched.o-after_this_patch


Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>


---
 kernel/sched_fair.c |   43 +++++++++++++++++++++++++------------------
 1 files changed, 25 insertions(+), 18 deletions(-)

Index: current/kernel/sched_fair.c
===================================================================
--- current.orig/kernel/sched_fair.c
+++ current/kernel/sched_fair.c
@@ -640,15 +640,21 @@ static inline struct cfs_rq *cpu_cfs_rq(
 #define for_each_leaf_cfs_rq(rq, cfs_rq) \
 	list_for_each_entry(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
 
-/* Do the two (enqueued) tasks belong to the same group ? */
-static inline int is_same_group(struct task_struct *curr, struct task_struct *p)
+/* Do the two (enqueued) entities belong to the same group ? */
+static inline int
+is_same_group(struct sched_entity *se, struct sched_entity *pse)
 {
-	if (curr->se.cfs_rq == p->se.cfs_rq)
+	if (se->cfs_rq == pse->cfs_rq)
 		return 1;
 
 	return 0;
 }
 
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return se->parent;
+}
+
 #else	/* CONFIG_FAIR_GROUP_SCHED */
 
 #define for_each_sched_entity(se) \
@@ -681,11 +687,17 @@ static inline struct cfs_rq *cpu_cfs_rq(
 #define for_each_leaf_cfs_rq(rq, cfs_rq) \
 		for (cfs_rq = &rq->cfs; cfs_rq; cfs_rq = NULL)
 
-static inline int is_same_group(struct task_struct *curr, struct task_struct *p)
+static inline int
+is_same_group(struct sched_entity *se, struct sched_entity *pse)
 {
 	return 1;
 }
 
+static inline struct sched_entity *parent_entity(struct sched_entity *se)
+{
+	return NULL;
+}
+
 #endif	/* CONFIG_FAIR_GROUP_SCHED */
 
 /*
@@ -775,8 +787,9 @@ static void yield_task_fair(struct rq *r
 static void check_preempt_wakeup(struct rq *rq, struct task_struct *p)
 {
 	struct task_struct *curr = rq->curr;
-	struct cfs_rq *cfs_rq = task_cfs_rq(curr), *pcfs_rq;
+	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	struct sched_entity *se = &curr->se, *pse = &p->se;
+	s64 delta;
 
 	if (unlikely(rt_prio(p->prio))) {
 		update_rq_clock(rq);
@@ -785,21 +798,15 @@ static void check_preempt_wakeup(struct 
 		return;
 	}
 
-	for_each_sched_entity(se) {
-		cfs_rq = cfs_rq_of(se);
-		pcfs_rq = cfs_rq_of(pse);
+	while (!is_same_group(se, pse)) {
+		se = parent_entity(se);
+		pse = parent_entity(pse);
+	}
 
-		if (cfs_rq == pcfs_rq) {
-			s64 delta = se->vruntime - pse->vruntime;
+	delta = se->vruntime - pse->vruntime;
 
-			if (delta > (s64)sysctl_sched_wakeup_granularity)
-				resched_task(curr);
-			break;
-		}
-#ifdef CONFIG_FAIR_GROUP_SCHED
-		pse = pse->parent;
-#endif
-	}
+	if (delta > (s64)sysctl_sched_wakeup_granularity)
+		resched_task(curr);
 }
 
 static struct task_struct *pick_next_task_fair(struct rq *rq)


-- 
Regards,
vatsa


* [PATCH 3/3] Fix other possible sources of latency issues
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
  2007-09-25 16:28   ` [PATCH 1/3] Fix coding style Srivatsa Vaddagiri
  2007-09-25 16:33   ` [PATCH 2/3] Fix size bloat for !CONFIG_FAIR_GROUP_SCHED Srivatsa Vaddagiri
@ 2007-09-25 16:37   ` Srivatsa Vaddagiri
  2007-09-25 18:32   ` [PATCH 0/3] More group scheduler related fixes Ingo Molnar
  3 siblings, 0 replies; 11+ messages in thread
From: Srivatsa Vaddagiri @ 2007-09-25 16:37 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton

There is a possibility that, because of a task of a group moving from
one CPU to another, the group may gain more CPU time than desired. See
http://marc.info/?l=linux-kernel&m=119073197730334 for details.

This is an attempt to fix that problem. Basically it simulates dequeue
of higher-level entities as if they were going to sleep. Similarly, it
simulates wakeup of higher-level entities as if they were waking up
from sleep.

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>

---
 kernel/sched_fair.c |    2 ++
 1 files changed, 2 insertions(+)

Index: current/kernel/sched_fair.c
===================================================================
--- current.orig/kernel/sched_fair.c
+++ current/kernel/sched_fair.c
@@ -715,6 +715,7 @@ static void enqueue_task_fair(struct rq 
 			break;
 		cfs_rq = cfs_rq_of(se);
 		enqueue_entity(cfs_rq, se, wakeup);
+		wakeup = 1;
 	}
 }
 
@@ -734,6 +735,7 @@ static void dequeue_task_fair(struct rq 
 		/* Don't dequeue parent if it has other entities besides us */
 		if (cfs_rq->load.weight)
 			break;
+		sleep = 1;
 	}
 }
 
-- 
Regards,
vatsa


* Re: [PATCH 0/3] More group scheduler related fixes
  2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
                     ` (2 preceding siblings ...)
  2007-09-25 16:37   ` [PATCH 3/3] Fix other possible sources of latency issues Srivatsa Vaddagiri
@ 2007-09-25 18:32   ` Ingo Molnar
  3 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2007-09-25 18:32 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: linux-kernel, Peter Zijlstra, Mike Galbraith, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton


* Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> wrote:

> On Tue, Sep 25, 2007 at 04:44:43PM +0200, Ingo Molnar wrote:
> > 
> > The latest sched-devel.git tree can be pulled from:
> >   
> >    git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
> > 
> 
> Ingo,
> 	Few more patches follow on top of this latest sched-devel tree.
> 
> Pls consider for inclusion.

thanks, applied.

	Ingo


* Re: [PATCH 1/3] Fix coding style
  2007-09-25 16:28   ` [PATCH 1/3] Fix coding style Srivatsa Vaddagiri
@ 2007-09-25 19:16     ` Ingo Oeser
  2007-09-25 20:47       ` Kyle Moffett
  2007-09-26  2:03       ` Dhaval Giani
  0 siblings, 2 replies; 11+ messages in thread
From: Ingo Oeser @ 2007-09-25 19:16 UTC (permalink / raw)
  To: vatsa
  Cc: Ingo Molnar, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Dhaval Giani, Dmitry Adamushko, Andrew Morton, randy.dunlap

On Tuesday 25 September 2007, Srivatsa Vaddagiri wrote:
> Index: current/kernel/sched_debug.c
> ===================================================================
> --- current.orig/kernel/sched_debug.c
> +++ current/kernel/sched_debug.c
> @@ -239,11 +239,7 @@ static int
>  root_user_share_read_proc(char *page, char **start, off_t off, int count,
>  				 int *eof, void *data)
>  {
> -	int len;
> -
> -	len = sprintf(page, "%d\n", init_task_grp_load);
> -
> -	return len;
> +	return sprintf(page, "%d\n", init_task_grp_load);
>  }
>  
>  static int
> @@ -297,7 +293,7 @@ static int __init init_sched_debug_procf
>  	pe->proc_fops = &sched_debug_fops;
>  
>  #ifdef CONFIG_FAIR_USER_SCHED
> -	pe = create_proc_entry("root_user_share", 0644, NULL);
> +	pe = create_proc_entry("root_user_cpu_share", 0644, NULL);
>  	if (!pe)
>  		return -ENOMEM;

What about moving this debug stuff under debugfs?
Please consider using the functions in <linux/debugfs.h>.
They compile into nothing if DEBUGFS is not compiled in,
and they already provide useful helpers for reading/writing
integers and booleans.
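For illustration, a debugfs version of this tunable could look roughly like the sketch below (the "sched" directory name, the initcall level, and exposing init_task_grp_load as a u32 are all assumptions, not code from the thread):

```c
#include <linux/debugfs.h>
#include <linux/init.h>

extern unsigned int init_task_grp_load;	/* assumed u32-sized global */

static int __init root_share_debugfs_init(void)
{
	struct dentry *dir = debugfs_create_dir("sched", NULL);

	if (!dir)
		return -ENOMEM;
	/* one helper call replaces the whole read_proc/write_proc pair */
	debugfs_create_u32("root_user_cpu_share", 0644, dir,
			   &init_task_grp_load);
	return 0;
}
late_initcall(root_share_debugfs_init);
```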

Best Regards

Ingo Oeser


* Re: [PATCH 1/3] Fix coding style
  2007-09-25 19:16     ` Ingo Oeser
@ 2007-09-25 20:47       ` Kyle Moffett
  2007-09-26  2:03       ` Dhaval Giani
  1 sibling, 0 replies; 11+ messages in thread
From: Kyle Moffett @ 2007-09-25 20:47 UTC (permalink / raw)
  To: Ingo Oeser
  Cc: vatsa, Ingo Molnar, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Dhaval Giani, Dmitry Adamushko, Andrew Morton, randy.dunlap

On Sep 25, 2007, at 15:16:20, Ingo Oeser wrote:
> On Tuesday 25 September 2007, Srivatsa Vaddagiri wrote:
>> @@ -297,7 +293,7 @@ static int __init init_sched_debug_procf
>>  	pe->proc_fops = &sched_debug_fops;
>>
>>  #ifdef CONFIG_FAIR_USER_SCHED
>> -	pe = create_proc_entry("root_user_share", 0644, NULL);
>> +	pe = create_proc_entry("root_user_cpu_share", 0644, NULL);
>>  	if (!pe)
>>  		return -ENOMEM;
>
> What about moving this debug stuff under debugfs?  Please consider  
> using the functions in <linux/debugfs.h>.  They compile into  
> nothing if DEBUGFS is not compiled in, and they already provide  
> useful helpers for reading/writing integers and booleans.

Umm, that's not a debugging thing.  It appears to be a tunable  
allowing you to configure what share of the total CPU UID 0 gets,  
which is likely to be useful on production systems; at least until  
better group-scheduling tools are produced.
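i.e. the expected use is plain runtime tuning from a shell (the file name matches the rename in patch 1/3 of this thread; treating the value as a load weight rather than a percentage is an assumption):

```shell
# inspect the weight currently assigned to UID 0's task group
cat /proc/root_user_cpu_share
# give root a larger share relative to other users
echo 2048 > /proc/root_user_cpu_share
```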

Cheers,
Kyle Moffett



* Re: [PATCH 1/3] Fix coding style
  2007-09-25 19:16     ` Ingo Oeser
  2007-09-25 20:47       ` Kyle Moffett
@ 2007-09-26  2:03       ` Dhaval Giani
  1 sibling, 0 replies; 11+ messages in thread
From: Dhaval Giani @ 2007-09-26  2:03 UTC (permalink / raw)
  To: Ingo Oeser
  Cc: vatsa, Ingo Molnar, linux-kernel, Peter Zijlstra, Mike Galbraith,
	Dmitry Adamushko, Andrew Morton, randy.dunlap

On Tue, Sep 25, 2007 at 09:16:20PM +0200, Ingo Oeser wrote:
> On Tuesday 25 September 2007, Srivatsa Vaddagiri wrote:
> > Index: current/kernel/sched_debug.c
> > ===================================================================
> > --- current.orig/kernel/sched_debug.c
> > +++ current/kernel/sched_debug.c
> > @@ -239,11 +239,7 @@ static int
> >  root_user_share_read_proc(char *page, char **start, off_t off, int count,
> >  				 int *eof, void *data)
> >  {
> > -	int len;
> > -
> > -	len = sprintf(page, "%d\n", init_task_grp_load);
> > -
> > -	return len;
> > +	return sprintf(page, "%d\n", init_task_grp_load);
> >  }
> >  
> >  static int
> > @@ -297,7 +293,7 @@ static int __init init_sched_debug_procf
> >  	pe->proc_fops = &sched_debug_fops;
> >  
> >  #ifdef CONFIG_FAIR_USER_SCHED
> > -	pe = create_proc_entry("root_user_share", 0644, NULL);
> > +	pe = create_proc_entry("root_user_cpu_share", 0644, NULL);
> >  	if (!pe)
> >  		return -ENOMEM;
> 
> What about moving this debug stuff under debugfs?
> Please consider using the functions in <linux/debugfs.h>.
> They compile into nothing if DEBUGFS is not compiled in,
> and they already provide useful helpers for reading/writing
> integers and booleans.
> 
Hi Ingo,

This is not debug stuff. It is a tunable to give the root user more
weight with respect to the other users.

-- 
regards,
Dhaval


end of thread, other threads:[~2007-09-26  2:06 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-09-25 14:44 [git] CFS-devel, latest code Ingo Molnar
2007-09-25 16:04 ` Srivatsa Vaddagiri
2007-09-25 16:08   ` Srivatsa Vaddagiri
2007-09-25 16:25 ` [PATCH 0/3] More group scheduler related fixes Srivatsa Vaddagiri
2007-09-25 16:28   ` [PATCH 1/3] Fix coding style Srivatsa Vaddagiri
2007-09-25 19:16     ` Ingo Oeser
2007-09-25 20:47       ` Kyle Moffett
2007-09-26  2:03       ` Dhaval Giani
2007-09-25 16:33   ` [PATCH 2/3] Fix size bloat for !CONFIG_FAIR_GROUP_SCHED Srivatsa Vaddagiri
2007-09-25 16:37   ` [PATCH 3/3] Fix other possible sources of latency issues Srivatsa Vaddagiri
2007-09-25 18:32   ` [PATCH 0/3] More group scheduler related fixes Ingo Molnar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox