public inbox for linux-kernel@vger.kernel.org
* Re: [git] CFS-devel, latest code
@ 2007-10-02 19:49 Dmitry Adamushko
  2007-10-02 19:59 ` Dmitry Adamushko
  2007-10-04  7:41 ` Ingo Molnar
  0 siblings, 2 replies; 53+ messages in thread
From: Dmitry Adamushko @ 2007-10-02 19:49 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel


On 01/10/2007, Ingo Molnar <mingo@elte.hu> wrote:
> 
> * Dmitry Adamushko <dmitry.adamushko@gmail.com> wrote:
> 
> > here are a few patches on top of the recent 'sched-dev':
> >
> > (1) [ proposal ] make timeslices of SCHED_RR tasks constant and not
> > dependent on task's static_prio;
> >
> > (2) [ cleanup ] calc_weighted() is obsolete, remove it;
> >
> > (3) [ refactoring ] make dequeue_entity() / enqueue_entity()
> > and update_stats_dequeue() / update_stats_enqueue() look similar, structure-wise.
> 
> thanks - i've applied all 3 patches of yours.
> 
> > (compiles well, not functionally tested yet)
> 
> (it boots fine here and SCHED_RR seems to work - but i've not tested
> getinterval.)

/me is guilty... it was a bit broken :-/ sched_slice() returns nanoseconds, not jiffies, and with group scheduling the entity's own cfs_rq has to be used. Here is the fix.

results:

(SCHED_FIFO)

dimm@earth:~/storage/prog$ sudo chrt -f 10 ./rr_interval 
time_slice: 0 : 0

(SCHED_RR)

dimm@earth:~/storage/prog$ sudo chrt 10 ./rr_interval 
time_slice: 0 : 99984800

(SCHED_NORMAL)

dimm@earth:~/storage/prog$ ./rr_interval 
time_slice: 0 : 19996960

(SCHED_NORMAL + a cpu_hog of similar 'weight' on the same CPU, so it should be half of the previous result)

dimm@earth:~/storage/prog$ taskset 1 ./rr_interval 
time_slice: 0 : 9998480


Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>

---
diff --git a/kernel/sched.c b/kernel/sched.c
index d835cd2..cce22ff 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4745,11 +4745,12 @@ long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
 	else if (p->policy == SCHED_RR)
 		time_slice = DEF_TIMESLICE;
 	else {
+		struct sched_entity *se = &p->se;
 		unsigned long flags;
 		struct rq *rq;
 
 		rq = task_rq_lock(p, &flags);
-		time_slice = sched_slice(&rq->cfs, &p->se);
+		time_slice = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se));
 		task_rq_unlock(rq, &flags);
 	}
 	read_unlock(&tasklist_lock);

---




^ permalink raw reply related	[flat|nested] 53+ messages in thread
* Re: [git] CFS-devel, latest code
@ 2007-09-30 19:18 Dmitry Adamushko
  0 siblings, 0 replies; 53+ messages in thread
From: Dmitry Adamushko @ 2007-09-30 19:18 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel


and this one,

make dequeue_entity() / enqueue_entity() and update_stats_dequeue() /
update_stats_enqueue() look similar, structure-wise.

zero effect, functionally-wise.

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>

---
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 2674e27..ed75a04 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -366,7 +366,6 @@ update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static inline void
 update_stats_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	update_curr(cfs_rq);
 	/*
 	 * Mark the end of the wait period if dequeueing a
 	 * waiting task:
@@ -493,7 +492,7 @@ static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int wakeup)
 {
 	/*
-	 * Update the fair clock.
+	 * Update run-time statistics of the 'current'.
 	 */
 	update_curr(cfs_rq);
 
@@ -512,6 +511,11 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int wakeup)
 static void
 dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int sleep)
 {
+	/*
+	 * Update run-time statistics of the 'current'.
+	 */
+	update_curr(cfs_rq);
+
 	update_stats_dequeue(cfs_rq, se);
 	if (sleep) {
 #ifdef CONFIG_SCHEDSTATS
@@ -775,8 +779,7 @@ static void yield_task_fair(struct rq *rq)
 	if (likely(!sysctl_sched_compat_yield)) {
 		__update_rq_clock(rq);
 		/*
-		 * Dequeue and enqueue the task to update its
-		 * position within the tree:
+		 * Update run-time statistics of the 'current'.
 		 */
 		update_curr(cfs_rq);
 

---


* Re: [git] CFS-devel, latest code
@ 2007-09-30 19:15 Dmitry Adamushko
  2007-10-01  5:53 ` Mike Galbraith
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Adamushko @ 2007-09-30 19:15 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel



remove obsolete code -- calc_weighted()


Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>


---
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index fe4003d..2674e27 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -342,17 +342,6 @@ update_stats_wait_start(struct cfs_rq *cfs_rq,
struct sched_entity *se)
 	schedstat_set(se->wait_start, rq_of(cfs_rq)->clock);
 }
 
-static inline unsigned long
-calc_weighted(unsigned long delta, struct sched_entity *se)
-{
-	unsigned long weight = se->load.weight;
-
-	if (unlikely(weight != NICE_0_LOAD))
-		return (u64)delta * se->load.weight >> NICE_0_SHIFT;
-	else
-		return delta;
-}
-
 /*
  * Task is being enqueued - update stats:
  */

---


* Re: [git] CFS-devel, latest code
@ 2007-09-30 19:13 Dmitry Adamushko
  2007-10-01  6:11 ` Ingo Molnar
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Adamushko @ 2007-09-30 19:13 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: linux-kernel



here are a few patches on top of the recent 'sched-dev':

(1) [ proposal ] make timeslices of SCHED_RR tasks constant and not
dependent on task's static_prio;

(2) [ cleanup ] calc_weighted() is obsolete, remove it;

(3) [ refactoring ] make dequeue_entity() / enqueue_entity() 
and update_stats_dequeue() / update_stats_enqueue() look similar, structure-wise.

-----------------------------------

(1)

- make timeslices of SCHED_RR tasks constant and not
dependent on task's static_prio [1] ;
- remove obsolete code (timeslice related bits);
- make sched_rr_get_interval() return something more
meaningful [2] for SCHED_OTHER tasks.

[1] according to the following link, the current behavior is not compliant
with SUSv3 (not sure though, what is the reference for us :-)
http://lkml.org/lkml/2007/3/7/656

[2] the interval is dynamic and can be described as follows: "should a
task be one of the runnable tasks at this particular moment, it would
expect to run for this interval of time before being re-scheduled by the
scheduler tick".

all in all, the code size doesn't increase:

   text    data     bss     dec     hex filename
  46585    5102      40   51727    ca0f ../build/kernel/sched.o.before
  46553    5102      40   51695    c9ef ../build/kernel/sched.o

yeah, this seems to require task_rq_lock/unlock() but this is not a hot
path.

what do you think?

(compiles well, not functionally tested yet)

Almost-Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>

---
diff --git a/kernel/sched.c b/kernel/sched.c
index 0abed89..eba7827 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -104,11 +104,9 @@ unsigned long long __attribute__((weak)) sched_clock(void)
 /*
  * These are the 'tuning knobs' of the scheduler:
  *
- * Minimum timeslice is 5 msecs (or 1 jiffy, whichever is larger),
- * default timeslice is 100 msecs, maximum timeslice is 800 msecs.
+ * default timeslice is 100 msecs (used only for SCHED_RR tasks).
  * Timeslices get refilled after they expire.
  */
-#define MIN_TIMESLICE		max(5 * HZ / 1000, 1)
 #define DEF_TIMESLICE		(100 * HZ / 1000)
 
 #ifdef CONFIG_SMP
@@ -132,24 +130,6 @@ static inline void sg_inc_cpu_power(struct sched_group *sg, u32 val)
 }
 #endif
 
-#define SCALE_PRIO(x, prio) \
-	max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)
-
-/*
- * static_prio_timeslice() scales user-nice values [ -20 ... 0 ... 19 ]
- * to time slice values: [800ms ... 100ms ... 5ms]
- */
-static unsigned int static_prio_timeslice(int static_prio)
-{
-	if (static_prio == NICE_TO_PRIO(19))
-		return 1;
-
-	if (static_prio < NICE_TO_PRIO(0))
-		return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
-	else
-		return SCALE_PRIO(DEF_TIMESLICE, static_prio);
-}
-
 static inline int rt_policy(int policy)
 {
 	if (unlikely(policy == SCHED_FIFO) || unlikely(policy == SCHED_RR))
@@ -4759,6 +4739,7 @@ asmlinkage
 long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
 {
 	struct task_struct *p;
+	unsigned int time_slice;
 	int retval = -EINVAL;
 	struct timespec t;
 
@@ -4775,9 +4756,20 @@ long sys_sched_rr_get_interval(pid_t pid, struct timespec __user *interval)
 	if (retval)
 		goto out_unlock;
 
-	jiffies_to_timespec(p->policy == SCHED_FIFO ?
-				0 : static_prio_timeslice(p->static_prio), &t);
+	if (p->policy == SCHED_FIFO)
+		time_slice = 0;
+	else if (p->policy == SCHED_RR)
+		time_slice = DEF_TIMESLICE;
+	else {
+		unsigned long flags;
+		struct rq *rq;
+
+		rq = task_rq_lock(p, &flags);
+		time_slice = sched_slice(&rq->cfs, &p->se);
+		task_rq_unlock(rq, &flags);
+	}
 	read_unlock(&tasklist_lock);
+	jiffies_to_timespec(time_slice, &t);
 	retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0;
 out_nounlock:
 	return retval;
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index dbe4d8c..5c52881 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -206,7 +206,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p)
 	if (--p->time_slice)
 		return;
 
-	p->time_slice = static_prio_timeslice(p->static_prio);
+	p->time_slice = DEF_TIMESLICE;
 
 	/*
 	 * Requeue to the end of queue if we are not the only element

---


* Re: [git] CFS-devel, latest code
@ 2007-09-25 21:35 Dmitry Adamushko
  2007-09-27  7:56 ` Ingo Molnar
  0 siblings, 1 reply; 53+ messages in thread
From: Dmitry Adamushko @ 2007-09-25 21:35 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Peter Zijlstra, Peter Zijlstra, Mike Galbraith, linux-kernel


hmm... I think it'd be safer to have something like the following
change in place.

The thing is that __pick_next_entity() must never be called when
first_fair(cfs_rq) == NULL. It wouldn't be a problem if 'run_node'
were the very first field of 'struct sched_entity' (it's actually the second).

The 'nr_running != 0' check is _not_ enough, due to the fact that
'current' is not within the tree. Generic paths are ok (e.g. schedule(),
as put_prev_task() is called previously)... I'm more worried about e.g.
migration_call() -> CPU_DEAD_FROZEN -> migrate_dead_tasks(): if
'current' == rq->idle, no problem; but if it's one of the SCHED_NORMAL
tasks (or imagine some other use-cases in the future -- i.e. we should
not make the outer world dependent on internal details of the sched_fair
class) it may be a "Houston, we've got a problem" case.

it's +16 bytes to the ".text". Another variant is to make 'run_node' the
first data member of 'struct sched_entity', but an additional check
(se != NULL) would still be needed in pick_next_entity().

what do you think?


---
 diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index dae714a..33b2376 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -563,9 +563,12 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq)
 {
-	struct sched_entity *se = __pick_next_entity(cfs_rq);
-
-	set_next_entity(cfs_rq, se);
+	struct sched_entity *se = NULL;
+	
+	if (first_fair(cfs_rq)) {
+		se = __pick_next_entity(cfs_rq);
+		set_next_entity(cfs_rq, se);
+	}
 
 	return se;
 }

---



* [git] CFS-devel, latest code
@ 2007-09-25 14:44 Ingo Molnar
  2007-09-25 16:04 ` Srivatsa Vaddagiri
  0 siblings, 1 reply; 53+ messages in thread
From: Ingo Molnar @ 2007-09-25 14:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Mike Galbraith, Srivatsa Vaddagiri, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton


The latest sched-devel.git tree can be pulled from:
  
   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git
 
This is a quick iteration after yesterday's: a couple of group 
scheduling bugs were found/debugged and fixed by Srivatsa Vaddagiri and 
Mike Galbraith. There's also a yield fix from Dmitry Adamushko, a build 
fix from S.Çağlar Onur and Andrew Morton, a cleanup from Hiroshi 
Shimamoto and the usual stream of goodies from Peter Zijlstra. Rebased 
it to -rc8 as well.

there are no known regressions at the moment in the sched-devel.git 
codebase. (yay :)

	Ingo

----------------------------------------->
the shortlog relative to 2.6.23-rc8:
 
Dmitry Adamushko (9):
      sched: clean up struct load_stat
      sched: clean up schedstat block in dequeue_entity()
      sched: sched_setscheduler() fix
      sched: add set_curr_task() calls
      sched: do not keep current in the tree and get rid of sched_entity::fair_key
      sched: optimize task_new_fair()
      sched: simplify sched_class::yield_task()
      sched: rework enqueue/dequeue_entity() to get rid of set_curr_task()
      sched: yield fix

Hiroshi Shimamoto (1):
      sched: clean up sched_fork()

Ingo Molnar (44):
      sched: fix new-task method
      sched: resched task in task_new_fair()
      sched: small sched_debug cleanup
      sched: debug: track maximum 'slice'
      sched: uniform tunings
      sched: use constants if !CONFIG_SCHED_DEBUG
      sched: remove stat_gran
      sched: remove precise CPU load
      sched: remove precise CPU load calculations #2
      sched: track cfs_rq->curr on !group-scheduling too
      sched: cleanup: simplify cfs_rq_curr() methods
      sched: uninline __enqueue_entity()/__dequeue_entity()
      sched: speed up update_load_add/_sub()
      sched: clean up calc_weighted()
      sched: introduce se->vruntime
      sched: move sched_feat() definitions
      sched: optimize vruntime based scheduling
      sched: simplify check_preempt() methods
      sched: wakeup granularity fix
      sched: add se->vruntime debugging
      sched: add more vruntime statistics
      sched: debug: update exec_clock only when SCHED_DEBUG
      sched: remove wait_runtime limit
      sched: remove wait_runtime fields and features
      sched: x86: allow single-depth wchan output
      sched: fix delay accounting performance regression
      sched: prettify /proc/sched_debug output
      sched: enhance debug output
      sched: kernel/sched_fair.c whitespace cleanups
      sched: fair-group sched, cleanups
      sched: enable CONFIG_FAIR_GROUP_SCHED=y by default
      sched debug: BKL usage statistics
      sched: remove unneeded tunables
      sched debug: print settings
      sched debug: more width for parameter printouts
      sched: entity_key() fix
      sched: remove condition from set_task_cpu()
      sched: remove last_min_vruntime effect
      sched: undo some of the recent changes
      sched: fix place_entity()
      sched: fix sched_fork()
      sched: remove set_leftmost()
      sched: clean up schedstats, cnt -> count
      sched: cleanup, remove stale comment

Matthias Kaehlcke (1):
      sched: use list_for_each_entry_safe() in __wake_up_common()

Mike Galbraith (2):
      sched: fix SMP migration latencies
      sched: fix formatting of /proc/sched_debug

Peter Zijlstra (12):
      sched: simplify SCHED_FEAT_* code
      sched: new task placement for vruntime
      sched: simplify adaptive latency
      sched: clean up new task placement
      sched: add tree based averages
      sched: handle vruntime overflow
      sched: better min_vruntime tracking
      sched: add vslice
      sched debug: check spread
      sched: max_vruntime() simplification
      sched: clean up min_vruntime use
      sched: speed up and simplify vslice calculations

S.Çağlar Onur (1):
      sched debug: BKL usage statistics, fix

Srivatsa Vaddagiri (9):
      sched: group-scheduler core
      sched: revert recent removal of set_curr_task()
      sched: fix minor bug in yield
      sched: print nr_running and load in /proc/sched_debug
      sched: print &rq->cfs stats
      sched: clean up code under CONFIG_FAIR_GROUP_SCHED
      sched: add fair-user scheduler
      sched: group scheduler wakeup latency fix
      sched: group scheduler SMP migration fix

 arch/i386/Kconfig       |   11 
 fs/proc/base.c          |    2 
 include/linux/sched.h   |   55 ++-
 init/Kconfig            |   21 +
 kernel/delayacct.c      |    2 
 kernel/sched.c          |  577 +++++++++++++++++++++++++-------------
 kernel/sched_debug.c    |  250 +++++++++++-----
 kernel/sched_fair.c     |  718 +++++++++++++++++-------------------------------
 kernel/sched_idletask.c |    5 
 kernel/sched_rt.c       |   12 
 kernel/sched_stats.h    |   28 -
 kernel/sysctl.c         |   31 --
 kernel/user.c           |   43 ++
 13 files changed, 954 insertions(+), 801 deletions(-)

* [git] CFS-devel, latest code
@ 2007-09-24 21:45 Ingo Molnar
  2007-09-24 21:55 ` Andrew Morton
                   ` (4 more replies)
  0 siblings, 5 replies; 53+ messages in thread
From: Ingo Molnar @ 2007-09-24 21:45 UTC (permalink / raw)
  To: linux-kernel
  Cc: Peter Zijlstra, Mike Galbraith, Srivatsa Vaddagiri, Dhaval Giani,
	Dmitry Adamushko, Andrew Morton


The latest sched-devel.git tree can be pulled from:
 
  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched-devel.git

Lots of scheduler updates in the past few days, done by many people. 
Most importantly, the SMP latency problems reported and debugged by Mike 
Galbraith should be fixed for good now.

I've also included the latest and greatest group-fairness scheduling 
patch from Srivatsa Vaddagiri, which can now be used without containers 
as well (in a simplified, each-uid-gets-its-fair-share mode). This 
feature (CONFIG_FAIR_USER_SCHED) is now default-enabled.

Peter Zijlstra has been busy enhancing the math of the scheduler: we've 
got the new 'vslice' forked-task code that should enable snappier shell 
commands during load while still keeping kbuild workloads in check.

On my testsystems this codebase starts looking like something that could 
be merged into v2.6.24, so please give it a good workout and let us know 
if there's anything bad going on. (If this works out fine then i'll 
propagate these changes back into the CFS backport, for wider testing.)

	Ingo

----------------------------------------->
the shortlog relative to 2.6.23-rc7:

Dmitry Adamushko (8):
      sched: clean up struct load_stat
      sched: clean up schedstat block in dequeue_entity()
      sched: sched_setscheduler() fix
      sched: add set_curr_task() calls
      sched: do not keep current in the tree and get rid of sched_entity::fair_key
      sched: optimize task_new_fair()
      sched: simplify sched_class::yield_task()
      sched: rework enqueue/dequeue_entity() to get rid of set_curr_task()

Ingo Molnar (41):
      sched: fix new-task method
      sched: resched task in task_new_fair()
      sched: small sched_debug cleanup
      sched: debug: track maximum 'slice'
      sched: uniform tunings
      sched: use constants if !CONFIG_SCHED_DEBUG
      sched: remove stat_gran
      sched: remove precise CPU load
      sched: remove precise CPU load calculations #2
      sched: track cfs_rq->curr on !group-scheduling too
      sched: cleanup: simplify cfs_rq_curr() methods
      sched: uninline __enqueue_entity()/__dequeue_entity()
      sched: speed up update_load_add/_sub()
      sched: clean up calc_weighted()
      sched: introduce se->vruntime
      sched: move sched_feat() definitions
      sched: optimize vruntime based scheduling
      sched: simplify check_preempt() methods
      sched: wakeup granularity fix
      sched: add se->vruntime debugging
      sched: add more vruntime statistics
      sched: debug: update exec_clock only when SCHED_DEBUG
      sched: remove wait_runtime limit
      sched: remove wait_runtime fields and features
      sched: x86: allow single-depth wchan output
      sched: fix delay accounting performance regression
      sched: prettify /proc/sched_debug output
      sched: enhance debug output
      sched: kernel/sched_fair.c whitespace cleanups
      sched: fair-group sched, cleanups
      sched: enable CONFIG_FAIR_GROUP_SCHED=y by default
      sched debug: BKL usage statistics
      sched: remove unneeded tunables
      sched debug: print settings
      sched debug: more width for parameter printouts
      sched: entity_key() fix
      sched: remove condition from set_task_cpu()
      sched: remove last_min_vruntime effect
      sched: undo some of the recent changes
      sched: fix place_entity()
      sched: fix sched_fork()

Matthias Kaehlcke (1):
      sched: use list_for_each_entry_safe() in __wake_up_common()

Mike Galbraith (2):
      sched: fix SMP migration latencies
      sched: fix formatting of /proc/sched_debug

Peter Zijlstra (10):
      sched: simplify SCHED_FEAT_* code
      sched: new task placement for vruntime
      sched: simplify adaptive latency
      sched: clean up new task placement
      sched: add tree based averages
      sched: handle vruntime overflow
      sched: better min_vruntime tracking
      sched: add vslice
      sched debug: check spread
      sched: max_vruntime() simplification

Srivatsa Vaddagiri (7):
      sched: group-scheduler core
      sched: revert recent removal of set_curr_task()
      sched: fix minor bug in yield
      sched: print nr_running and load in /proc/sched_debug
      sched: print &rq->cfs stats
      sched: clean up code under CONFIG_FAIR_GROUP_SCHED
      sched: add fair-user scheduler

 arch/i386/Kconfig       |   11 
 include/linux/sched.h   |   45 +--
 init/Kconfig            |   21 +
 kernel/sched.c          |  547 +++++++++++++++++++++++++------------
 kernel/sched_debug.c    |  248 +++++++++++------
 kernel/sched_fair.c     |  692 +++++++++++++++++-------------------------------
 kernel/sched_idletask.c |    5 
 kernel/sched_rt.c       |   12 
 kernel/sched_stats.h    |    4 
 kernel/sysctl.c         |   22 -
 kernel/user.c           |   43 ++
 11 files changed, 906 insertions(+), 744 deletions(-)



Thread overview: 53+ messages
2007-10-02 19:49 [git] CFS-devel, latest code Dmitry Adamushko
2007-10-02 19:59 ` Dmitry Adamushko
2007-10-03  4:15   ` Srivatsa Vaddagiri
2007-10-04  7:40   ` Ingo Molnar
2007-10-04  7:41 ` Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2007-09-30 19:18 Dmitry Adamushko
2007-09-30 19:15 Dmitry Adamushko
2007-10-01  5:53 ` Mike Galbraith
2007-10-01  5:55   ` Ingo Molnar
2007-09-30 19:13 Dmitry Adamushko
2007-10-01  6:11 ` Ingo Molnar
2007-09-25 21:35 Dmitry Adamushko
2007-09-27  7:56 ` Ingo Molnar
2007-09-25 14:44 Ingo Molnar
2007-09-25 16:04 ` Srivatsa Vaddagiri
2007-09-25 16:08   ` Srivatsa Vaddagiri
2007-09-24 21:45 Ingo Molnar
2007-09-24 21:55 ` Andrew Morton
2007-09-24 21:59   ` Ingo Molnar
2007-09-25  0:08 ` Daniel Walker
2007-09-25  6:45   ` Ingo Molnar
2007-09-25 15:17     ` Daniel Walker
2007-09-25  6:10 ` Mike Galbraith
2007-09-25  7:35   ` Mike Galbraith
2007-09-25  8:33     ` Mike Galbraith
2007-09-25  8:53       ` Srivatsa Vaddagiri
2007-09-25  9:11         ` Srivatsa Vaddagiri
2007-09-25  9:15           ` Mike Galbraith
2007-09-25  9:12         ` Mike Galbraith
2007-09-25  9:13       ` Ingo Molnar
2007-09-25  9:17         ` Mike Galbraith
2007-09-25  9:47           ` Ingo Molnar
2007-09-25 10:02             ` Mike Galbraith
2007-09-26  8:04             ` Mike Galbraith
2007-09-28 21:46             ` Bill Davidsen
2007-09-25  9:44         ` Srivatsa Vaddagiri
2007-09-25  9:40           ` Ingo Molnar
2007-09-25 10:10             ` Ingo Molnar
2007-09-25 10:28               ` Srivatsa Vaddagiri
2007-09-25 10:36                 ` Ingo Molnar
2007-09-25 11:33                   ` Ingo Molnar
2007-09-25 14:48                     ` Srivatsa Vaddagiri
2007-09-25 12:51                   ` Srivatsa Vaddagiri
2007-09-25 13:35                     ` Mike Galbraith
2007-09-25 14:07                       ` Srivatsa Vaddagiri
2007-09-25 12:28                 ` Mike Galbraith
2007-09-25 12:54                   ` Mike Galbraith
2007-09-25  6:50 ` S.Çağlar Onur
2007-09-25  9:17   ` Ingo Molnar
2007-09-25  7:41 ` Andrew Morton
2007-09-25  8:43   ` Srivatsa Vaddagiri
2007-09-25  8:48     ` Andrew Morton
2007-09-25 11:00     ` Ingo Molnar
