public inbox for linux-kernel@vger.kernel.org
* [git pull request] scheduler updates
@ 2007-08-24 14:12 Ingo Molnar
  2007-08-24 18:09 ` Linus Torvalds
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-24 14:12 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes 8 commits, 3 of which are important: the most important 
change is a bugfix to the new task startup penalty code. This could 
explain the task-startup unpredictability problem reported by Al Boldi.

Then there's also a change/tweak that increases the default granularity: 
it's still well below human perception so should not be noticeable, but 
servers win a bit from less preemption of CPU-bound tasks. (this is also 
the first step towards eliminating HZ from the granularity default 
calculation.)

Plus a bonus-balance inconsistency has been fixed: the previous logic 
slightly inflated sleeper wait-runtime without a matching 
counter-balance on runners. (I found no noticeable or measurable impact, 
other than a ~5% improvement in hackbench performance [due to less 
preemption scheduling] and a slightly nicer looking /proc/sched_debug 
output when there are lots of sleepers.)
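
The inconsistency described above can be illustrated with a toy model
(Python, with made-up numbers and names - the real logic lives in
kernel/sched_fair.c and uses different units): crediting sleepers
without debiting runners makes total wait-runtime drift upward.

```python
# Toy model of the bonus-balance problem: if sleepers are credited
# wait-runtime on wakeup but runners are never debited a matching
# amount, the system-wide sum of wait-runtime inflates over time.
# All names and numbers here are illustrative, not the kernel's.

def credit_sleeper(tasks, sleeper, bonus, debit_runners):
    tasks[sleeper] += bonus
    if debit_runners:
        runners = [t for t in tasks if t != sleeper]
        for r in runners:
            tasks[r] -= bonus / len(runners)  # counter-balance

# Three tasks, zero accumulated wait-runtime each.
inflated = {"a": 0.0, "b": 0.0, "c": 0.0}
balanced = {"a": 0.0, "b": 0.0, "c": 0.0}

for _ in range(10):                      # ten wakeups of task "a"
    credit_sleeper(inflated, "a", 1.0, debit_runners=False)
    credit_sleeper(balanced, "a", 1.0, debit_runners=True)

print(sum(inflated.values()))            # drifts to 10.0
print(sum(balanced.values()))            # stays at 0.0
```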

Five other, low-impact changes: a group-scheduling fixlet from Bruce 
Ashfield, two nice simplifications from Peter Zijlstra to the 
bonus-balance code (which eliminate a 64-bit multiplication and shrink 
the code), a QOI improvement from Dmitry Adamushko to RR RT task 
preemption [not strictly required for .23 but this has been in my tree 
for some time already with no ill effects and the code is obviously 
correct] and a dead code elimination fix from Sven-Thorsten Dietrich.

Test-built and test-booted on x86-32 and x86-64, and it passed a few 
dozen "make randconfig" builds as well.

	Ingo

------------------>
Bruce Ashfield (1):
      sched: CONFIG_SCHED_GROUP_FAIR=y fixlet

Dmitry Adamushko (1):
      sched: optimize task_tick_rt() a bit

Ingo Molnar (3):
      sched: increase default granularity a bit
      sched: tidy up and simplify the bonus balance
      sched: fix startup penalty calculation

Peter Zijlstra (2):
      sched: simplify bonus calculation #1
      sched: simplify bonus calculation #2

Sven-Thorsten Dietrich (1):
      sched: simplify can_migrate_task()

 sched.c      |    6 ------
 sched_fair.c |   26 +++++++++++++++-----------
 sched_rt.c   |   11 ++++++++---
 3 files changed, 23 insertions(+), 20 deletions(-)


^ permalink raw reply	[flat|nested] 20+ messages in thread
* [git pull request] scheduler updates
@ 2007-08-28 11:32 Ingo Molnar
  2007-08-28 14:11 ` Mike Galbraith
  0 siblings, 1 reply; 20+ messages in thread
From: Ingo Molnar @ 2007-08-28 11:32 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, linux-kernel, Peter Zijlstra, Mike Galbraith


Linus, please pull the latest scheduler git tree from:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

no big changes - 5 small fixes and 1 small cleanup:

- the only bug with a human-noticeable effect is a bonus-limit oneliner
  bug found and fixed by Mike: Mike has done interactivity testing of
  -rc4 and found a relatively minor but noticeable Amarok
  song-switch-latency increase under high load. (This bug was a
  side-effect of the recent adaptive-latency patch - mea culpa.)

- there's a fix for a new_task_fair() bug found by Ting Yang: Ting has
  done a comprehensive review of the latest CFS code and found this
  problem which caused a random jitter of 1 jiffy of the key value for
newly started tasks. I saw no immediate effects from this fix (this
  amount of jitter is noise in most cases and the effect averages out
  over longer time), but it's worth having the fix in .23 nevertheless.

- then there's a converge-to-ideal-latency change that fixes a
  pre-existing property of CFS. This is not a bug per se but is still
  worth fixing for .23 - the before/after chew-max output in the
  changelog shows the clear benefits in consistency of scheduling.
  Affects the preemption slowpath only. Should be human-unnoticeable.
  [ We would not have this fix if it wasn't for the de-HZ-ification
    change of the tunables, so i'm glad we got rid of the HZ uglies in 
    one go - they just hid this real problem. ]

- Peter noticed a bug in the SCHED_FEAT_SKIP_INITIAL code - but this
  is off by default so it's a NOP on the default kernel.

- a small schedstat fix [NOP for defconfig]. This bug was there since
  the first CFS commit.

- a small task_new_fair() cleanup [NOP].

	Ingo

------------------>
Ingo Molnar (4):
      sched: make the scheduler converge to the ideal latency
      sched: fix wait_start_fair condition in update_stats_wait_end()
      sched: small schedstat fix
      sched: clean up task_new_fair()

Mike Galbraith (1):
      sched: fix sleeper bonus limit

Ting Yang (1):
      sched: call update_curr() in task_tick_fair()

 include/linux/sched.h |    1 +
 kernel/sched.c        |    1 +
 kernel/sched_fair.c   |   46 +++++++++++++++++++++++++++++++++++-----------
 3 files changed, 37 insertions(+), 11 deletions(-)


* [git pull request] scheduler updates
@ 2007-08-23 16:07 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-23 16:07 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes six fixes: an s390 task-accounting fix from Christian 
Borntraeger, sysctl directory permission fixes from Eric W. Biederman, 
an SMT/MC balancing fix from Suresh Siddha (we under-balanced) and 
another fix from Suresh for a debugging-tweak side-effect. Plus there's a 
sched_clock() quality fix for CPUs that stop the TSC in idle (acked by 
Len Brown) and a reniced-tasks fixlet.

the SMT/MC balancing fix has the highest risk - but since it causes 
slightly more balancing (instead of less balancing, which is the more 
risky action) it should be pretty safe. Key workloads still seem fine. 
Tested on 32-bit and 64-bit x86 and it has passed 200+ make randconfig 
build tests.

	Ingo

---------------->
Christian Borntraeger (1):
      sched: accounting regression since rc1

Eric W. Biederman (1):
      sched: fix sysctl directory permissions

Ingo Molnar (2):
      sched: sched_clock_idle_[sleep|wakeup]_event()
      sched: tweak the sched_runtime_limit tunable

Suresh Siddha (2):
      sched: fix broken SMT/MC optimizations
      sched: skip updating rq's next_balance under null SD

 arch/i386/kernel/tsc.c        |    1 
 drivers/acpi/processor_idle.c |   32 +++++++++++++++----
 fs/proc/array.c               |   44 +++++++++++++++++----------
 include/linux/sched.h         |    5 +--
 kernel/sched.c                |   68 +++++++++++++++++++++++++++++++-----------
 kernel/sched_debug.c          |    3 +
 6 files changed, 110 insertions(+), 43 deletions(-)

* [git pull request] scheduler updates
@ 2007-08-12 16:32 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-12 16:32 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel

Linus, please pull the latest scheduler git tree from:

  git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

three bugfixes:

- a nice fix from eagle-eye Oleg for a subtle typo in the balancing
  code, the effect of this bug was more aggressive idle balancing. This
  bug was introduced by one of the original CFS commits.

- a round of global->static fixes from Adrian Bunk - this change,
  besides the cleanup effect, chops 100 bytes off sched.o.

- Peter Zijlstra noticed a sleeper-bonus bug. I kept this patch under
  observation and testing this past week and saw no ill effects so far. 
  It could fix two suspected regressions. (It could improve Kasper
  Sandberg's workload and it could improve the sleeper/runner
  problem/bug Roman Zippel was seeing.)

test-built and test-booted on x86-32 and x86-64, and did a dozen 
randconfig builds for good measure (which uncovered two new build errors 
in latest -git).

Thanks,

	Ingo

--------------->
Adrian Bunk (1):
      sched: make global code static

Ingo Molnar (1):
      sched: fix sleeper bonus

Oleg Nesterov (1):
      sched: run_rebalance_domains: s/SCHED_IDLE/CPU_IDLE/

 include/linux/cpu.h |    2 --
 kernel/sched.c      |   48 ++++++++++++++++++++++++------------------------
 kernel/sched_fair.c |   12 ++++++------
 3 files changed, 30 insertions(+), 32 deletions(-)

* [git pull request] scheduler updates
@ 2007-08-10 21:22 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-10 21:22 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

this includes a regression fix and two minor fixes. The regression was 
noticed today by Arjan on the F8-Test1 kernel (which uses .23-rc2): if 
his laptop boots from battery then cpu_khz gets mis-detected and 
subsequently sched_clock() runs too fast - causing interactivity 
problems. This was a pre-existing sched_clock() regression and those 
sched_clock() problems are being addressed by Andi's cpufreq sched-clock 
patchset, but meanwhile i've fixed the regression by making the 
rq->clock logic more robust against this type of sched_clock() 
anomaly (it was already robust against time warps). Arjan tested the 
fix and it solved the problem. There's also a small 
kernel-address-information-leak fix for the SCHED_DEBUG case noticed by 
Arjan and a fix for a SCHED_GROUP_FAIR branch (not enabled upstream, but 
still working if enabled manually).
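
The defensive pattern described above - a runqueue clock that tolerates
an underlying time source warping or running away - can be sketched
like this (a simplified Python model; the actual logic is in
kernel/sched.c's rq-clock code, and the limits here are invented):

```python
# Simplified model of a robust rq->clock: advance by the delta the raw
# clock reports, but ignore backward warps and clamp implausibly large
# forward jumps to one tick. Names and limits are illustrative only.
TICK_NS = 1_000_000  # pretend 1 ms tick

class RobustClock:
    def __init__(self):
        self.clock = 0       # monotonic rq clock
        self.prev_raw = 0    # last raw sched_clock()-style reading

    def update(self, raw_ns):
        delta = raw_ns - self.prev_raw
        self.prev_raw = raw_ns
        if delta < 0:            # time warp backwards: ignore
            delta = 0
        elif delta > TICK_NS:    # clock running away: clamp to a tick
            delta = TICK_NS
        self.clock += delta
        return self.clock

rq = RobustClock()
rq.update(500_000)          # normal advance: +0.5 ms
rq.update(400_000)          # backward warp: no advance
rq.update(50_000_000)       # huge jump: clamped to one tick
print(rq.clock)
```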

	Ingo

---------------->
Ingo Molnar (3):
      sched: improve rq-clock overflow logic
      sched: fix typo in the FAIR_GROUP_SCHED branch
      sched debug: dont print kernel address in /proc/sched_debug

 sched.c       |   15 +++++++++++++--
 sched_debug.c |    2 +-
 sched_fair.c  |    7 +++----
 3 files changed, 17 insertions(+), 7 deletions(-)

* [git pull request] scheduler updates
@ 2007-08-08 20:30 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-08 20:30 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

the high commit count is scary, but it's all low-risk items: the main 
reason is the safe and gradual elimination of a widely used 64-bit 
function argument: the 64-bit "now" timestamp. About 40 of those commits 
are identity transformations that prepare the real change in a safe way, 
and the rest is obvious and safe as well. Besides the obvious and nice 
cleanup factor, these changes are necessary for 3 reasons: firstly they 
address the "there's too much 64-bit stuff in the scheduler" 
observation. Secondly, it's not directly visible but these changes also 
act as a correctness fix for an obscure (and minor) but 
not-too-pretty-to-fix accounting bug: idle_balance() had its own 
internal notion of 'now', separate from that of schedule(). Thirdly, 
this debloats sched.o quite significantly:

on 32-bit (smp, nondebug), it's almost 1k less code:

   text    data     bss     dec     hex filename
  34869    3066      20   37955    9443 sched.o.before
  33972    3066      24   37062    90c6 sched.o.after

but even on 64-bit platforms it's noticeable:

   text    data     bss     dec     hex filename
  28652    4162      24   32838    8046 sched.o.before
  28064    4162      24   32250    7dfa sched.o.after

and that's a speedup as well, because these parameters were passed all 
around the fastpath.
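
The shape of that refactoring - replacing a timestamp threaded through
every call with a clock field updated once per scheduling event - looks
roughly like this (a Python sketch with invented names; the kernel's
version caches the value in rq->clock):

```python
# Before: every helper takes a 'now' timestamp parameter, so each
# caller must obtain and forward it -- and two call chains can easily
# end up with two different notions of "now" (the idle_balance() bug).
# After: the runqueue caches its clock once per event; helpers read it.
# Names are illustrative, not the kernel's actual signatures.

class RunQueue:
    def __init__(self):
        self.clock = 0
        self.exec_start = 0
        self.sum_exec = 0

    def update_clock(self, now):
        """Done once, at the top of the tick/schedule path."""
        self.clock = now

    def update_curr(self):
        # Reads rq.clock instead of taking a 'u64 now' argument.
        self.sum_exec += self.clock - self.exec_start
        self.exec_start = self.clock

rq = RunQueue()
rq.update_clock(1000)
rq.update_curr()            # accounts 1000 ns of exec time
rq.update_clock(2500)
rq.update_curr()            # accounts another 1500 ns
print(rq.sum_exec)
```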

It was safest to do it this way (considering that we are post -rc2 
already); together in one commit, these changes would have been much 
less obvious to validate and apply. (It's of course all fully bisectable 
and every step builds and boots fine.)

besides this elimination of the 64-bit timestamp parameter passing 
between (almost all) scheduler functions, there are 8 other fixes that 
are not identity transformations:

 - Peter Williams reviewed the smpnice load-balancer and noticed a few 
   leftover items that are unnecessary now (i have re-tested 
   load-balancing behavior and it's all still fine)

 - binary sysctl cleanup from Alexey Dobriyan

 - two small accounting fixes

 - reniced-tasks fixes: a key-calculation fix (i re-checked nice-level
   workloads and this has no real impact [other than improving them 
   slightly] - the other side of the branch fixed up the effects of this 
   bug, otherwise we'd have noticed it sooner), and two rounding 
   precision improvements that act against error accumulation.

 - sleeper_bonus should be batched by sched_granularity and not by 
   stat_granularity. (this has almost no effect in practice, but is a 
   speedup: it pushes the only 64-bit division in CFS into a slowpath.)

then there are also two non-code documentation updates and minor 
cleanups and uninlining.

Nevertheless, to be safe i have also done over 200 'make randconfig; 
make -j bzImage' build tests:

   #define UTS_VERSION "#231 SMP Wed Aug 8 21:34:24 CEST 2007"

all of which passed fine. Booted (and extensively tested) on x86-32 and 
x86-64 as well, both UP and SMP - UP, 2-way to 8-way systems.

	Ingo

------------------>

Alexey Dobriyan (1):
      sched: remove binary sysctls from kernel.sched_domain

Josh Triplett (1):
      sched: mark print_cfs_stats static

Peter Williams (2):
      sched: simplify move_tasks()
      sched: fix bug in balance_tasks()

Thomas Voegtle (1):
      sched: mention CONFIG_SCHED_DEBUG in documentation

Ulrich Drepper (1):
      sched: clean up sched_getaffinity()

Ingo Molnar (55):
      sched: batch sleeper bonus
      sched: reorder update_cpu_load(rq) with the ->task_tick() call
      sched: uninline rq_clock()
      sched: schedule() speedup
      sched: clean up delta_mine
      sched: delta_exec accounting fix
      sched: document nice levels
      sched: add [__]update_rq_clock(rq)
      sched: eliminate rq_clock() use
      sched: remove rq_clock()
      sched: eliminate __rq_clock() use
      sched: remove __rq_clock()
      sched: remove 'now' use from assignments
      sched: remove the 'u64 now' parameter from print_cfs_rq()
      sched: remove the 'u64 now' parameter from update_curr()
      sched: remove the 'u64 now' parameter from update_stats_wait_start()
      sched: remove the 'u64 now' parameter from update_stats_enqueue()
      sched: remove the 'u64 now' parameter from __update_stats_wait_end()
      sched: remove the 'u64 now' parameter from update_stats_wait_end()
      sched: remove the 'u64 now' parameter from update_stats_curr_start()
      sched: remove the 'u64 now' parameter from update_stats_dequeue()
      sched: remove the 'u64 now' parameter from update_stats_curr_end()
      sched: remove the 'u64 now' parameter from __enqueue_sleeper()
      sched: remove the 'u64 now' parameter from enqueue_sleeper()
      sched: remove the 'u64 now' parameter from enqueue_entity()
      sched: remove the 'u64 now' parameter from dequeue_entity()
      sched: remove the 'u64 now' parameter from set_next_entity()
      sched: remove the 'u64 now' parameter from pick_next_entity()
      sched: remove the 'u64 now' parameter from put_prev_entity()
      sched: remove the 'u64 now' parameter from update_curr_rt()
      sched: remove the 'u64 now' parameter from ->enqueue_task()
      sched: remove the 'u64 now' parameter from ->dequeue_task()
      sched: remove the 'u64 now' parameter from ->pick_next_task()
      sched: remove the 'u64 now' parameter from pick_next_task()
      sched: remove the 'u64 now' parameter from ->put_prev_task()
      sched: remove the 'u64 now' parameter from ->task_new()
      sched: remove the 'u64 now' parameter from update_curr_load()
      sched: remove the 'u64 now' parameter from inc_load()
      sched: remove the 'u64 now' parameter from dec_load()
      sched: remove the 'u64 now' parameter from inc_nr_running()
      sched: remove the 'u64 now' parameter from dec_nr_running()
      sched: remove the 'u64 now' parameter from enqueue_task()
      sched: remove the 'u64 now' parameter from dequeue_task()
      sched: remove the 'u64 now' parameter from deactivate_task()
      sched: remove the 'u64 now' local variables
      sched debug: remove the 'u64 now' parameter from print_task()/_rq()
      sched: move the __update_rq_clock() call to scheduler_tick()
      sched: remove __update_rq_clock() call from entity_tick()
      sched: clean up set_curr_task_fair()
      sched: optimize activate_task()
      sched: optimize update_rq_clock() calls in the load-balancer
      sched: make the multiplication table more accurate
      sched: round a bit better
      sched: fix update_stats_enqueue() reniced codepath
      sched: refine negative nice level granularity

 Documentation/sched-design-CFS.txt  |    2 
 Documentation/sched-nice-design.txt |  108 +++++++++++
 include/linux/sched.h               |   20 --
 kernel/sched.c                      |  339 ++++++++++++++++++------------------
 kernel/sched_debug.c                |   16 -
 kernel/sched_fair.c                 |  212 ++++++++++------------
 kernel/sched_idletask.c             |   10 -
 kernel/sched_rt.c                   |   48 +----
 8 files changed, 421 insertions(+), 334 deletions(-)

* [git pull request] scheduler updates
@ 2007-08-02 16:08 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-08-02 16:08 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Andrew Morton, linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

these are all low-risk sched.o and task_struct debloating patches:

   text    data     bss     dec     hex filename
  37033    3066      20   40119    9cb7 sched.o.debug.before
  34840    3066      20   37926    9426 sched.o.debug.after

   text    data     bss     dec     hex filename
  28997    2726      16   31739    7bfb sched.o.before
  27991    2726      16   30733    780d sched.o.after

1006 bytes of code off in the nondebug case (this also speeds things up) 
and 2193 bytes of code off in the debug case. The size of sched.o is now 
1k smaller than it was before CFS on SMP, and within 1k of its old size 
on UP. (Further reduction is possible, there is another patch that 
shaves off another 500 bytes but it needs some more testing.)

also a nice smpnice cleanup/simplification from Peter Williams.

built and booted on x86-32 and x86-64, built allnoconfig and 
allyesconfig, and for good measure it also passed 38 iterations of 'make 
randconfig; make -j vmlinux' builds without any failure.

Thanks!

	Ingo

------------------->

Ingo Molnar (10):
      sched: remove cache_hot_time
      sched: calc_delta_mine(): use fixed limit
      sched: uninline calc_delta_mine()
      sched: uninline inc/dec_nr_running()
      sched: ->task_new cleanup
      sched: move load-calculation functions
      sched: add schedstat_set() API
      sched: use schedstat_set() API
      sched: reduce debug code
      sched: reduce task_struct size

Peter Williams (1):
      sched: tidy up left over smpnice code

 include/linux/sched.h    |   24 +++--
 include/linux/topology.h |    1 
 kernel/sched.c           |  193 +++++++++++++++++++++++------------------------
 kernel/sched_debug.c     |   22 +++--
 kernel/sched_fair.c      |   21 +----
 kernel/sched_rt.c        |   14 ---
 kernel/sched_stats.h     |    2 
 7 files changed, 134 insertions(+), 143 deletions(-)

* [git pull request] scheduler updates
@ 2007-07-26 12:08 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-26 12:08 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

there are 8 commits in this tree - only one modifies scheduling 
behavior (and even that one only slightly): a fix for a (minor) 
SMP-fairness-balancing problem.

There is one update/fix to the (upstream still unused) cpu_clock() API. 
[ this API will replace all the current (and buggy) in-tree uses of 
  sched_clock(). ]
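
The relationship described - cpu_clock() as a safer wrapper over a raw,
possibly-misbehaving sched_clock() - can be sketched as follows (Python,
purely illustrative; the in-kernel version is per-CPU and far more
subtle):

```python
# Sketch of a cpu_clock()-style wrapper: take a raw, fast time source
# that may occasionally jump backwards, and export a clock that callers
# can trust never to go back. Purely illustrative, not the kernel code.
class CpuClock:
    def __init__(self, raw_source):
        self.raw = raw_source
        self.last = 0

    def read(self):
        now = self.raw()
        if now < self.last:      # raw source warped backwards
            now = self.last      # never let callers see time go back
        self.last = now
        return now

samples = iter([100, 250, 200, 300])  # raw clock with one backward warp
clk = CpuClock(lambda: next(samples))
out = [clk.read() for _ in range(4)]
print(out)                            # the warp is filtered out
```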

There are also two small facilities added: preempt-notifiers (disabled 
and not user-selectable, hence a NOP), which future KVM and other 
virtualization work needs, and which they'd like to see offered by the 
upstream kernel. There's also the new 
above_background_load() inline function (unused at the moment). The 
presence of these two facilities causes no change at all to the kernel 
image:

    text    data     bss     dec     hex filename
 5573413  679332 3842048 10094793         9a08c9 vmlinux.before
 5573413  679332 3842048 10094793         9a08c9 vmlinux.after

so i thought this would be fine for a post-rc1 merge too.

There are also two small cleanup patches, a documentation update, and a 
debugging enhancement/helper: i've merged Nick's long-pending 
sysctl-domain-tree debug patch, which has meanwhile been in -mm for 3 
years. (It depends on CONFIG_SCHED_DEBUG and has no effect on 
scheduling by default even if enabled.)

passes allyesconfig, allnoconfig and distro build, boots and works fine 
on 32-bit and 64-bit x86 as well. (and is expected to work fine on every 
architecture)

	Ingo

-------------------->
Avi Kivity (1):
      sched: arch preempt notifier mechanism

Con Kolivas (1):
      sched: add above_background_load() function

Ingo Molnar (2):
      sched: increase SCHED_LOAD_SCALE_FUZZ
      sched: make cpu_clock() not use the rq clock

Joachim Deguara (1):
      sched: update Documentation/sched-stats.txt

Josh Triplett (1):
      sched: mark sysrq_sched_debug_show() static

Nick Piggin (1):
      sched: debug feature - make the sched-domains tree runtime-tweakable

Satoru Takeuchi (1):
      sched: remove unused rq->load_balance_class

 Documentation/sched-stats.txt |  195 ++++++++++++++++++++--------------------
 include/linux/preempt.h       |   44 +++++++++
 include/linux/sched.h         |   23 ++++
 kernel/Kconfig.preempt        |    3 
 kernel/sched.c                |  204 ++++++++++++++++++++++++++++++++++++++++--
 kernel/sched_debug.c          |    2 
 6 files changed, 365 insertions(+), 106 deletions(-)

* [git pull request] scheduler updates
@ 2007-07-19 16:50 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-19 16:50 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Andrew Morton


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

4 small changes only. It includes a cleanup (Ralf Baechle noticed that 
sched_cacheflush() is now unused), a new kernel-internal API for future 
use (cpu_clock(cpu)), and two SMP balancer fixes from Suresh Siddha. The 
balancer fixes are the only functional bits. Tested on x86-32bit and 
x86-64bit, build-tested on allyesconfig and allnoconfig. I re-checked a 
few SMP balancing scenarios due to the balancer fixes and kept those 
changes in my tree for a few days, and they are working fine here.

Thanks,

	Ingo

--------------->
Ingo Molnar (1):
      sched: implement cpu_clock(cpu) high-speed time source

Ralf Baechle (1):
      sched: sched_cacheflush is now unused

Suresh Siddha (2):
      sched: fix newly idle load balance in case of SMT
      sched: fix the all pinned logic in load_balance_newidle()

 arch/ia64/kernel/setup.c     |    9 ---------
 include/asm-alpha/system.h   |   10 ----------
 include/asm-arm/system.h     |   10 ----------
 include/asm-arm26/system.h   |   10 ----------
 include/asm-i386/system.h    |    9 ---------
 include/asm-ia64/system.h    |    1 -
 include/asm-m32r/system.h    |   10 ----------
 include/asm-mips/system.h    |   10 ----------
 include/asm-parisc/system.h  |   11 -----------
 include/asm-powerpc/system.h |   10 ----------
 include/asm-ppc/system.h     |   10 ----------
 include/asm-s390/system.h    |   10 ----------
 include/asm-sh/system.h      |   10 ----------
 include/asm-sparc/system.h   |   10 ----------
 include/asm-sparc64/system.h |   10 ----------
 include/asm-x86_64/system.h  |    9 ---------
 include/linux/sched.h        |    7 +++++++
 kernel/sched.c               |   31 ++++++++++++++++++++++++++-----
 18 files changed, 33 insertions(+), 154 deletions(-)

* [git pull request] scheduler updates
@ 2007-07-16  7:53 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-16  7:53 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel


Linus, please pull the latest scheduler git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

this includes low-risk changes that improve comments, remove dead code 
and fix whitespace/style problems.

Thanks!

	Ingo

--------------->
Ingo Molnar (5):
      sched: remove dead code from task_stime()
      sched: improve weight-array comments
      sched: document prio_to_wmult[]
      sched: prettify prio_to_wmult[]
      sched: fix up fs/proc/array.c whitespace problems

 fs/proc/array.c |   53 ++++++++++++++++++++++++++---------------------------
 kernel/sched.c  |   27 ++++++++++++++++++---------
 2 files changed, 44 insertions(+), 36 deletions(-)

* [git pull request] scheduler updates
@ 2007-07-11 19:38 Ingo Molnar
  0 siblings, 0 replies; 20+ messages in thread
From: Ingo Molnar @ 2007-07-11 19:38 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-kernel, Mike Galbraith, Andrew Morton


Linus, please pull the latest sched.git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched.git

It includes 5 small fixes from the CFS merge fallout: Mike noticed a 
typo in the prio_to_wmult[] lookup table (the visible effects of this 
bug were minor), plus a change allowing the scheduler to default to a 
granularity larger than 10 msecs - this should help larger boxes 
(without changing any of the tunings on smaller boxes). There are also 
show_tasks() output fixes and some small cleanups.

Thanks,

	Ingo

----------------------->
Mike Galbraith (1):
      sched: fix prio_to_wmult[] for nice 1

Ingo Molnar (4):
      sched: allow larger granularity
      sched: remove stale version info from kernel/sched_debug.c
      sched: fix show_task()/show_tasks() output
      sched: small topology.h cleanup

 include/linux/topology.h |    2 +-
 kernel/sched.c           |   30 ++++++++++++------------------
 kernel/sched_debug.c     |    2 +-
 3 files changed, 14 insertions(+), 20 deletions(-)

