From: Con Kolivas <kernel@kolivas.org>
To: linux-kernel@vger.kernel.org
Subject: BFS v0.311 CPU scheduler for 2.6.32
Date: Fri, 11 Dec 2009 11:24:18 +1100 [thread overview]
Message-ID: <200912111124.18118.kernel@kolivas.org> (raw)
This is to briefly announce the availability of the latest stable BFS CPU
scheduler, version 0.311, for the new stable Linux kernel, 2.6.32.
http://ck.kolivas.org/patches/bfs/2.6.32-sched-bfs-311.patch
Changes since the last announced version, 0.304, are trivial apart from minor
scalability improvements to make the most of SMT (hyperthreading) and to
improve NUMA performance. Here is the summary from the documentation of the
changes:
When choosing an idle CPU for a waking task, cache locality is determined
according to where the task last ran, and idle CPUs are then ranked from best
to worst to choose the most suitable idle CPU based on cache locality, NUMA
node locality and hyperthread sibling busyness. They are chosen in the
following order of preference (if idle):
* Same core, idle or busy cache, idle threads
* Other core, same cache, idle or busy cache, idle threads.
* Same node, other CPU, idle cache, idle threads.
* Same node, other CPU, busy cache, idle threads.
* Same core, busy threads.
* Other core, same cache, busy threads.
* Same node, other CPU, busy threads.
* Other node, other CPU, idle cache, idle threads.
* Other node, other CPU, busy cache, idle threads.
* Other node, other CPU, busy threads.
(In brief, for the average user this means that if you have a hyperthreaded
CPU, it will use real cores before hyperthread siblings.)
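The preference order above can be sketched as a simple ranking function. This
is an illustrative model only, not BFS's actual code: the struct, its field
names and locality_rank are invented for clarity, whereas the real scheduler
derives this information from the kernel's CPU topology data.

```c
#include <stdbool.h>

/* Illustrative model only -- not BFS's data structures. Describes one
 * candidate idle CPU relative to the CPU the waking task last ran on. */
struct cpu_candidate {
    bool same_core;    /* SMT sibling of the task's last CPU */
    bool same_cache;   /* different core, but shares its cache */
    bool same_node;    /* same NUMA node, different cache */
    bool cache_idle;   /* the candidate's cache is idle */
    bool threads_idle; /* its hyperthread siblings are idle */
};

/* Rank a candidate per the preference list above; 0 is best. Candidates
 * with idle threads beat busy-thread ones, except that idle threads on
 * another NUMA node rank below busy threads on the local node. */
static int locality_rank(const struct cpu_candidate *c)
{
    if (c->threads_idle) {
        if (c->same_core)  return 0;
        if (c->same_cache) return 1;
        if (c->same_node)  return c->cache_idle ? 2 : 3;
        return c->cache_idle ? 7 : 8;   /* other node, idle threads */
    }
    if (c->same_core)  return 4;
    if (c->same_cache) return 5;
    if (c->same_node)  return 6;
    return 9;                           /* other node, busy threads */
}
```

Choosing the idle CPU then reduces to picking the candidate with the lowest
rank.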
A quick summary of the features of BFS:
Excellent interactivity and responsiveness with a very simple, low-overhead
design (about 9000 lines less code than the mainline CPU scheduler)
Suited and scalable to any respectable number of CPUs, whether separate
socket, multicore and/or multithreaded, from 1 to many (although it won't
scale well to 4096).
Only one tunable (rr_interval), which almost never needs changing.
Features SCHED_IDLEPRIO and SCHED_ISO scheduling policies as well.
To run something idleprio, use schedtool like so:
schedtool -D -e make -j4
To run something isoprio, use schedtool like so:
schedtool -I -e amarok
Features subtick accounting for better CPU usage reporting.
More comprehensive documentation is included in the patch.
--
-ck