* Re: a quest for a better scheduler
@ 2001-04-04 13:43 Hubertus Franke
2001-04-04 13:25 ` Ingo Molnar
` (2 more replies)
0 siblings, 3 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 13:43 UTC (permalink / raw)
To: Ingo Molnar, Mike Kravetz; +Cc: Fabio Riccardi, Linux Kernel List, lse-tech
This is an important point that Mike is raising and it also addresses a
critique that Ingo issued yesterday, namely interactivity and fairness.
The HP scheduler completely separates the per-CPU runqueues and does
not take preemption goodness or the like into account. This can lead to
unfair apportionment of CPU cycles, strong priority inversion and a
potential lack of interactivity.
Our MQ scheduler yields the same decision in most cases
(other than as defined by some race conditions on locks and counter members).
It is not clear that yielding the same decision as the current scheduler
is the ultimate goal to shoot for, but it allows comparison.
Another point to raise is that the current scheduler does an exhaustive
search for the "best" task to run. It touches every process in the
runqueue. This is ok if the runqueue length is limited to a very small
multiple of the #cpus. But that is not what high end server systems
encounter.
With the rising number of processors, lock contention can quickly become
a bottleneck. If we assume that load (#running-tasks) increases somewhat
linearly with #cpus, the problem gets even worse, because not only have I
increased the number of clients but also the lock hold time.
Clinging to the statement that #running-tasks ~ #cpus of course saves
you from that dilemma, but not everybody is signing on to this limitation.
MQ and priority-list help in two ways.
MQ reduces the average lock hold time because on average it only inspects
#running-tasks/#cpus tasks to make a local decision. It then goes on to
inspect (#cpus-1) data structures representing the next best tasks to run
on those remote cpus, all without holding the lock, thus substantially
reducing lock contention. Note we still search the entire set of runnable
tasks; however, we do it in a distributed, collaborative manner.
The only time we deviate from the current scheduler's decision is in the
case when two cpus have identified the same remote task as a target for
remote stealing. In that case one will win and the other cpu will continue
looking somewhere else, although there might have been another task
on that cpu to steal.
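To make the two-phase search concrete, here is a rough user-space model
of it; all names and data structures below are illustrative only, not the
actual patch code:

    #include <stddef.h>

    #define NCPUS 8

    struct task { int goodness; struct task *next; };

    struct runqueue {
        /* spinlock_t lock;  -- protects only the local list */
        struct task *head;   /* this cpu's runnable tasks */
        struct task *best;   /* cached best task, readable without the lock */
    };

    struct runqueue rq[NCPUS];

    /* Phase 1: best local task, under rq[cpu].lock only.
     * On average this walks #running-tasks/#cpus entries. */
    static struct task *pick_local(int cpu)
    {
        struct task *t, *best = NULL;
        for (t = rq[cpu].head; t; t = t->next)
            if (!best || t->goodness > best->goodness)
                best = t;
        return best;
    }

    /* Phase 2: glance at the other cpus' cached "best" hints without
     * taking their locks; an actual steal would revalidate under the
     * remote lock (not shown). */
    static struct task *pick_global(int cpu)
    {
        struct task *best = pick_local(cpu);
        int i;
        for (i = 0; i < NCPUS; i++) {
            struct task *r;
            if (i == cpu)
                continue;
            r = rq[i].best;          /* unlocked read, possibly stale */
            if (r && (!best || r->goodness > best->goodness))
                best = r;
        }
        return best;
    }

Only the first phase holds a runqueue lock; the cross-cpu comparison reads
per-queue hints that a real steal would revalidate under the remote lock.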
Priority-list schedulers (PRS) only help in reducing lock hold time,
which can result in some relief with respect to lock contention, but not
a whole lot. PRS can limit the lists it has to search based on the
PROC_CHANGE_PENALTY. It also keeps 0-counter tasks in a list that is never
inspected. One can even go further and put YIELD tasks in a separate list,
given that sys_sched_yield already does some optimizations.
The older version (12/00) posted on LSE is functionally equivalent to the
current scheduler.
I will put up another version this week that is based on a bitmask and
which is a bit more aggressive in that it ignores the MM goodness boost
of 1, which in my books is merely a tie breaker between two tasks of equal
goodness.
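For a flavor of what the bitmask version looks like: the idea reduces to a
list per goodness level plus a bitmap of non-empty levels, so picking a
task is a find-highest-set-bit plus a list-head removal. A minimal sketch
(illustrative names, not the posted patch):

    #define NLEVELS 32

    struct task { struct task *next; };

    struct prio_rq {
        unsigned int nonempty;        /* bit i set => list[i] has tasks */
        struct task *list[NLEVELS];   /* one list per goodness level */
        struct task *zero_counter;    /* expired tasks, never scanned */
        struct task *yielded;         /* SCHED_YIELD tasks, never scanned */
    };

    /* index of the highest non-empty level, -1 if idle */
    static int highest_level(struct prio_rq *q)
    {
        int i;
        if (!q->nonempty)
            return -1;
        for (i = NLEVELS - 1; i >= 0; i--)   /* fls() in a real kernel */
            if (q->nonempty & (1u << i))
                return i;
        return -1;
    }

    static struct task *pick(struct prio_rq *q)
    {
        struct task *t;
        int lvl = highest_level(q);
        if (lvl < 0)
            return 0;
        t = q->list[lvl];
        q->list[lvl] = t->next;
        if (!q->list[lvl])
            q->nonempty &= ~(1u << lvl);
        return t;
    }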
Beyond that we have done some work on cpu pooling, which is to identify
a set of cpus that form a scheduling set. We still consider all cpus for
preemption in reschedule_idle, but in schedule it is sufficient to only
schedule within one's own set. That again can limit lock hold time
with MQ, and we have seen some improvements. We also deploy load balancing.
To summarize, we have taken great care to preserve the current scheduler
semantics, and have laid out a path to relax some of those semantics
for further improvements.
I don't believe that the HP scheduler is sufficient, since it is lacking
load balancing, which naturally occurs in our MQ scheduler, and it lacks
the interactivity requirements that Ingo pointed out.
Most of these things are discussed in great detail in the writeups under
lse.sourceforge.net/scheduling. I also believe we show there that
MQ performance for low thread counts matches the vanilla case.
I further don't understand the obsession with keeping the scheduler simple.
If there are improvements, and I don't believe the MQ is all that
complicated, and it's well documented, why not put it in?
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Mike Kravetz <mkravetz@sequent.com> on 04/03/2001 10:47:00 PM
To: Fabio Riccardi <fabio@chromium.com>
cc: Mike Kravetz <mkravetz@sequent.com>, Ingo Molnar <mingo@elte.hu>,
Hubertus Franke/Watson/IBM@IBMUS, Linux Kernel List
<linux-kernel@vger.kernel.org>, Alan Cox <alan@lxorguk.ukuu.org.uk>
Subject: Re: a quest for a better scheduler
On Tue, Apr 03, 2001 at 05:18:03PM -0700, Fabio Riccardi wrote:
>
> I have measured the HP and not the "scalability" patch because the two do
> more or less the same thing and give me the same performance advantages,
> but the former is a lot simpler and I could port it with no effort on any
> recent kernel.
Actually, there is a significant difference between the HP patch and
the one I developed. In the HP patch, if there is a schedulable task
on the 'local' (current CPU) runqueue it will ignore runnable tasks on
other (remote) runqueues. In the multi-queue patch I developed, the
scheduler always attempts to make the same global scheduling decisions
as the current scheduler.
--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
2001-04-04 13:43 a quest for a better scheduler Hubertus Franke
@ 2001-04-04 13:25 ` Ingo Molnar
2001-04-04 13:34 ` Ingo Molnar
2001-04-04 15:44 ` Khalid Aziz
2 siblings, 0 replies; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 13:25 UTC (permalink / raw)
To: Hubertus Franke; +Cc: Mike Kravetz, Fabio Riccardi, Linux Kernel List, lse-tech
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> It is not clear that yielding the same decision as the current
> scheduler is the ultimate goal to shoot for, but it allows
> comparision.
obviously the current scheduler is not cast into stone, it never was,
never will be.
but determining whether the current behavior is possible in a different
scheduler design is surely a good metric of how flexible that different
scheduler design is.
Ingo
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
2001-04-04 13:43 a quest for a better scheduler Hubertus Franke
2001-04-04 13:25 ` Ingo Molnar
@ 2001-04-04 13:34 ` Ingo Molnar
2001-04-04 15:08 ` Andrea Arcangeli
2001-04-04 16:39 ` Kanoj Sarcar
2001-04-04 15:44 ` Khalid Aziz
2 siblings, 2 replies; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 13:34 UTC (permalink / raw)
To: Hubertus Franke; +Cc: Mike Kravetz, Fabio Riccardi, Linux Kernel List, lse-tech
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> Another point to raise is that the current scheduler does an exhaustive
> search for the "best" task to run. It touches every process in the
> runqueue. This is ok if the runqueue length is limited to a very small
> multiple of the #cpus. [...]
indeed. The current scheduler handles UP and SMP systems, up to 32
(perhaps 64) CPUs, efficiently. Aggressively NUMA systems need a different
approach anyway, in many other subsystems too; Kanoj is doing some
scheduler work in that area.
but the original claim was that the scheduling of thousands of runnable
processes (which is not equal to having thousands of sleeping processes)
must perform well - which is a completely different issue.
Ingo
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
2001-04-04 13:34 ` Ingo Molnar
@ 2001-04-04 15:08 ` Andrea Arcangeli
2001-04-04 16:50 ` [Lse-tech] " Kanoj Sarcar
2001-04-04 16:39 ` Kanoj Sarcar
1 sibling, 1 reply; 52+ messages in thread
From: Andrea Arcangeli @ 2001-04-04 15:08 UTC (permalink / raw)
To: Ingo Molnar
Cc: Hubertus Franke, Mike Kravetz, Fabio Riccardi, Linux Kernel List,
lse-tech
On Wed, Apr 04, 2001 at 03:34:22PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does an exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. This is ok if the runqueue length is limited to a very small
> > multiple of the #cpus. [...]
>
> indeed. The current scheduler handles UP and SMP systems, up to 32
> (perhaps 64) CPUs, efficiently. Aggressively NUMA systems need a different
> approach anyway, in many other subsystems too; Kanoj is doing some
> scheduler work in that area.
I haven't seen anything from Kanoj, but I did something myself for the wildfire:
ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
this is mostly a userspace issue, not really intended as a kernel optimization
(however it's also partly a kernel optimization). Basically it splits the load
of the numa machine into per-node load; there can be unbalanced load across the
nodes, but fairness is guaranteed inside each node. It's not extremely well
tested, but benchmarks were ok and it is at least certainly stable.
However, Ingo, consider that on a 32-way, if you don't have at least 32 tasks
running all the time _always_, you're really stupid to pay such big money for
nothing ;). So the fact that the scheduler is optimized for 1-2 tasks running
all the time is not nearly enough for those machines (and of course the
scheduling rate also automatically increases linearly with the number of
cpus). Now it's perfectly fine that we don't ask the embedded and desktop
guys to pay for that, but a kernel configuration option to select an
algorithm that scales would be a good idea IMHO. The above patch just adds
CONFIG_NUMA_SCHED. The scalable algorithm can fit into it and nobody will be
hurt by that (CONFIG_NUMA_SCHED cannot even be selected by x86 compiles).
Andrea
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 15:08 ` Andrea Arcangeli
@ 2001-04-04 16:50 ` Kanoj Sarcar
2001-04-04 17:16 ` Andrea Arcangeli
0 siblings, 1 reply; 52+ messages in thread
From: Kanoj Sarcar @ 2001-04-04 16:50 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Ingo Molnar, Hubertus Franke, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
>
> I haven't seen anything from Kanoj, but I did something myself for the wildfire:
>
> ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
>
> this is mostly a userspace issue, not really intended as a kernel optimization
> (however it's also partly a kernel optimization). Basically it splits the load
> of the numa machine into per-node load; there can be unbalanced load across the
> nodes, but fairness is guaranteed inside each node. It's not extremely well
> tested, but benchmarks were ok and it is at least certainly stable.
>
Just a quick comment. Andrea, unless your machine has some hardware
that implies pernode runqueues will help (nodelevel caches etc), I fail
to understand how this is helping you ... here's a simple theory though.
If your system is lightly loaded, your pernode queues are actually
implementing some sort of affinity, making sure processes stick to
cpus on nodes where they have allocated most of their memory. I am
not sure what the situation will be under huge loads though.
As I have mentioned to some people before, percpu/pernode/percpuset/global
runqueues probably all have their advantages and disadvantages, and their
own sweet spots. Wouldn't it be really neat if a system administrator
or performance expert could pick and choose what scheduler behavior he
wants, based on how the system is going to be used?
Kanoj
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 16:50 ` [Lse-tech] " Kanoj Sarcar
@ 2001-04-04 17:16 ` Andrea Arcangeli
2001-04-04 17:49 ` Kanoj Sarcar
0 siblings, 1 reply; 52+ messages in thread
From: Andrea Arcangeli @ 2001-04-04 17:16 UTC (permalink / raw)
To: Kanoj Sarcar
Cc: Ingo Molnar, Hubertus Franke, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
On Wed, Apr 04, 2001 at 09:50:58AM -0700, Kanoj Sarcar wrote:
> >
> > I haven't seen anything from Kanoj, but I did something myself for the wildfire:
> >
> > ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
> >
> > this is mostly a userspace issue, not really intended as a kernel optimization
> > (however it's also partly a kernel optimization). Basically it splits the load
> > of the numa machine into per-node load; there can be unbalanced load across the
> > nodes, but fairness is guaranteed inside each node. It's not extremely well
> > tested, but benchmarks were ok and it is at least certainly stable.
> >
>
> Just a quick comment. Andrea, unless your machine has some hardware
> that implies pernode runqueues will help (nodelevel caches etc), I fail
> to understand how this is helping you ... here's a simple theory though.
It helps by keeping the task in the same node if it cannot keep it in
the same cpu anymore.
Assume task A is sleeping and it last ran on cpu 8, node 2. It gets a wakeup,
it gets running, and for some reason cpu 8 is busy and there are other
cpus idle in the system. Now with the current scheduler it can be moved to any
cpu in the system; with numa sched applied we will first try to reschedule
it on the idle cpus of node 2, for example. The per-node runqueues are mainly
necessary to implement the heuristic.
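In pseudo-C the wakeup preference is simply the following; cpu_to_node()
and cpu_idle_p() are invented helpers standing in for the real topology
checks in the patch:

    #define NCPUS 32

    struct task { int last_cpu; };

    extern int cpu_to_node(int cpu);   /* invented topology helper */
    extern int cpu_idle_p(int cpu);    /* invented: is this cpu idle? */

    /* Prefer the last cpu, then any idle cpu in the task's last node,
     * and only then fall back to the rest of the machine. */
    static int pick_cpu_for_wakeup(struct task *p)
    {
        int node = cpu_to_node(p->last_cpu);
        int cpu;

        if (cpu_idle_p(p->last_cpu))
            return p->last_cpu;
        for (cpu = 0; cpu < NCPUS; cpu++)
            if (cpu_to_node(cpu) == node && cpu_idle_p(cpu))
                return cpu;
        for (cpu = 0; cpu < NCPUS; cpu++)
            if (cpu_idle_p(cpu))
                return cpu;
        return p->last_cpu;            /* nothing idle: stay put */
    }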
> cpus on nodes where they have allocated most of their memory on. I am
> not sure what the situation will be under huge loads though.
after all cpus are busy we try to reschedule only on the cpus of the local
node; that's why it can generate some unbalance, yes, but it will tend to
rebalance over time, because the less loaded node will end up with all its
tasks at zero counter first, and so then it will start getting tasks with
has_cpu 0 in the runqueues out of other nodes.
You may want to give it a try on your machines and see what difference it
makes, I'd be curious to know of course.
Andrea
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 17:16 ` Andrea Arcangeli
@ 2001-04-04 17:49 ` Kanoj Sarcar
2001-04-04 18:00 ` Andrea Arcangeli
0 siblings, 1 reply; 52+ messages in thread
From: Kanoj Sarcar @ 2001-04-04 17:49 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Ingo Molnar, Hubertus Franke, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
>
> It helps by keeping the task in the same node if it cannot keep it in
> the same cpu anymore.
>
> Assume task A is sleeping and it last ran on cpu 8, node 2. It gets a wakeup,
> it gets running, and for some reason cpu 8 is busy and there are other
> cpus idle in the system. Now with the current scheduler it can be moved to any
> cpu in the system; with numa sched applied we will first try to reschedule
> it on the idle cpus of node 2, for example. The per-node runqueues are mainly
> necessary to implement the heuristic.
>
Yes. But this is not the best solution, if I can add on to the example
and make some assumptions.
Imagine that most of the program's memory is on node 1, and it was scheduled
on node 2 cpu 8 momentarily (maybe because kswapd ran on node 1, or other
higher priority processes took over other cpus on node 1, etc).
Then, your patch will try to keep the process on node 2, which is not
necessarily the best solution. Of course, as I mentioned before, if
you have a node local cache on node 2, that cache might have been warmed
enough to make scheduling on node 2 a good option.
I am not saying there is a wrong or right answer; there are so many
possibilities, and everything probably works and breaks under different
circumstances.
Btw, while we are swapping patches, the patch at
http://oss.sgi.com/projects/numa/download/sched242.patch
tries to implement per-arch scheduling. The current scheduler behavior
is smp_arch_goodness() and smp_pick_cpu(), but the patch allows the
possibility for a specific platform to change that to something else.
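Conceptually the hook is just a pair of replaceable function pointers
defaulting to the stock SMP behavior. Only the two function names below
come from the patch description; the surrounding structure is a sketch:

    struct task_struct;

    /* the stock SMP behavior, as named in the message above */
    extern int smp_arch_goodness(struct task_struct *p, int cpu);
    extern int smp_pick_cpu(struct task_struct *p);

    struct sched_arch_ops {
        int (*arch_goodness)(struct task_struct *p, int cpu);
        int (*pick_cpu)(struct task_struct *p);
    };

    /* A NUMA platform overrides these at boot; everyone else keeps the
     * defaults, so non-NUMA configurations pay nothing. */
    static struct sched_arch_ops sched_ops = {
        smp_arch_goodness,
        smp_pick_cpu,
    };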
Linus has seen this patch, and agrees to it in principle. He does not
consider this 2.4 material though. Of course, I am completely open to
Ingo (or someone else) coming up with a different way of providing the
same freedom to arch specific code.
Kanoj
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 17:49 ` Kanoj Sarcar
@ 2001-04-04 18:00 ` Andrea Arcangeli
2001-04-05 11:13 ` Zdenek Kabelac
0 siblings, 1 reply; 52+ messages in thread
From: Andrea Arcangeli @ 2001-04-04 18:00 UTC (permalink / raw)
To: Kanoj Sarcar
Cc: Ingo Molnar, Hubertus Franke, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
On Wed, Apr 04, 2001 at 10:49:04AM -0700, Kanoj Sarcar wrote:
> Imagine that most of the program's memory is on node 1, it was scheduled
> on node 2 cpu 8 momentarily (maybe because kswapd ran on node 1, other
> higher priority processes took over other cpus on node 1, etc).
>
> Then, your patch will try to keep the process on node 2, which is not
> necessarily the best solution. Of course, as I mentioned before, if
> you have a node local cache on node 2, that cache might have been warmed
> enough to make scheduling on node 2 a good option.
Exactly, it made it a good option, and more importantly this heuristic can
only improve performance compared to the mainline scheduler.
In fact I tried rescheduling the task back to its original node and it
dropped performance. Anyway, I cannot claim to have done extensive research
on that; I believe that if we keep some more history of the node migrations
we may figure out the right time to push a task back to its original node,
but that was not obvious, and I wanted a simple solution to boost
performance under CPU-bound load to start with.
Andrea
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 13:34 ` Ingo Molnar
2001-04-04 15:08 ` Andrea Arcangeli
@ 2001-04-04 16:39 ` Kanoj Sarcar
2001-04-04 17:00 ` Andrea Arcangeli
1 sibling, 1 reply; 52+ messages in thread
From: Kanoj Sarcar @ 2001-04-04 16:39 UTC (permalink / raw)
To: mingo
Cc: Hubertus Franke, Mike Kravetz, Fabio Riccardi, Linux Kernel List,
lse-tech
>
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does an exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. This is ok if the runqueue length is limited to a very small
> > multiple of the #cpus. [...]
>
> indeed. The current scheduler handles UP and SMP systems, up to 32
> (perhaps 64) CPUs, efficiently. Aggressively NUMA systems need a different
> approach anyway, in many other subsystems too; Kanoj is doing some
> scheduler work in that area.
Actually, not _much_ work has been done in this area. Along with a bunch
of other people, I have some ideas about what needs to be done. For
example, for NUMA, we need to try hard to schedule a thread on the
node that has most of its memory (for no reason other than to decrease
memory latency). Independently, some NUMA machines build in multilevel
caches and local snoops, which also means that specific processors on
the same node as the last_processor are also good candidates to run
the process next.
To handle a single layer of shared caches, I have tried certain simple
things, mostly as hacks, but am not pleased with the results yet. More
testing needed.
Kanoj
>
> but the original claim was that the scheduling of thousands of runnable
> processes (which is not equal to having thousands of sleeping processes)
> must perform well - which is a completely different issue.
>
> Ingo
>
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 16:39 ` Kanoj Sarcar
@ 2001-04-04 17:00 ` Andrea Arcangeli
0 siblings, 0 replies; 52+ messages in thread
From: Andrea Arcangeli @ 2001-04-04 17:00 UTC (permalink / raw)
To: Kanoj Sarcar
Cc: mingo, Hubertus Franke, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
On Wed, Apr 04, 2001 at 09:39:23AM -0700, Kanoj Sarcar wrote:
> example, for NUMA, we need to try hard to schedule a thread on the
> node that has most of its memory (for no reason other than to decrease
> memory latency). Independently, some NUMA machines build in multilevel
> caches and local snoops, which also means that specific processors on
> the same node as the last_processor are also good candidates to run
> the process next.
yes. That will probably need to be optional and chosen by the architecture
at compile time too. The probably most important factor to consider is the
penalty of accessing remote memory. I think I can say that on all recent and
future machines with a small difference between local and remote memory (and
possibly, as you say, with a decent cache protocol able to snoop cacheline
data from the other cpus even if it's not dirty) it's much better to always
try to keep the task in its last node. My patch actually assumes recent
machines: it keeps the task in its last node if not on its last cpu, it
keeps doing memory allocation from there, and it forgets about the original
node where it started allocating memory from. This provided the best
performance under userspace CPU-bound load as far as I can tell, and it
also better distributes the load.
Kanoj, could you also have a look at the NUMA-related common code MM fixes I
did in this patch? I'd like to get them integrated (just skip the arch/alpha/*
and include/asm-alpha/* stuff while reading the patch; they're totally
orthogonal).
ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/00_alpha-numa-1
If you prefer, I can extract them into a more fine-grained patch by just
dropping the alpha stuff by hand.
Andrea
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
2001-04-04 13:43 a quest for a better scheduler Hubertus Franke
2001-04-04 13:25 ` Ingo Molnar
2001-04-04 13:34 ` Ingo Molnar
@ 2001-04-04 15:44 ` Khalid Aziz
2001-04-04 15:55 ` [Lse-tech] " Christoph Hellwig
2 siblings, 1 reply; 52+ messages in thread
From: Khalid Aziz @ 2001-04-04 15:44 UTC (permalink / raw)
To: Hubertus Franke
Cc: Ingo Molnar, Mike Kravetz, Fabio Riccardi, Linux Kernel List,
lse-tech
Hubertus Franke wrote:
>
> This is an important point that Mike is raising and it also addresses a
> critique that Ingo issued yesterday, namely interactivity and fairness.
> The HP scheduler completely separates the per-CPU runqueues and does
> not take preemption goodness or the like into account. This can lead to
> unfair apportionment of CPU cycles, strong priority inversion and a
> potential lack of interactivity.
>
> Our MQ scheduler yields the same decision in most cases
> (other than as defined by some race conditions on locks and counter members).
>
Let me stress that the HP scheduler is not meant to be a replacement for the
current scheduler. The HP scheduler patch allows the current scheduler
to be replaced by another scheduler by loading a module in special
cases. HP is providing three different loadable scheduler modules -
Processor sets, Constant time scheduler, and Multi-runqueue scheduler.
Each one of these is geared towards a specific requirement. I would not
suggest using any of these for a generalized case. The processor sets
scheduler is designed to make scheduling decisions on a per-cpu basis
and not a global basis.

All we are trying to do is to make the current scheduler modular so we CAN
load an alternate scheduling policy module in cases where the process mix
requires a different scheduling policy or the site policy requires a
different scheduling policy. An example of a specific site processor
allocation policy could be a site that runs a database server on an MP
machine along with a few other processes, where the administrator wants to
guarantee that the database server process always gets x percent of
processing time, or that one CPU be dedicated to just the database server.
A policy like this is not meant to be fair and, of course, is not a policy
we want to impose upon others.

The only HP changes I would put in the kernel sources for general release
would be the changes to the scheduler to allow such alternate (not
necessarily fair, or the fastest for benchmarks, a general process mix or
1000's of processes) policies to be loaded. When a policy module is not
loaded, the scheduler works exactly like the current scheduler, even after
the HP patches. There are people who could benefit from being able to load
alternate policy schedulers. Fabio Riccardi happens to be one of them :-)
Anyone who does not want to even allow an alternate scheduler module to be
loaded can simply compile the alternate scheduler support out, and that is
how I would expect most kernels to be compiled, especially the ones that
ship with various distributions. I would like the decision to include
support for an alternate scheduler to be made by sysadmins themselves for
their individual cases.
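To make the shape of this concrete, a registration interface along these
lines would be enough; note this is a hypothetical sketch for illustration,
not the actual interface in the HP patch:

    struct task_struct;

    /* hooks a policy module supplies; no module loaded means the
     * stock scheduler runs unchanged */
    struct sched_policy {
        const char *name;
        struct task_struct *(*pick_next)(int cpu);
        void (*enqueue)(struct task_struct *p);
        void (*dequeue)(struct task_struct *p);
    };

    extern int  register_scheduler(struct sched_policy *pol);
    extern void unregister_scheduler(struct sched_policy *pol);

    static struct sched_policy psets_policy = {
        "processor-sets",
        /* pick_next, enqueue, dequeue would go here */
    };

    int init_module(void)
    {
        return register_scheduler(&psets_policy);
    }

    void cleanup_module(void)
    {
        unregister_scheduler(&psets_policy);
    }

With the support compiled out, the dispatch through such hooks would
simply compile down to the stock code paths.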
--
Khalid
====================================================================
Khalid Aziz Linux Development Laboratory
(970)898-9214 Hewlett-Packard
khalid@fc.hp.com Fort Collins, CO
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [Lse-tech] Re: a quest for a better scheduler
2001-04-04 15:44 ` Khalid Aziz
@ 2001-04-04 15:55 ` Christoph Hellwig
0 siblings, 0 replies; 52+ messages in thread
From: Christoph Hellwig @ 2001-04-04 15:55 UTC (permalink / raw)
To: Khalid Aziz
Cc: Hubertus Franke, Ingo Molnar, Mike Kravetz, Fabio Riccardi,
Linux Kernel List, lse-tech
On Wed, Apr 04, 2001 at 09:44:22AM -0600, Khalid Aziz wrote:
> Let me stress that the HP scheduler is not meant to be a replacement for the
> current scheduler. The HP scheduler patch allows the current scheduler
> to be replaced by another scheduler by loading a module in special
> cases.
HP also has a simple mq patch that is _not_ integrated into the pluggable
scheduler framework, I have used it myself.
Christoph
--
Of course it doesn't work. We've performed a software upgrade.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-18 14:50 Yoav Etsion
0 siblings, 0 replies; 52+ messages in thread
From: Yoav Etsion @ 2001-04-18 14:50 UTC (permalink / raw)
To: linux-kernel; +Cc: Tsafrir Dan
I don't want to open any old wounds, but I just got a summary from a
colleague of mine, Dan Tsafrir, who measured the context switch overhead
on Linux with multiple processes.
You can find the document at:
http://www.cs.huji.ac.il/~dants/linux-2.2.18-context-switch.ps
The measurements were taken on a quad Pentium III 550MHz IBM NetFinity
server with 1GB RAM.
Even though this was done on the older 2.2.18 kernel, this is quite
interesting with regard to the current scheduler discussion.
Yoav Etsion
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: a quest for a better scheduler
@ 2001-04-06 13:15 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-06 13:15 UTC (permalink / raw)
To: Torrey Hoffman; +Cc: 'Timothy D. Witham', Linux Kernel List
Torrey, nothing to worry about (or at least not yet).
The new scheduler only replaces the current SMP scheduler, not the single
cpu scheduler. So the devices that you refer to are not affected at
all by these changes.
The desire for interactivity etc. led us to stick to the currently
proposed MQ scheduler semantics, namely not to completely isolate the
runqueues from each other (as e.g. the HP-MQ does), but to do some cross
checking to ensure that high priority tasks are run when they need to be.
First you get the same (similarly good or bad) scheduler semantics as the
current one, but at a significantly lower cost for medium to high end
loads (defined as #runnable tasks > #cpus).
Here is a simple approximate arithmetic. Assume the following:
- Let X be the fixed overhead to get through the scheduler (cur or MQ) once.
- Let Y be the overhead of touching a task to inspect its goodness.
- Let N be the number of tasks.
- Let C be the number of cpus.
The cost for the current scheduler is: X(cur) + N*Y
The cost for the MQ scheduler is:      X(MQ) + (N/C)*Y + C*Y
I assumed equal runqueue lengths here.
This ballpark math shows that, if X(cur) ~ X(MQ),
MQ is expected to be better when (N/C) + C < N, i.e.
if ( N > C*C/(C-1) )
C=2 N>4
C=3 N>4
C=4 N>5
C=5 N>6
C=6 N>7
C=7 N>8
C=8 N>9
Turns out that for C>2 this amounts to: for N > C+1, MQ will do better.
Now this is in the absence of runqueue lock contention. When runqueue lock
contention surfaces, this will shift further in favor of MQ.
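The table can be reproduced with a few lines of C; floor(C*C/(C-1)) gives
the bound shown:

    #include <stdio.h>

    /* Smallest N with (N/C) + C < N, i.e. N > C*C/(C-1),
     * assuming X(cur) ~ X(MQ) and equal runqueue lengths. */
    int main(void)
    {
        int c;
        for (c = 2; c <= 8; c++) {
            int n = (c * c) / (c - 1);   /* floor; "N > n" is the bound */
            printf("C=%d  N>%d\n", c, n);
        }
        return 0;
    }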
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
, OS-PIC (Chair)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Torrey Hoffman <torrey.hoffman@myrio.com>@vger.kernel.org on 04/05/2001
07:53:27 PM
Sent by: linux-kernel-owner@vger.kernel.org
To: "'Timothy D. Witham'" <wookie@osdlab.org>, Linux Kernel List
<linux-kernel@vger.kernel.org>
cc:
Subject: RE: a quest for a better scheduler
Timothy D. Witham wrote :
[...]
> I propose that we work on setting up a straight forward test harness
> that allows developers to quickly test a kernel patch against
> various performance yardsticks.
[...
(proposed large server testbeds)
...]
I like this idea, but could the testbeds also include some
resource-constrained "embedded" or appliance-style systems, and
include performance yardsticks for latency and responsiveness?
It would be unfortunate if work on a revised scheduler resulted
in Linux being a monster web server on 4-way systems, but having
lousy interactive performance on web pads, hand helds, and set top
boxes.
How about a 120Mhz Pentium with 32MB of RAM and a flash disk, a 200
Mhz PowerPC with 64 MB? Maybe a Transmeta web pad?
For the process load, something that tests responsiveness and
latency - how about a set of processes doing multicast network
transfers, CPU-intensive MPEG video and audio encode / decode,
and a test app that measures "user response", i.e. if X Windows
was running, would the mouse pointer move smoothly or jerkily?
The "better" scheduler for these applications would make sure that
multicast packets were never dropped, the MPEG decode never dropped
frames, and the "user interface" stayed responsive, etc.
Also, I would not mind a bit if the kernel had tuning options, either
in configuration or through /proc to adjust for these very different
situations.
Torrey Hoffman
Torrey.Hoffman@myrio.com
^ permalink raw reply [flat|nested] 52+ messages in thread
* RE: a quest for a better scheduler
@ 2001-04-05 23:53 Torrey Hoffman
0 siblings, 0 replies; 52+ messages in thread
From: Torrey Hoffman @ 2001-04-05 23:53 UTC (permalink / raw)
To: 'Timothy D. Witham', Linux Kernel List
Timothy D. Witham wrote :
[...]
> I propose that we work on setting up a straight forward test harness
> that allows developers to quickly test a kernel patch against
> various performance yardsticks.
[...
(proposed large server testbeds)
...]
I like this idea, but could the testbeds also include some
resource-constrained "embedded" or appliance-style systems, and
include performance yardsticks for latency and responsiveness?
It would be unfortunate if work on a revised scheduler resulted
in Linux being a monster web server on 4-way systems, but having
lousy interactive performance on web pads, hand helds, and set top
boxes.
How about a 120Mhz Pentium with 32MB of RAM and a flash disk, a 200
Mhz PowerPC with 64 MB? Maybe a Transmeta web pad?
For the process load, something that tests responsiveness and
latency - how about a set of processes doing multicast network
transfers, CPU-intensive MPEG video and audio encode / decode,
and a test app that measures "user response", i.e. if X Windows
was running, would the mouse pointer move smoothly or jerkily?
The "better" scheduler for these applications would make sure that
multicast packets were never dropped, the MPEG decode never dropped
frames, and the "user interface" stayed responsive, etc.
Also, I would not mind a bit if the kernel had tuning options, either
in configuration or through /proc to adjust for these very different
situations.
Torrey Hoffman
Torrey.Hoffman@myrio.com
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-05 23:01 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-05 23:01 UTC (permalink / raw)
To: Timothy D. Witham; +Cc: Linux Kernel List
Excellent idea....
Assuming that you have set up these environments already,
what would be a real treat is to get the average runqueue
length at a given time, for instance every second or so, while
running some of these more sophisticated server-oriented applications
that you mention.
From that we can see which of the various assumptions regarding
runqueue length holds up when the runqueue is not empty.
This would help the current discussion tremendously, as we don't
seem to be able to even agree on this.
A simple bincount could do. If you want a kernel patch that counts
every scheduling cycle, I'll write it.
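Such a bincount patch would be tiny; something like the following, sampled
once per schedule() (names invented here):

    #define MAX_BIN 64

    static unsigned long rq_len_hist[MAX_BIN];

    /* called once per scheduling cycle with the current runqueue
     * length; the histogram is read back via /proc or similar */
    static inline void count_rq_len(unsigned int nr_running)
    {
        if (nr_running >= MAX_BIN)
            nr_running = MAX_BIN - 1;
        rq_len_hist[nr_running]++;
    }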
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
, OS-PIC (Chair)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
"Timothy D. Witham" <wookie@osdlab.org>@vger.kernel.org on 04/05/2001
06:38:41 PM
Sent by: linux-kernel-owner@vger.kernel.org
To: Linux Kernel List <linux-kernel@vger.kernel.org>
cc: wookie@osdlab.org
Subject: Re: a quest for a better scheduler
I have been following this thread and thinking that everybody has some
truth in what they are saying, but in the absence of a repeatable test
environment there really isn't a way of arriving at a data-driven decision.
Given the following conditions:
1) The diversity of the problem sets that developers are working on results
in a real concern that another developer's solution to their performance
issue might result in a worsening of my performance situation.
2) Most of the developers have a given set of tests that they use when they
make changes, to determine if their change did what they wanted.
3) The Open Source Development Lab has the facilities for providing a large
scale testing environment on several different configurations that remain
the same over time.
I propose that we work on setting up a straightforward test harness that
allows developers to quickly test a kernel patch against various
performance yardsticks.
If we use the same set of hardware for the generation of the performance
data from patch to patch, there will be a correlation between the runs,
allowing for a real comparison of the different changes.
The following should be taken only as a starting point.
As for the base hardware configurations I propose:
2 way with 1 GB main memory and 2 IDE drives
4 way with 4 GB main memory and 16 disk drives
8 way with 8 GB main memory and 24 disk drives
The types of applications that I have heard people express concern are:
Web infrastructure:
Apache
TUX
Jabber
Corporate infrastructure:
NFS
raw TCP/IP performance
Samba
Database performance:
Raw storage I/O performance
OLTP workload
General usage:
compile speed (usually measured by kernel compile)
The OSDL also has the ability to make some of the "for fee" benchmarks
available for use for testing that is done onsite (network access to OSDL
machines counts as onsite), so that people don't have to purchase individual
copies. Things like SPECMAIL2001, SPECSFS2.0 and SPECWEB99 come to mind.
Comments, suggestions, volunteers?
--
Timothy D. Witham - Lab Director - wookie@osdlab.org
Open Source Development Lab Inc - A non-profit corporation
15275 SW Koll Parkway - Suite H - Beaverton OR, 97006
(503)-626-2455 x11 (office) (503)-702-2871 (cell)
(503)-626-2455 (fax)
On Wed, Apr 04, 2001 at 03:54:54PM -0700, Christopher Smith wrote:
> --On Wednesday, April 04, 2001 15:16:32 -0700 Tim Wright <timw@splhi.com>
> wrote:
> > On Wed, Apr 04, 2001 at 03:23:34PM +0200, Ingo Molnar wrote:
> >> nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
> >> You are playing word games by suggesting that the current behavior
> >> prefers 'low end'. 'thousands of runnable processes' is not 'high end'
> >> at all, it's 'broken end'. Thousands of runnable processes are the sign
> >> of a broken application design, and 'fixing' the scheduler to perform
> >> better in that case is just fixing the symptom. [changing the scheduler
> >> to perform better in such situations is possible too, but all solutions
> >> proposed so far had strings attached.]
> >
> > Ingo, you continue to assert this without giving much evidence to back it
> > up. All the world is not a web server. If I'm running a large OLTP
> > database with thousands of clients, it's not at all unreasonable to
> > expect periods where several hundred (forget the thousands) want to be
> > serviced by the database engine. That sounds like hundreds of schedulable
> > entities, be they processes or threads or whatever. This sort of load is
> > regularly run on machines with 16-64 CPUs.
>
> Actually, it's not just OLTP; anytime you are doing time sharing between
> hundreds of users (something POSIX systems are supposed to be good at)
> this will happen.
>
> > Now I will admit that it is conceivable that you can design an
> > application that finds out how many CPUs are available, creates threads
> > to match that number and tries to divvy up the work between them using
> > some combination of polling and asynchronous I/O etc. There are, however,
> > a number of problems with this approach:
>
> Actually, one way to semi-support this approach is to implement
> many-to-many threads as per the Solaris approach. This also requires
> significant hacking of both the kernel and the runtime, and certainly is
> significantly more error prone than trying to write a flexible scheduler.
>
> One problem you didn't highlight that even the above case does not happily
> identify is that for security reasons you may very well need each user's
> requests to take place in a different process. If you don't, then you have
> to implement a very well tested and secure user-level security mechanism
> to ensure things like privacy (above and beyond the time-sharing).
>
> The world is filled with a wide variety of types of applications, and
> unless you know two programming approaches are functionally equivalent
> (and event driven/polling I/O vs. tons of running processes are NOT), you
> shouldn't say one approach is "broken". You could say it's a "broken"
> approach to building web servers. Unfortunately, things like kernels and
> standard libraries should work well in the general case.
>
> --Chris
--
Timothy D. Witham - Lab Director - wookie@osdlab.org
Open Source Development Lab Inc - A non-profit corporation
15275 SW Koll Parkway - Suite H - Beaverton OR, 97006
(503)-626-2455 x11 (office) (503)-702-2871 (cell)
(503)-626-2455 (fax)
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-04 19:06 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 19:06 UTC (permalink / raw)
To: Mark Hahn; +Cc: Linux Kernel List, lse-tech
I'll give you a concrete example:
Running DB2 on an SMP system.
In DB2 there is a process/thread pool that is sized based
on memory and numcpus. People tell me that the size of this pool
is in the order of 100s for an 8-way system with a reasonably
sized database. These <maxagents> determine the number of agents
that can simultaneously execute an SQL statement.
Requests are flying in for transactions (e.g. driven by TPC-W like
applications). The agents are grabbed from the pool and concurrently
fire the SQL transactions.
Assuming that there is enough concurrency in the database, there is
no reason to believe that the majority of those active agents are
not effectively running. TPC-W loads have observed 100s of active
transactions at a time.
Of course, limiting the number of agents would reduce the number of
concurrently running tasks, but it would limit the responsiveness of
the system. Implementing a database in the kernel a la TUX doesn't
seem to be the right approach either (complexity, fault containment, ...).
Hope that is one example people accept.
I can dig up some information on WebSphere applications.
I'd love to hear from some other applications that fall into
a similar category as the above and substantiate a bit the need
for 100s of running processes, without claiming that the
application is broken.
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
, OS-PIC (Chair)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Mark Hahn <hahn@coffee.psychology.mcmaster.ca> on 04/04/2001 02:28:42 PM
To: Hubertus Franke/Watson/IBM@IBMUS
cc:
Subject: Re: a quest for a better scheduler
> ok if the runqueue length is limited to a very small multiple of the #cpus.
> But that is not what high end server systems encounter.
do you have some example of this in mind? so far, noone has
actually produced an example of a "high end" server that has
long runqueues.
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-04 17:17 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 17:17 UTC (permalink / raw)
To: Davide Libenzi; +Cc: Linux Kernel List
Well put. This is how we can eliminate searching all bins or lists, and
that's how we do it under
http://lse.sourceforge.net/scheduling/2.4.1-pre8-prioSched.
If you have a list per priority level, then you can even pick the first one
you find if it's on the same level. That's what I tried in a more recent
implementation. I also combined that with using a bitmask to represent
non-empty lists.
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
, OS-PIC (Chair)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Davide Libenzi <davidel@xmailserver.org>@ewok.dev.mycio.com on 04/04/2001
12:12:54 PM
Sent by: davidel@ewok.dev.mycio.com
To: Ingo Molnar <mingo@elte.hu>
cc: Linus Torvalds <torvalds@transmeta.com>, Alan Cox
<alan@lxorguk.ukuu.org.uk>, Linux Kernel List
<linux-kernel@vger.kernel.org>, Hubertus Franke/Watson/IBM@IBMUS,
Mike Kravetz <mkravetz@sequent.com>, Fabio Riccardi
<fabio@chromium.com>
Subject: Re: a quest for a better scheduler
On 04-Apr-2001 Ingo Molnar wrote:
>
> On Tue, 3 Apr 2001, Fabio Riccardi wrote:
>
>> I've spent my afternoon running some benchmarks to see if MQ patches
>> would degrade performance in the "normal case".
>
> no doubt priority-queue can run almost as fast as the current scheduler.
> What i'm worried about is the restriction of the 'priority' of processes,
> it cannot depend on previous processes (and other 'current state')
> anymore.
>
> to so we have two separate issues:
>
>#1: priority-queue: has the fundamental goodness() design limitation.
>
>#2: per-CPU-runqueues: changes semantics, makes scheduler less
> effective due to nonglobal decisions.
>
> about #1: while right now the prev->mm rule appears to be a tiny issue (it
> might not affect performance significantly), forbidding it by hardcoding
> the assumption into data structures is a limitation of *future* goodness()
> functions. Eg. with the possible emergence of CPU-level threading and
> other new multiprocessing technologies, this could be a *big* mistake.
This is not correct Ingo. I haven't seen the HP code but if You store
processes in slots S :

S = FS( goodness(p, p->processor, p->mm) )

and You start scanning from the higher slots, as soon as you find a task
with a goodness G' that is equal to the max goodness in the slot, You can
choose that process to run.
Again, if You haven't found such a goodness during the slot scan but You've
found a task with a goodness G' :

G' >= SG - DD

where :

SG = max slot goodness
DD = SG(i) - SG(i - 1)

You can select that task as the next to spin.
This was the behaviour that was implemented in my scheduler patch about 2
years ago.
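In C, that early-exit rule reads roughly as follows; slot_max_goodness()
is an invented helper standing in for the per-slot SG(i) bookkeeping:

    #include <stddef.h>

    #define NSLOTS 32

    struct task { int goodness; struct task *next; };

    extern int slot_max_goodness(int i);   /* SG(i), tracked per slot */

    /* Scan from the highest slot down.  A task whose goodness G'
     * reaches the slot ceiling SG is taken immediately; otherwise any
     * G' >= SG - DD, with DD = SG(i) - SG(i-1), is good enough. */
    static struct task *pick(struct task *slots[NSLOTS])
    {
        struct task *fallback = NULL;
        int i;
        for (i = NSLOTS - 1; i > 0; i--) {   /* slot 0 handled apart */
            int sg = slot_max_goodness(i);
            int dd = sg - slot_max_goodness(i - 1);
            struct task *t;
            for (t = slots[i]; t; t = t->next) {
                if (t->goodness == sg)
                    return t;                /* can't do better */
                if (!fallback && t->goodness >= sg - dd)
                    fallback = t;
            }
            if (fallback)
                return fallback;
        }
        return fallback;
    }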
Besides this, I think that with such loads We've more serious problems to
face inside the kernel.
- Davide
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-04 15:36 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 15:36 UTC (permalink / raw)
To: mingo; +Cc: Linux Kernel List, lse-tech
You imply that high end means thousands of processes,
simply because we have shown that in our graphs as an
asymptotic end.
No, it could mean 5*#cpus, and that is not all that absurd.
This could happen with a spike in demand.
TUX is not the greatest example to use, because it does
static webpage serving and is hence tied into the
file cache. If you move up the food chain, where middleware
is active and things are a bit more complicated than
sending stuff out of the filecache, having a bunch of threads
hanging around to deal with the spike in demand is common
practice, although you think it's bad.
Now coming again to MQ (forget about the priority list from
now on).
When we scan either the local or the realtime queues, we do
use the goodness value. So we have the same flexibility as the
current scheduler when it comes to goodness() flexibility and
future improvements.
For remote stealing we do use na_goodness to compare
for a better process to steal. Hence we would lose the "+1"
information here; nevertheless, once a decision
has been made, we still use preemption verification with goodness.
Either way, being off by "+1" for regular tasks once in a while
is no big deal, because this problem already exists today.
While walking the runqueue, another processor can either update
the counter value of a task (ok, that's covered by can_schedule)
or can run recalculate, which makes the decision one is
about to make wrong from the standpoint of always running the best.
But that's ok, because counter, nice etc. are approximations
anyway. Throw in PROC_CHANGE_PENALTY and you have a few knobs
that are used to control interactivity and throughput.
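For reference, the goodness() under discussion is approximately the
following, paraphrased from 2.4 kernel/sched.c:

    static inline int goodness(struct task_struct *p, int this_cpu,
                               struct mm_struct *this_mm)
    {
        int weight = -1;                     /* SCHED_YIELD tasks lose */

        if (p->policy & SCHED_YIELD)
            goto out;
        if (p->policy == SCHED_OTHER) {
            weight = p->counter;             /* remaining timeslice */
            if (!weight)
                goto out;
    #ifdef CONFIG_SMP
            if (p->processor == this_cpu)    /* cache-affinity knob */
                weight += PROC_CHANGE_PENALTY;
    #endif
            if (p->mm == this_mm || !p->mm)  /* the "+1" tie breaker */
                weight += 1;
            weight += 20 - p->nice;
            goto out;
        }
        weight = 1000 + p->rt_priority;      /* realtime always wins */
    out:
        return weight;
    }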
If you look at some of the results with our reflex benchmark:
for low thread counts, where #threads < #cpus, the 2,4,5,6,7,8-way
systems all show improved performance. Look at the following numbers
from running the reflex benchmark; the left-most column is the number
of threads, with typically 1/2 of them runnable.
You can clearly see that the priority list suffers from overhead,
but MQ is beating vanilla pretty much everywhere. Again, this
is because of rapid scheduler invocation and the resulting
lock contention.
2-way
        2.4.1    2.4.1-mq1  2.4.1-prbit
 4      6.725    4.691       7.387
 8      6.326    4.766       6.421
12      6.838    5.233       6.431
16      7.13     5.415       7.29

4-way
        2.4.1    2.4.1-mq1  2.4.1-prbit
 4      9.42     7.873      10.592
 8      8.143    3.799       8.691
12      7.877    3.537       8.101
16      7.688    2.953       7.144

6-way
        2.4.1    2.4.1-mq1  2.4.1-prbit
 4      9.595    7.88       10.358
 8      9.703    7.278      10.523
12     10.016    4.652      10.985
16      9.882    3.629      10.525

8-way
        2.4.1    2.4.1-mq1  2.4.1-prbit
 4      9.804    8.033      10.548
 8     10.436    5.783      11.475
12     10.925    6.787      11.646
16     11.426    5.048      11.877
20     11.438    3.895      11.633
24     11.457    3.304      11.347
28     11.495    3.073      11.09
32     11.53     2.944      10.898
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Ingo Molnar <mingo@elte.hu>@elte.hu> on 04/04/2001 09:23:34 AM
Please respond to <mingo@elte.hu>
Sent by: <mingo@elte.hu>
To: Hubertus Franke/Watson/IBM@IBMUS
cc: Mike Kravetz <mkravetz@sequent.com>, Fabio Riccardi
<fabio@chromium.com>, Linux Kernel List
<linux-kernel@vger.kernel.org>
Subject: Re: a quest for a better scheduler
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> I understand the dilemma that the Linux scheduler is in, namely
> satisfy the low end at all cost. [...]
nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
You are playing word games by suggesting that the current behavior prefers
'low end'. 'thousands of runnable processes' is not 'high end' at all,
it's 'broken end'. Thousands of runnable processes are the sign of a
broken application design, and 'fixing' the scheduler to perform better in
that case is just fixing the symptom. [changing the scheduler to perform
better in such situations is possible too, but all solutions proposed so
far had strings attached.]
Ingo
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-04 15:28 Hubertus Franke
0 siblings, 0 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 15:28 UTC (permalink / raw)
To: Andrea Arcangeli; +Cc: Linux Kernel List, lse-tech
Yes, Andrea.
We actually already went a step further. We treat the scheduler
as a single entity, rather than splitting it up.
Based on the MQ scheduler we do the balancing across all nodes
at reschedule_idle time. We experimented to see whether only looking
for idle tasks remotely is a good idea, but it bloats the code.
Local scheduling decisions are limited to a set of cpus, which
could coincide with the cpus of one node, or, if desirable, with
smaller granularities.
In addition we implemented a global load balancing scheme that
ensures that load is equally distributed across all run queues.
This is made a loadable module, so you can plug in whatever
you want.
I grant that in NUMA it might actually be desirable to separate the
schedulers completely (we can do that trivially in reschedule_idle),
but you need load balancing at some point.
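Schematically, the balancing pass itself is nothing more exotic than this
(an illustrative sketch, not the module code):

    #define NCPUS 8

    extern int  rq_len(int cpu);                /* runnable count per cpu */
    extern void migrate_one(int from, int to);  /* move one task across */

    /* Periodically move work from the longest to the shortest queue
     * until they are within one task of each other. */
    static void balance_once(void)
    {
        int i, min = 0, max = 0;
        for (i = 1; i < NCPUS; i++) {
            if (rq_len(i) < rq_len(min)) min = i;
            if (rq_len(i) > rq_len(max)) max = i;
        }
        while (rq_len(max) > rq_len(min) + 1)
            migrate_one(max, min);
    }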
Here is the list of patches:
MultiQueue Scheduler: http://lse.sourceforge.net/scheduling/2.4.1.mq1-sched
Pooling Extension: http://lse.sourceforge.net/scheduling/LB/2.4.1-MQpool
LoadBalancing:
http://lse.sourceforge.net/scheduling/LB/loadbalance.c
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
, OS-PIC (Chair)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Andrea Arcangeli <andrea@suse.de> on 04/04/2001 11:08:47 AM
To: Ingo Molnar <mingo@elte.hu>
cc: Hubertus Franke/Watson/IBM@IBMUS, Mike Kravetz
<mkravetz@sequent.com>, Fabio Riccardi <fabio@chromium.com>, Linux
Kernel List <linux-kernel@vger.kernel.org>,
lse-tech@lists.sourceforge.net
Subject: Re: a quest for a better scheduler
On Wed, Apr 04, 2001 at 03:34:22PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does an exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. This is ok if the runqueue length is limited to a very small
> > multiple of the #cpus. [...]
>
> indeed. The current scheduler handles UP and SMP systems, up to 32
> (perhaps 64) CPUs, efficiently. Aggressively NUMA systems need a different
> approach anyway, in many other subsystems too; Kanoj is doing some
> scheduler work in that area.
I haven't seen anything from Kanoj, but I did something myself for the
wildfire:
ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
this is mostly a userspace issue, not really intended as a kernel
optimization (however it's also partly a kernel optimization). Basically it
splits the load of the numa machine into per-node load; there can be
unbalanced load across the nodes, but fairness is guaranteed inside each
node. It's not extremely well tested, but benchmarks were ok and it is at
least certainly stable.
However, Ingo, consider that on a 32-way, if you don't have at least 32
tasks running all the time _always_, you're really stupid to pay such big
money for nothing ;). So the fact that the scheduler is optimized for 1-2
tasks running all the time is not nearly enough for those machines (and of
course the scheduling rate also automatically increases linearly with the
number of cpus). Now it's perfectly fine that we don't ask the embedded and
desktop guys to pay for that, but a kernel configuration option to select
an algorithm that scales would be a good idea IMHO. The above patch just
adds CONFIG_NUMA_SCHED. The scalable algorithm can fit into it and nobody
will be hurt by that (CONFIG_NUMA_SCHED cannot even be selected by x86
compiles).
Andrea
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
@ 2001-04-04 14:03 Hubertus Franke
2001-04-04 13:23 ` Ingo Molnar
2001-04-04 15:12 ` Andrea Arcangeli
0 siblings, 2 replies; 52+ messages in thread
From: Hubertus Franke @ 2001-04-04 14:03 UTC (permalink / raw)
To: mingo; +Cc: Mike Kravetz, Fabio Riccardi, Linux Kernel List
I grant you that the code is not as clean as the current scheduler,
so maybe you missed that part.
For the priority scheduler:
Yes, task_to_qid assumes a NON-affinity (no cpu, no mm) to determine
the list index to which the task has to be enqueued.
However, if you wander down to the search_range section of the scheduler,
you see that we do add the "+1" for an equal mm. All I did here was take
the goodness() function apart for search_range in order to insert some
extra code that speeds up the code.
Again, I don't want to lean too hard on the priority scheduler, because
we only did this to have a reference point regarding lock contention and
lock hold time. This is stuff that many operating systems did years ago,
most of which have moved on to add MQ to that. So that is what the LSE
scheduling team is pushing.
I understand the dilemma that the Linux scheduler is in, namely satisfying
the low end at all cost. But I believe that in order for Linux to push
into the high end we need to address the scalability of the current
scheduler.
Simply handwaving and declaring that lots of running tasks is a stupid
idea doesn't get us there.
If indeed you assume that #running-tasks ~ #cpus, then each task
should be allotted a reasonable amount of work and any small overhead
incurred should be amortizable. However, if, as we contend,
#running-tasks >> #cpus is a situation we need to deal with, then
living with the current lock contention can really drag us down.
Hubertus Franke
Enterprise Linux Group (Mgr), Linux Technology Center (Member Scalability)
email: frankeh@us.ibm.com
(w) 914-945-2003 (fax) 914-945-4425 TL: 862-2003
Ingo Molnar <mingo@elte.hu>@elte.hu> on 04/04/2001 02:28:31 AM
Please respond to <mingo@elte.hu>
Sent by: <mingo@elte.hu>
To: Mike Kravetz <mkravetz@sequent.com>
cc: Hubertus Franke/Watson/IBM@IBMUS, Fabio Riccardi
<fabio@chromium.com>, Linux Kernel List
<linux-kernel@vger.kernel.org>
Subject: Re: a quest for a better scheduler
On Tue, 3 Apr 2001, Mike Kravetz wrote:
> Our 'priority queue' implementation uses almost the same goodness
> function as the current scheduler. The main difference between our
> 'priority queue' scheduler and the current scheduler is the structure
> of the runqueue. We break up the single runqueue into a set of
> priority based runqueues. The 'goodness' of a task determines what
> sub-queue a task is placed in. Tasks with higher goodness values are
> placed in higher priority queues than tasks with lower goodness
> values. [...]
we are talking about the same thing, re-read my mail. this design assumes
that 'goodness' is constant in the sense that it does not depend on the
previous process.
and no, you are not using the same goodness() function: you omitted the
prev->mm == next->mm component of goodness(), due to this design
limitation.
Ingo
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: a quest for a better scheduler
2001-04-04 14:03 Hubertus Franke
@ 2001-04-04 13:23 ` Ingo Molnar
2001-04-04 22:16 ` Tim Wright
2001-04-04 15:12 ` Andrea Arcangeli
1 sibling, 1 reply; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 13:23 UTC (permalink / raw)
To: Hubertus Franke; +Cc: Mike Kravetz, Fabio Riccardi, Linux Kernel List
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> I understand the dilemma that the Linux scheduler is in, namely
> satisfy the low end at all cost. [...]
nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
You are playing word games by suggesting that the current behavior prefers
'low end'. 'thousands of runnable processes' is not 'high end' at all,
it's 'broken end'. Thousands of runnable processes are the sign of a
broken application design, and 'fixing' the scheduler to perform better in
that case is just fixing the symptom. [changing the scheduler to perform
better in such situations is possible too, but all solutions proposed so
far had strings attached.]
Ingo
* Re: a quest for a better scheduler
2001-04-04 13:23 ` Ingo Molnar
@ 2001-04-04 22:16 ` Tim Wright
2001-04-04 22:54 ` Christopher Smith
0 siblings, 1 reply; 52+ messages in thread
From: Tim Wright @ 2001-04-04 22:16 UTC (permalink / raw)
To: Ingo Molnar
Cc: Hubertus Franke, Mike Kravetz, Fabio Riccardi, Linux Kernel List
On Wed, Apr 04, 2001 at 03:23:34PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > I understand the dilemma that the Linux scheduler is in, namely
> > satisfy the low end at all cost. [...]
>
> nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
> You are playing word games by suggesting that the current behavior prefers
> 'low end'. 'thousands of runnable processes' is not 'high end' at all,
> it's 'broken end'. Thousands of runnable processes are the sign of a
> broken application design, and 'fixing' the scheduler to perform better in
> that case is just fixing the symptom. [changing the scheduler to perform
> better in such situations is possible too, but all solutions proposed so
> far had strings attached.]
>
Ingo, you continue to assert this without giving much evidence to back it up.
All the world is not a web server. If I'm running a large OLTP database with
thousands of clients, it's not at all unreasonable to expect periods where
several hundred (forget the thousands) want to be serviced by the database
engine. That sounds like hundreds of schedulable entities, be they
processes or threads or whatever. This sort of load is regularly run on
machines with 16-64 CPUs.
Now I will admit that it is conceivable that you can design an application that
finds out how many CPUs are available, creates threads to match that number
and tries to divvy up the work between them using some combination of polling
and asynchronous I/O etc. There are, however, a number of problems with this
approach:
1) It assumes that this is the only workload on the machine. If not, it quickly
becomes sub-optimal due to interactions between the workloads. This is a
problem that the kernel scheduler does not suffer from.
2) It requires *every* application designer to design an effective scheduler
into their application. I would submit that an effective scheduler is better
situated in the operating system.
3) It is not a familiar programming paradigm to many Unix/Linux/POSIX
programmers, so you have a sort of impedance mismatch going on.
Since the proposed scheduler changes being talked about here have been shown
to not hurt the "low end" and to dramatically improve the "high end", I fail
to understand the hostility to the changes. I can understand that you do not
feel that this is the correct way to architect an application, but if the
changes don't hurt you, why sabotage changes that also allow a different
method to work? There isn't one true way to do anything in computing.
Tim
--
Tim Wright - timw@splhi.com or timw@aracnet.com or twright@us.ibm.com
IBM Linux Technology Center, Beaverton, Oregon
Interested in Linux scalability ? Look at http://lse.sourceforge.net/
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI
* Re: a quest for a better scheduler
2001-04-04 22:16 ` Tim Wright
@ 2001-04-04 22:54 ` Christopher Smith
2001-04-05 22:38 ` Timothy D. Witham
0 siblings, 1 reply; 52+ messages in thread
From: Christopher Smith @ 2001-04-04 22:54 UTC (permalink / raw)
To: timw, Ingo Molnar
Cc: Hubertus Franke, Mike Kravetz, Fabio Riccardi, Linux Kernel List
--On Wednesday, April 04, 2001 15:16:32 -0700 Tim Wright <timw@splhi.com>
wrote:
> On Wed, Apr 04, 2001 at 03:23:34PM +0200, Ingo Molnar wrote:
>> nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
>> You are playing word games by suggesting that the current behavior
>> prefers 'low end'. 'thousands of runnable processes' is not 'high end'
>> at all, it's 'broken end'. Thousands of runnable processes are the sign
>> of a broken application design, and 'fixing' the scheduler to perform
>> better in that case is just fixing the symptom. [changing the scheduler
>> to perform better in such situations is possible too, but all solutions
>> proposed so far had strings attached.]
>
> Ingo, you continue to assert this without giving much evidence to back it
> up. All the world is not a web server. If I'm running a large OLTP
> database with thousands of clients, it's not at all unreasonable to
> expect periods where several hundred (forget the thousands) want to be
> serviced by the database engine. That sounds like hundreds of schedulable
> entities be they processes or threads or whatever. This sort of load is
> regularly run on machine with 16-64 CPUs.
Actually, it's not just OLTP; anytime you are doing time sharing between
hundreds of users (something POSIX systems are supposed to be good at) this
will happen.
> Now I will admit that it is conceivable that you can design an
> application that finds out how many CPUs are available, creates threads
> to match that number and tries to divvy up the work between them using
> some combination of polling and asynchronous I/O etc. There are, however
> a number of problems with this approach:
Actually, one way to semi-support this approach is to implement
many-to-many threads as per the Solaris approach. This also requires
significant hacking of both the kernel and the runtime, and certainly is
significantly more error prone than trying to write a flexible scheduler.
One problem you didn't highlight that even the above case does not happily
identify is that for security reasons you may very well need each user's
requests to take place in a different process. If you don't, then you have
to implement a very well tested and secure user-level security mechanism to
ensure things like privacy (above and beyond the time-sharing).
The world is filled with a wide variety of types of applications, and
unless you know two programming approaches are functionally equivalent (and
event-driven/polling I/O vs. tons of running processes are NOT), you
shouldn't say one approach is "broken". You could say it's a "broken"
approach to building web servers. Unfortunately, things like kernels and
standard libraries should work well in the general case.
--Chris
* Re: a quest for a better scheduler
2001-04-04 22:54 ` Christopher Smith
@ 2001-04-05 22:38 ` Timothy D. Witham
2001-04-06 3:27 ` Christopher Smith
2001-04-06 18:06 ` Timothy D. Witham
0 siblings, 2 replies; 52+ messages in thread
From: Timothy D. Witham @ 2001-04-05 22:38 UTC (permalink / raw)
To: Linux Kernel List; +Cc: wookie
I have been following this thread and thinking that everybody has some
truth in what they are saying, but in the absence of a repeatable test
environment there really isn't a way of arriving at a data-driven
decision.
Given the following conditions:
1) The diversity of the problem sets that developers are working on
results in a real concern that another developer's solution to their
performance issue might result in a worsening of my performance
situation.
2) Most of the developers have a given set of tests that they use when
they make changes to determine if their change did what they wanted.
3) The Open Source Development Lab has the facilities for providing a
large scale testing environment on several different configurations that
remain the same over time.
I propose that we work on setting up a straightforward test harness that
allows developers to quickly test a kernel patch against various
performance yardsticks.
If we use the same set of hardware for the generation of the performance data
from patch to patch there will be a correlation between the runs allowing for
a real comparison of the different changes.
The following should be taken only as a starting point.
As for the base hardware configurations I propose:
2 way with 1 GB main memory and 2 IDE drives
4 way with 4 GB main memory and 16 disk drives
8 way with 8 GB main memory and 24 disk drives
The types of applications that I have heard people express concern about are:
Web infrastructure:
Apache
TUX
Jabber
Corporate infrastructure:
NFS
raw TCP/IP performance
Samba
Database performance:
Raw storage I/O performance
OLTP workload
General usage:
compile speed (usually measured by kernel compile)
The OSDL also has the ability to make some of the "for fee" benchmarks
available for use for testing that is done onsite (network access to OSDL
machines counts as onsite) so that people don't have to purchase individual
copies. Things like SPECMAIL2001, SPECSFS2.0 and SPECWEB99 come to mind.
Comments, suggestions, volunteers?
--
Timothy D. Witham - Lab Director - wookie@osdlab.org
Open Source Development Lab Inc - A non-profit corporation
15275 SW Koll Parkway - Suite H - Beaverton OR, 97006
(503)-626-2455 x11 (office) (503)-702-2871 (cell)
(503)-626-2455 (fax)
* Re: a quest for a better scheduler
2001-04-05 22:38 ` Timothy D. Witham
@ 2001-04-06 3:27 ` Christopher Smith
2001-04-06 18:06 ` Timothy D. Witham
1 sibling, 0 replies; 52+ messages in thread
From: Christopher Smith @ 2001-04-06 3:27 UTC (permalink / raw)
To: Timothy D. Witham, Linux Kernel List
--On Thursday, April 05, 2001 15:38:41 -0700 "Timothy D. Witham"
<wookie@osdlab.org> wrote:
> Database performance:
> Raw storage I/O performance
> OLTP workload
You probably want to add an OLAP scenario as well.
--Chris
* Re: a quest for a better scheduler
2001-04-05 22:38 ` Timothy D. Witham
2001-04-06 3:27 ` Christopher Smith
@ 2001-04-06 18:06 ` Timothy D. Witham
2001-04-06 21:08 ` Michael Peddemors
2001-04-06 22:33 ` Nathan Straz
1 sibling, 2 replies; 52+ messages in thread
From: Timothy D. Witham @ 2001-04-06 18:06 UTC (permalink / raw)
To: Linux Kernel List
Timothy D. Witham wrote :
[...]
> I propose that we work on setting up a straight forward test harness
> that allows developers to quickly test a kernel patch against
> various performance yardsticks.
[...
(proposed large server testbeds)
...]
OK, so I have received some feedback on my proposal to provide a
reference set of machines so that any kernel modifications could be
checked across a range of machines and a range of tests. It was
pointed out that there are lots of smaller servers out there and
they should be part of any test plan. There was also some concern
that a 4 way server didn't add any value in a test lineup. But I
have to think that with the number of 4 ways out there they should
be included.
One additional piece of feedback was that any comprehensive
characterization plan should include desktops, tablet devices and
older machines, along with the performance tests that address those
configurations' usage models, and I agree that it is something that
needs to be done. But as for providing hardware for that effort, the
OSDL is not the group to do that. Hopefully somebody with an
interest in these configurations will step forward to do that
portion of the job.
So the server hardware configurations have evolved to look like
the following.
1 way, 512 MB, 2 IDE
2 way, 1 GB, 10 SCSI (1 SCSI channel)
4 way, 4 GB, 20 SCSI (2 channels)
8 way, 8 GB, 40 SCSI (4 channels) maybe Fibre Channel (FC)
16 way, 16 GB, 80 FC (8 channels)
The types of server applications that I have heard people express concern about are:
Web infrastructure:
Apache (SPECWEB99???)
TUX (SPECWEB99???)
Jabber (have their own)
Corporate infrastructure:
NFS (SPECSFS2.0???)
raw TCP/IP performance
Samba (Have their own)
email (SPECMAIL2001???)
Database performance:
Raw storage I/O performance (various)
OLTP workload (something like TPC-C???)
OLAP workload
General usage:
compile speed (usually measured by kernel compile)
Further comments? I will start contacting folks who have expressed
interest.
--
Timothy D. Witham - Lab Director - wookie@osdlab.org
Open Source Development Lab Inc - A non-profit corporation
15275 SW Koll Parkway - Suite H - Beaverton OR, 97006
(503)-626-2455 x11 (office) (503)-702-2871 (cell)
(503)-626-2455 (fax)
* Re: a quest for a better scheduler
2001-04-06 18:06 ` Timothy D. Witham
@ 2001-04-06 21:08 ` Michael Peddemors
2001-04-06 22:33 ` Nathan Straz
1 sibling, 0 replies; 52+ messages in thread
From: Michael Peddemors @ 2001-04-06 21:08 UTC (permalink / raw)
To: Timothy D. Witham; +Cc: linux-kernel
Missing an important one: our VPNs routinely run on 16 MB RAM, no HD or
swap, loaded from an initrd on a floppy.
Don't we need to test on minimalistic machines as well? :)
> So the server hardware configurations have evolved to look like
> the following.
>
> 1 way, 512 MB, 2 IDE
> 2 way, 1 GB, 10 SCSI (1 SCSI channel)
> 4 way, 4 GB, 20 SCSI (2 channels)
> 8 way, 8 GB, 40 SCSI (4 channels) maybe Fibre Channel (FC)
> 16 way, 16 GB, 80 FC (8 channels)
>
--
"Catch the Magic of Linux..."
--------------------------------------------------------
Michael Peddemors - Senior Consultant
LinuxAdministration - Internet Services
NetworkServices - Programming - Security
WizardInternet Services http://www.wizard.ca
Linux Support Specialist - http://www.linuxmagic.com
--------------------------------------------------------
(604)589-0037 Beautiful British Columbia, Canada
* Re: a quest for a better scheduler
2001-04-06 18:06 ` Timothy D. Witham
2001-04-06 21:08 ` Michael Peddemors
@ 2001-04-06 22:33 ` Nathan Straz
1 sibling, 0 replies; 52+ messages in thread
From: Nathan Straz @ 2001-04-06 22:33 UTC (permalink / raw)
To: Timothy D. Witham; +Cc: Linux Kernel List, Linux Test Project
On Fri, Apr 06, 2001 at 11:06:03AM -0700, Timothy D. Witham wrote:
> Timothy D. Witham wrote :
> > I propose that we work on setting up a straight forward test harness
> > that allows developers to quickly test a kernel patch against
> > various performance yardsticks.
The Linux Test Project would like to help out here. At the least, we
would like to add the scripts, wrappers, and configuration files used to
create the test systems to LTP. Making test systems available is
definitely a great step forward. Showing people how to build similar
test systems is another step forward.
Let us know what parts you need and we'll see what we can come up with.
> Further comments? I will start contacting folks who have expressed
> interest.
If anyone has loose programs that they use to test the scheduler, please
submit them to the Linux Test Project. Chances are other people will
also find them useful and add functionality. It doesn't have to be a
formal test program or use the test libraries that we use. We can take
care of that when we add it to the CVS tree.
While we probably aren't going to package up full applications for
testing purposes, we could definitely keep track of useful configuration
files and scripts that people find useful for testing.
--
Nate Straz nstraz@sgi.com
sgi, inc http://www.sgi.com/
Linux Test Project http://oss.sgi.com/projects/ltp/
* Re: a quest for a better scheduler
2001-04-04 14:03 Hubertus Franke
2001-04-04 13:23 ` Ingo Molnar
@ 2001-04-04 15:12 ` Andrea Arcangeli
2001-04-04 15:49 ` Khalid Aziz
1 sibling, 1 reply; 52+ messages in thread
From: Andrea Arcangeli @ 2001-04-04 15:12 UTC (permalink / raw)
To: Hubertus Franke; +Cc: mingo, Mike Kravetz, Fabio Riccardi, Linux Kernel List
On Wed, Apr 04, 2001 at 10:03:10AM -0400, Hubertus Franke wrote:
> I understand the dilemma that the Linux scheduler is in, namely satisfy
> the low end at all cost. [..]
We can satisfy the low end by making the numa scheduler a compile-time
option (that's what I did in my patch at least).
Andrea
* Re: a quest for a better scheduler
2001-04-04 15:12 ` Andrea Arcangeli
@ 2001-04-04 15:49 ` Khalid Aziz
0 siblings, 0 replies; 52+ messages in thread
From: Khalid Aziz @ 2001-04-04 15:49 UTC (permalink / raw)
To: Andrea Arcangeli
Cc: Hubertus Franke, mingo, Mike Kravetz, Fabio Riccardi,
Linux Kernel List
Andrea Arcangeli wrote:
>
> On Wed, Apr 04, 2001 at 10:03:10AM -0400, Hubertus Franke wrote:
> > I understand the dilemma that the Linux scheduler is in, namely satisfy
> > the low end at all cost. [..]
>
> We can satisfy the low end by making the numa scheduler at compile time (that's
> what I did in my patch at least).
>
> Andrea
I fully agree with this approach. It would be very hard to design a
scheduler that performs equally well on a UP machine running a couple of
processes and on a NUMA machine. These two cases represent the two ends
of the spectrum. The two schedulers should be separate IMO, and one of
them should be selected at compile time.
--
Khalid
====================================================================
Khalid Aziz Linux Development Laboratory
(970)898-9214 Hewlett-Packard
khalid@fc.hp.com Fort Collins, CO
* Re: a quest for a better scheduler
@ 2001-04-04 6:36 alad
0 siblings, 0 replies; 52+ messages in thread
From: alad @ 2001-04-04 6:36 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: Alan Cox, linux-kernel
If we are facing these problems for the "normal case", then I hope
Solaris is handling it!!
Amol
Fabio Riccardi <fabio@chromium.com> on 04/04/2001 07:03:57 AM
To: Alan Cox <alan@lxorguk.ukuu.org.uk>
cc: linux-kernel@vger.kernel.org (bcc: Amol Lad/HSS)
Subject: Re: a quest for a better scheduler
Alan,
for the "normal case" performance see my other message.
I agree that a better threading model would surely help in a web server,
but to me this is not an excuse to live with a broken scheduler.
The X15 server I'm working on now is a sort of user-space TUX: it uses
only 8 threads per CPU and it achieves the same level of performance as
the kernel-space TUX. Even in this case the performance advantage of the
multiqueue scheduler is remarkable, especially on a multi-CPU (> 2)
architecture.
To achieve some decent disk/CPU/network overlapping with the current
Linux blocking disk I/O limitations there is no way to avoid a "bunch of
server threads". I've (experimentally) figured out that 8-16 threads per
CPU can assure some reasonable overlapping, depending on the memory size
and disk subsystem speed. On an 8-way machine this means 64-128 active
tasks, a total disaster with the current scheduler.
Unless we want to maintain the position that the only way to achieve good
performance is to embed server applications in the kernel, some minimal help
should be provided to goodwilling user applications :)
TIA, ciao,
- Fabio
Alan Cox wrote:
> > Is there any special reason why any of those patches didn't make it to
> > the mainstream kernel code?
>
> > All of them are worse for the normal case. Also 1500 running Apaches isn't
> a remotely useful situation, you are thrashing the cache even if you are now
> not thrashing the scheduler. Use an httpd designed for that situation. Then
> you can also downgrade to a cheap pentium class box for the task ;)
* a quest for a better scheduler
@ 2001-04-03 2:23 Fabio Riccardi
2001-04-03 8:55 ` Ingo Molnar
2001-04-03 12:31 ` Alan Cox
0 siblings, 2 replies; 52+ messages in thread
From: Fabio Riccardi @ 2001-04-03 2:23 UTC (permalink / raw)
To: linux-kernel
Hello,
I sent a message a few days ago about some limitations I found in the
linux scheduler.
In servers like Apache, where a large (> 1000) number of processes can
be running at the same time and where many of them are runnable at the
same time, the default Linux scheduler just starts thrashing and the
machine becomes very rapidly unusable.
Performance degradations are quite noticeable on a two-way SMP machine
(20-30% of the CPU gets lost) and are even more glaring on a multi-CPU
machine. As an example, an 8-way Compaq Proliant just crawls with Linux.
From the feedback I received I realized that there are at least two
possible solutions to the problem:
http://lse.sourceforge.net/scheduling/
http://resourcemanagement.unixsolutions.hp.com/WaRM/schedpolicy.html
Indeed I've tried the patches available on the sites for the multi-queue
scheduler and I was amazed by the performance improvement that I got.
Both patches allow me to get to 100% real CPU utilization on a two-way
machine running ~1500 processes.
What those patches do is quite simple: instead of having the single
global process queue present in the normal Linux scheduler, they add
multiple queues (one per CPU). In this way the scheduling decision can
be greatly simplified and made almost local to each CPU. No hotspots, no
global locks (well, almost).
Although some scalability problems are still there (there still is a
global decision to make), the performance improvement obtained and the
simplicity of the solution are remarkable.
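In outline, that selection order can be sketched like this -- a
hypothetical, lock-free toy model, not real kernel code (a ported copy
of the actual HP-style patch is attached to a later message in this
thread): real-time tasks first, then this CPU's own queue, and remote
queues only when the local one is empty.

#include <stdio.h>

#define NR_CPUS 4

struct task { int goodness; struct task *next; };

/* One runqueue per CPU plus one shared real-time queue (plain
 * singly-linked lists are enough for the sketch). */
static struct task *rt_queue;
static struct task *cpu_rq[NR_CPUS];

static struct task *best_of(struct task *q)
{
	struct task *best = NULL;
	for (; q; q = q->next)
		if (!best || q->goodness > best->goodness)
			best = q;
	return best;
}

/* Local-first selection order: the RT queue, then this CPU's own
 * (short) queue; only if both are empty, steal from a remote queue. */
static struct task *pick_next(int this_cpu)
{
	struct task *t;
	int cpu;

	if ((t = best_of(rt_queue)))
		return t;
	if ((t = best_of(cpu_rq[this_cpu])))
		return t;
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu != this_cpu && (t = best_of(cpu_rq[cpu])))
			return t;	/* the real patch re-homes ("hops") it */
	return NULL;			/* nothing runnable: go idle */
}

int main(void)
{
	struct task a = { 10, NULL }, b = { 30, NULL };
	cpu_rq[0] = &a;			/* local task, goodness 10 */
	cpu_rq[1] = &b;			/* remote task, goodness 30 */

	/* The local task wins even though a better one sits remotely:
	 * that is the local-first trade-off. */
	printf("cpu0 picks goodness %d\n", pick_next(0)->goodness);
	return 0;
}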
The HP patch is probably the most interesting, since it consists of
really a few lines of code and it gets (for what I could measure) the
same kind of performance improvement of the more elaborate (but still
quite simple) sourceforge patch.
Is there any special reason why any of those patches didn't make it to
the mainstream kernel code?
TIA, ciao,
- Fabio
* Re: a quest for a better scheduler
2001-04-03 2:23 Fabio Riccardi
@ 2001-04-03 8:55 ` Ingo Molnar
2001-04-03 19:13 ` Mike Kravetz
2001-04-03 12:31 ` Alan Cox
1 sibling, 1 reply; 52+ messages in thread
From: Ingo Molnar @ 2001-04-03 8:55 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: Linux Kernel List
On Mon, 2 Apr 2001, Fabio Riccardi wrote:
> I sent a message a few days ago about some limitations I found in the
> linux scheduler.
>
> In servers like Apache where a large (> 1000) number of processes can
> be running at the same time and where many of them are runnable at the
> same time, the default Linux scheduler just starts thrashing and the
> machine becomes very rapidly unusable.
it is *not* a scheduler problem. This is an application design problem.
Do not expect > 1000 runnable processes to perform well. Apache's
one-process-per-parallel-client-connection application model is especially
bad for such kind of loads. The scheduler has more important priorities to
worry about than to fix application design problems.
> Indeed I've tried the patches available on the sites for the
> multi-queue scheduler and I was amazed by the performance improvement
> that I got. Both patches allow me to get to a 100% real CPU
> utilization on a two way machine running ~1500 processes.
There have been many promises over the years, and none as of now has met
all the requirements the scheduler has to fulfill. 'many runnable
processes' is not at the top of the list - if we are scheduling like mad
then we have lost lots of performance already.
even with such 'improvements', the many-parallel-clients performance of
one-process-per-request HTTP server designs is an order of magnitude slower than what
is possible.
having said that, improving this workload is still a useful task and we
added improvements to the scheduler to improve this case. But those
patches sacrifice other functionality just to get better
many-runnable-processes performance, which is unacceptable.
> What those patches do is quite simple, instead of having the single
> global process queue present in the normal Linux scheduler, they add
> multiple queues (one per CPU). In this way the scheduling decision can
> be greatly simplified and almost made local to each CPU. No hotspots,
> no global locks (well, almost).
the problem is that the *real* scheduling complexity (and needed
functionality) shows when the number of runnable processes is in the
neighborhood of the number of processors. Running many processes just
masks eg. load-balancing deficiencies in schedulers, so obviously the
simplest and most localized scheduler 'wins'. So comparing schedulers
based on many-runnable-processes load is like comparing cars solely based
on the size of their fuel tanks.
> Although some scalability problems are still there (there still is a
> global decision to make), the performance improvement obtained and the
> simplicity of the solution are remarkable.
you can easily make the scheduler fast in the many-processes case by
sacrificing functionality in the much more realistic, few-processes case.
None of the patches i've seen so far maintained the current scheduler's
few-processes logic. But i invite you to improve the current scheduler's
many-processes behavior, without hurting its behavior in the few-processes
case.
Ingo
* Re: a quest for a better scheduler
2001-04-03 8:55 ` Ingo Molnar
@ 2001-04-03 19:13 ` Mike Kravetz
2001-04-03 18:47 ` Ingo Molnar
0 siblings, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2001-04-03 19:13 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Fabio Riccardi, Linux Kernel List
On Tue, Apr 03, 2001 at 10:55:12AM +0200, Ingo Molnar wrote:
>
> you can easily make the scheduler fast in the many-processes case by
> sacrificing functionality in the much more realistic, few-processes case.
> None of the patches i've seen so far maintained the current scheduler's
> few-processes logic. But i invite you to improve the current scheduler's
> many-processes behavior, without hurting its behavior in the few-processes
> case.
>
Maintaining the current scheduler's logic is exactly what we are trying
to do in the projects at:
http://lse.sourceforge.net/scheduling/
A common design goal for the the two alternative scheduler
implementations at this site is to maintain the current scheduler's
behavior/scheduling decisions. In the case of the priority queue
scheduler, we have actually used a copy of the existing scheduler
running in parallel (in the same kernel) to determine if we are
making the same scheduling decisions. Currently, in this implementation
we only deviate from the current scheduler in a small number of cases
where tasks get a boost due to having the same memory map.
The multi-queue implementation is more interesting. It is also
designed to maintain the behavior of the current scheduler. However,
as the runqueues get longer (and we start getting contention on the
runqueue locks) it starts to deviate from existing scheduler behavior
and make more local scheduling decisions. Ideally, this implementation
will exhibit the behavior of the current scheduler at low thread
counts and make more localized decisions as pressure on the scheduler
is increased.
Neither of these implementations is at a point where I would advocate
its adoption; yet.
Can someone tell me what a good workload/benchmark would be to
examine 'low thread count' performance? In the past people have
used the 'spinning on sched_yield' benchmark. However, this now
makes little sense with the sched_yield optimizations introduced
in 2.4. In addition, such a benchmark mostly ignores the
'reschedule_idle' component of the scheduler. We have developed
a 'token passing' benchmark which attempts to address these issues
(called reflex at the above site). However, I would really like
to get a pointer to a community acceptable workload/benchmark for
these low thread cases.
--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
* Re: a quest for a better scheduler
2001-04-03 19:13 ` Mike Kravetz
@ 2001-04-03 18:47 ` Ingo Molnar
2001-04-03 22:43 ` Mike Kravetz
0 siblings, 1 reply; 52+ messages in thread
From: Ingo Molnar @ 2001-04-03 18:47 UTC (permalink / raw)
To: Mike Kravetz; +Cc: Fabio Riccardi, Linux Kernel List
On Tue, 3 Apr 2001, Mike Kravetz wrote:
> [...] Currently, in this implementation we only deviate from the
> current scheduler in a small number of cases where tasks get a boost
> due to having the same memory map.
thread-thread-affinity pretty much makes it impossible to use a priority
queue. This 'small number of cases' is actually a fundamental issue: can
the 'goodness()' function be an arbitrary function of:
goodness(process_prev,process_next) := f(process_prev,process_next)
, or is the queue design restricting the choice of goodness() functions?
Priority queues for example restrict the choice of the goodness() function
to subset of functions:
goodness(process_prev,process_next) := f(process_next);
this restriction (independence of the priority from the previous process)
is a fundamentally bad property: scheduling is nonlinear and affinity
decisions depend on the previous context. [i had a priority-queue SMP
scheduler running 2-3 years ago but dropped the idea due to this issue.]
IMO it's more important to have a generic and flexible scheduler, and
arbitrary, nonnatural restrictions tend to bite us later on.
one issue regarding multiqueues is the ability of interactive processes to
preempt other, lower priority processes on other CPUs. These sort of
things tend to happen while using X. In a system where process priorities
are different, scheduling decisions cannot be localized per-CPU
(unfortunately), and 'the right to run' as such is a global thing.
> Can someone tell me what a good workload/benchmark would be to examine
> 'low thread count' performance? [...]
lmbench's lat_ctx for example, and other tools in lmbench trigger various
scheduler workloads as well.
Ingo
* Re: a quest for a better scheduler
2001-04-03 18:47 ` Ingo Molnar
@ 2001-04-03 22:43 ` Mike Kravetz
2001-04-04 0:18 ` Fabio Riccardi
2001-04-04 6:28 ` Ingo Molnar
0 siblings, 2 replies; 52+ messages in thread
From: Mike Kravetz @ 2001-04-03 22:43 UTC (permalink / raw)
To: Ingo Molnar; +Cc: frankeh, Fabio Riccardi, Linux Kernel List
On Tue, Apr 03, 2001 at 08:47:52PM +0200, Ingo Molnar wrote:
>
> this restriction (independence of the priority from the previous process)
> is a fundamentally bad property, scheduling is nonlinear and affinity
> decisions depend on the previous context. [i had a priority-queue SMP
> scheduler running 2-3 years ago but dropped the idea due to this issue.]
> IMO it's more important to have a generic and flexible scheduler, and
> arbitrary, nonnatural restrictions tend to bite us later on.
It seems like we may be talking about two different things.
Our 'priority queue' implementation uses almost the same
goodness function as the current scheduler. The main
difference between our 'priority queue' scheduler and the
current scheduler is the structure of the runqueue. We
break up the single runqueue into a set of priority based
runqueues. The 'goodness' of a task determines what sub-queue
a task is placed in. Tasks with higher goodness values
are placed in higher priority queues than tasks with lower
goodness values. This allows us to limit the scan of runnable
tasks to the highest priority sub-queue, as opposed to the
entire runqueue. When scanning the highest priority sub-queue
we use almost the same goodness function as that which is
used today (it does take CPU affinity into account). In fact,
if we used the same goodness function I'm pretty sure we
would exactly match the behavior of the existing scheduler.
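A rough sketch of that structure (greatly simplified and hypothetical,
not the actual implementation): tasks are binned by goodness at
enqueue time, and only the highest non-empty bin is scanned.

#include <stdio.h>

#define NUM_QUEUES 8	/* hypothetical number of priority sub-queues */

struct task { int goodness; struct task *next; };

/* The single runqueue broken into priority-indexed sub-queues; a
 * task's goodness picks its sub-queue at enqueue time. */
static struct task *subq[NUM_QUEUES];

static void enqueue(struct task *t)
{
	int q = t->goodness;
	if (q < 0)
		q = 0;
	if (q >= NUM_QUEUES)
		q = NUM_QUEUES - 1;
	t->next = subq[q];
	subq[q] = t;
}

/* Only the highest non-empty sub-queue is scanned, instead of the
 * whole runqueue; within it, a goodness()-style comparison (CPU
 * affinity and the like) would break ties. */
static struct task *pick_next(void)
{
	int q;
	for (q = NUM_QUEUES - 1; q >= 0; q--)
		if (subq[q])
			return subq[q];	/* real code scans this sub-queue */
	return NULL;
}

int main(void)
{
	struct task lo = { 2, NULL }, hi = { 6, NULL };
	enqueue(&lo);
	enqueue(&hi);
	printf("picked goodness %d\n", pick_next()->goodness);
	return 0;
}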
Hopefully, Hubertus Franke will speak up and provide more
details, as he is much more familiar with this implementation
than I am.
In any case, I think we have almost reached the conclusion
that our priority queue implementation may not be acceptable
due to the extra overhead in low thread counts.
> one issue regarding multiqueues is the ability of interactive processes to
> preempt other, lower priority processes on other CPUs. These sort of
> things tend to happen while using X. In a system where process priorities
> are different, scheduling decisions cannot be localized per-CPU
> (unfortunately), and 'the right to run' as such is a global thing.
Agreed. This is why our multi-queue scheduler always attempts
to make global decisions. It will preempt lower priority tasks
on other CPUs. This implementation tends to make 'more
localized' scheduling decisions as contention on the runqueue
locks increase. However, at this point one could argue that
we have moved away from a 'realistic' low task count system load.
> lmbench's lat_ctx for example, and other tools in lmbench trigger various
> scheduler workloads as well.
Thanks, I'll add these to our list.
--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
* Re: a quest for a better scheduler
2001-04-03 22:43 ` Mike Kravetz
@ 2001-04-04 0:18 ` Fabio Riccardi
2001-04-04 2:47 ` Mike Kravetz
2001-04-04 6:53 ` Ingo Molnar
2001-04-04 6:28 ` Ingo Molnar
1 sibling, 2 replies; 52+ messages in thread
From: Fabio Riccardi @ 2001-04-04 0:18 UTC (permalink / raw)
To: Mike Kravetz; +Cc: Ingo Molnar, frankeh, Linux Kernel List, Alan Cox
[-- Attachment #1: Type: text/plain, Size: 1288 bytes --]
Dear all,
I've spent my afternoon running some benchmarks to see if MQ patches would
degrade performance in the "normal case".
To measure performance I've used the latest lmbench, and I have measured
the kernel compile times on a dual Pentium III box running at 1 GHz with
a 133 MHz bus.
Results (attached) show that there is no measurable difference in
performance between a vanilla scheduler and a multiqueue scheduler when
running only a few processes (the compilation benchmark runs essentially
two processes, one per CPU).
I have measured the HP and not the "scalability" patch because the two
do more or less the same thing and give me the same performance
advantages, but the former is a lot simpler and I could port it with no
effort to any recent kernel. It is indeed interesting to see that this
patch was originally designed for a 2.4.0-test10 kernel and still works
fine on the latest kernels; only a minor change (one line) was required.
A version of the patch for the 2.4.2-ac26 kernel is attached to this
message.
Given the zero impact on "normal case" performance and the huge positive
impact (> 20%) in the heavy load case (70-80 runnable processes out of a
load of about 1400 total), I don't see why such a thing shouldn't be
accepted in the mainstream scheduler.
- Fabio
[-- Attachment #2: mqpatch242ac26 --]
[-- Type: text/plain, Size: 7102 bytes --]
--- sched.c.orig Tue Mar 27 17:30:58 2001
+++ sched.c Tue Apr 3 16:45:21 2001
@@ -34,6 +34,7 @@
extern void timer_bh(void);
extern void tqueue_bh(void);
extern void immediate_bh(void);
+static inline void hop_queues(struct task_struct *, int);
/*
* scheduler variables
@@ -90,7 +91,8 @@
spinlock_t runqueue_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; /* inner */
rwlock_t tasklist_lock __cacheline_aligned = RW_LOCK_UNLOCKED; /* outer */
-static LIST_HEAD(runqueue_head);
+static struct list_head runqueue_head[NR_CPUS] = { LIST_HEAD_INIT((runqueue_head[0]))};
+static LIST_HEAD(rt_queue_head);
/*
* We align per-CPU scheduling data on cacheline boundaries,
@@ -100,12 +102,15 @@
struct schedule_data {
struct task_struct * curr;
cycles_t last_schedule;
+ struct list_head runqueue_head;
} schedule_data;
char __pad [SMP_CACHE_BYTES];
-} aligned_data [NR_CPUS] __cacheline_aligned = { {{&init_task,0}}};
+} aligned_data [NR_CPUS] __cacheline_aligned = { {{&init_task,0,
+ LIST_HEAD_INIT((aligned_data[0].schedule_data.runqueue_head))}}};
#define cpu_curr(cpu) aligned_data[(cpu)].schedule_data.curr
#define last_schedule(cpu) aligned_data[(cpu)].schedule_data.last_schedule
+#define cpu_rq(cpu) (aligned_data[(cpu)].schedule_data.runqueue_head)
struct kernel_stat kstat;
@@ -199,6 +204,33 @@
return goodness(p, cpu, prev->active_mm) - goodness(prev, cpu, prev->active_mm);
}
+
+static inline int other_goodness(struct task_struct * p, int this_cpu, struct mm_struct *this_mm)
+{
+ int weight;
+
+ /*
+ * select the current process after every other
+ * runnable process, but before the idle thread.
+ * Also, dont trigger a counter recalculation.
+ *
+ * Give the process a first-approximation goodness value
+ * according to the number of clock-ticks it has left.
+ *
+ * Don't do any other calculations if the time slice is
+ * over..
+ */
+ weight = p->counter;
+ if (!weight)
+ goto out2;
+
+ /* .. and a slight advantage to the current MM */
+ if (p->mm == this_mm || !p->mm)
+ weight += 1;
+ weight += 20 - p->nice;
+out2:
+ return weight;
+}
/*
* This is ugly, but reschedule_idle() is very timing-critical.
* We are called with the runqueue spinlock held and we must
@@ -266,6 +298,10 @@
} else {
if (oldest_idle == -1ULL) {
int prio = preemption_goodness(tsk, p, cpu);
+ /*
+ * this will never be true for < 400 HZ non
+ * realtime. optimize this? SAR
+ */
if (prio > max_prio) {
max_prio = prio;
@@ -277,6 +313,10 @@
tsk = target_tsk;
if (tsk) {
if (oldest_idle != -1ULL) {
+ /* push onto best queue */
+ if (p->policy == SCHED_OTHER){
+ hop_queues(p, tsk->processor);
+ }
best_cpu = tsk->processor;
goto send_now_idle;
}
@@ -306,20 +346,28 @@
*/
static inline void add_to_runqueue(struct task_struct * p)
{
- list_add(&p->run_list, &runqueue_head);
+ if (p->policy == SCHED_OTHER){
+ list_add(&p->run_list, &cpu_rq(p->processor));
+ } else list_add(&p->run_list, &rt_queue_head);
nr_running++;
}
-static inline void move_last_runqueue(struct task_struct * p)
+static inline void move_last_rt_queue(struct task_struct * p)
+{
+ list_del(&p->run_list);
+ list_add_tail(&p->run_list, &rt_queue_head);
+}
+
+static inline void move_first_rt_queue(struct task_struct * p)
{
list_del(&p->run_list);
- list_add_tail(&p->run_list, &runqueue_head);
+ list_add(&p->run_list, &rt_queue_head);
}
-static inline void move_first_runqueue(struct task_struct * p)
+static inline void hop_queues(struct task_struct * p, int this_cpu)
{
list_del(&p->run_list);
- list_add(&p->run_list, &runqueue_head);
+ list_add(&p->run_list, &cpu_rq(this_cpu));
}
/*
@@ -343,6 +391,7 @@
if (task_on_runqueue(p))
goto out;
add_to_runqueue(p);
+ /* LATER : make an effort to choose rq before add ? */
if (!synchronous || !(p->cpus_allowed & (1 << smp_processor_id())))
reschedule_idle(p);
success = 1;
@@ -531,9 +580,9 @@
asmlinkage void schedule(void)
{
struct schedule_data * sched_data;
- struct task_struct *prev, *next, *p;
+ struct task_struct *prev, *next, *p = NULL; /* LATER fix this */
struct list_head *tmp;
- int this_cpu, c;
+ int this_cpu, c, i;
spin_lock_prefetch(&runqueue_lock);
@@ -592,18 +641,63 @@
goto still_running;
still_running_back:
- list_for_each(tmp, &runqueue_head) {
+ /*
+ * we unrolled the original combined runqueue in to two separate
+ * checks, first the real time and then then policy OTHER for own
+ * cpu, and finally theft from another cpu.
+ */
+ list_for_each(tmp, &rt_queue_head) {
p = list_entry(tmp, struct task_struct, run_list);
- if (can_schedule(p, this_cpu)) {
- int weight = goodness(p, this_cpu, prev->active_mm);
+ if (can_schedule(p, this_cpu) && !(p->policy & SCHED_YIELD)) {
+ int weight = 1000 + p->rt_priority;
if (weight > c)
c = weight, next = p;
}
}
+ if (c >= 1000)
+ goto choice_made;
+
+ list_for_each(tmp, &cpu_rq(this_cpu)) {
+ p = list_entry(tmp, struct task_struct, run_list);
+ if (can_schedule(p, this_cpu) && !(p->policy & SCHED_YIELD)) {
+ int weight = other_goodness(p, this_cpu, prev->active_mm);
+ if (weight > c)
+ c = weight, next = p;
+ }
+ }
+
+#ifdef CONFIG_SMP
+ if (c > 0)
+ goto choice_made;
+
+ /*
+ * try to steal from another CPU's queue. since we don't have to
+ * worry about real time or CPU preference, pick anything available.
+ * this is an area for detailed policy.
+ */
+ for (i = 0; i < smp_num_cpus; i++) {
+ int cpu = cpu_logical_map(i);
+ if (cpu == this_cpu)
+ continue;
+
+ list_for_each(tmp, &cpu_rq(cpu)) {
+ p = list_entry(tmp, struct task_struct, run_list);
+ if (can_schedule(p, this_cpu) && !(p->policy & SCHED_YIELD)) {
+ int weight = other_goodness(p, cpu, prev->active_mm);
+ if (weight > c)
+ c = weight, next = p;
+ }
+ }
+ }
+
+ if (c > 0)
+ hop_queues(next, this_cpu); /* pull onto mine */
+#endif
/* Do we need to re-calculate counters? */
if (!c)
goto recalculate;
+choice_made:
/*
* from this point on nothing can prevent us from
* switching to the next task, save this fact in
@@ -705,7 +799,7 @@
move_rr_last:
if (!prev->counter) {
prev->counter = NICE_TO_TICKS(prev->nice);
- move_last_runqueue(prev);
+ move_last_rt_queue(prev);
}
goto move_rr_back;
@@ -938,8 +1032,14 @@
retval = 0;
p->policy = policy;
p->rt_priority = lp.sched_priority;
- if (task_on_runqueue(p))
- move_first_runqueue(p);
+ if (task_on_runqueue(p)){
+ if (policy != SCHED_OTHER)
+ move_first_rt_queue(p);
+ else {
+ /* push onto appropriate non-rt queue */
+ hop_queues(p, p->processor);
+ }
+ }
current->need_resched = 1;
@@ -1251,9 +1351,12 @@
* process right in SMP mode.
*/
int cpu = smp_processor_id();
- int nr;
+ int nr, i;
init_task.processor = cpu;
+ for (i=1; i<NR_CPUS; i++){
+ INIT_LIST_HEAD(&cpu_rq(i));
+ }
for(nr = 0; nr < PIDHASH_SZ; nr++)
pidhash[nr] = NULL;
[-- Attachment #3: compile_times --]
[-- Type: text/plain, Size: 4871 bytes --]
Kernel compilation times
========================
three runs of "time make -j2"
--
Linux 2.4.2-ac26 + HP Multiqueue patch
216.34user 14.36system 2:00.03elapsed 192%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691020minor)pagefaults 0swaps
216.53user 14.23system 1:57.91elapsed 195%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691020minor)pagefaults 0swaps
217.65user 13.46system 1:58.05elapsed 195%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691020minor)pagefaults 0swaps
--
Linux 2.4.2-ac26
220.07user 14.88system 2:02.67elapsed 191%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691019minor)pagefaults 0swaps
220.31user 14.90system 2:00.64elapsed 194%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691018minor)pagefaults 0swaps
220.58user 14.84system 2:00.57elapsed 195%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (482269major+691018minor)pagefaults 0swaps
LMBENCH-2BETA1
==============
First two: Linux 2.4.2-ac26 + HP Multiqueue patch
Second two: Linux 2.4.2-ac26
Processor, Processes - times in microseconds - smaller is better
----------------------------------------------------------------
Host OS Mhz null null open selct sig sig fork exec sh
call I/O stat clos TCP inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ----- ---- ---- ---- ---- ----
skinny Linux 2.4.2-a 997 0.34 0.56 3.96 5.04 27 0.89 2.72 245 1128 4044
skinny Linux 2.4.2-a 997 0.34 0.57 4.19 5.32 25 0.89 2.71 247 1150 4067
skinny Linux 2.4.2-a 997 0.34 0.58 3.90 5.00 25 0.89 2.69 249 1121 3968
skinny Linux 2.4.2-a 997 0.34 0.57 3.90 5.01 25 0.87 2.70 246 1126 4018
Context switching - times in microseconds - smaller is better
-------------------------------------------------------------
Host OS 2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
--------- ------------- ----- ------ ------ ------ ------ ------- -------
skinny Linux 2.4.2-a 1.820 4.7700 12 8.0800 109 27 110
skinny Linux 2.4.2-a 1.890 4.7300 20 6.6500 109 27 110
skinny Linux 2.4.2-a 1.620 4.5900 12 6.7000 109 24 109
skinny Linux 2.4.2-a 1.700 4.6400 12 7.0600 109 26 109
*Local* Communication latencies in microseconds - smaller is better
-------------------------------------------------------------------
Host OS 2p/0K Pipe AF UDP RPC/ TCP RPC/ TCP
ctxsw UNIX UDP TCP conn
--------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
skinny Linux 2.4.2-a 1.820 7.390 15 16 52 23 91 55
skinny Linux 2.4.2-a 1.890 7.185 14 16 41 23 56 54
skinny Linux 2.4.2-a 1.620 6.793 15 16 40 21 56 54
skinny Linux 2.4.2-a 1.700 6.801 15 16 40 21 55 54
File & VM system latencies in microseconds - smaller is better
--------------------------------------------------------------
Host OS 0K File 10K File Mmap Prot Page
Create Delete Create Delete Latency Fault Fault
--------- ------------- ------ ------ ------ ------ ------- ----- -----
skinny Linux 2.4.2-a 6.2054 0.7192 12 1.6973 451 0.672 2.00000
skinny Linux 2.4.2-a 6.2469 0.7360 12 1.7142 465 0.668 2.00000
skinny Linux 2.4.2-a 3.1992 0.7182 12 1.5857 449 0.680 2.00000
skinny Linux 2.4.2-a 6.1576 0.7119 12 1.5817 448 0.669 2.00000
*Local* Communication bandwidths in MB/s - bigger is better
-----------------------------------------------------------
Host OS Pipe AF TCP File Mmap Bcopy Bcopy Mem Mem
UNIX reread reread (libc) (hand) read write
--------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
skinny Linux 2.4.2-a 823 233 162 434 558 271 218 558 282
skinny Linux 2.4.2-a 828 289 243 408 558 269 216 558 281
skinny Linux 2.4.2-a 839 285 249 445 558 222 147 558 219
skinny Linux 2.4.2-a 811 284 242 445 558 222 147 558 219
Memory latencies in nanoseconds - smaller is better
(WARNING - may not be correct, check graphs)
---------------------------------------------------
Host OS Mhz L1 $ L2 $ Main mem Guesses
--------- ------------- ---- ----- ------ -------- -------
skinny Linux 2.4.2-a 997 3.009 7.0230 101
skinny Linux 2.4.2-a 997 3.010 7.0220 101
skinny Linux 2.4.2-a 997 3.010 7.0280 101
skinny Linux 2.4.2-a 997 3.009 7.0290 101
* Re: a quest for a better scheduler
2001-04-04 0:18 ` Fabio Riccardi
@ 2001-04-04 2:47 ` Mike Kravetz
2001-04-04 4:21 ` Fabio Riccardi
2001-04-04 6:53 ` Ingo Molnar
1 sibling, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2001-04-04 2:47 UTC (permalink / raw)
To: Fabio Riccardi
Cc: Mike Kravetz, Ingo Molnar, frankeh, Linux Kernel List, Alan Cox
On Tue, Apr 03, 2001 at 05:18:03PM -0700, Fabio Riccardi wrote:
>
> I have measured the HP and not the "scalability" patch because the two do more
> or less the same thing and give me the same performance advantages, but the
> former is a lot simpler and I could port it with no effort on any recent
> kernel.
Actually, there is a significant difference between the HP patch and
the one I developed. In the HP patch, if there is a schedulable task
on the 'local' (current CPU) runqueue it will ignore runnable tasks on
other (remote) runqueues. In the multi-queue patch I developed, the
scheduler always attempts to make the same global scheduling decisions
as the current scheduler.
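That difference can be sketched in a few hypothetical lines (a toy
model like the local-first sketch earlier in the thread; locking is
ignored entirely, and this is not the actual patch):

#include <stdio.h>

#define NR_CPUS 4

struct task { int goodness; struct task *next; };

static struct task *cpu_rq[NR_CPUS];	/* one runqueue per CPU */

static struct task *best_of(struct task *q)
{
	struct task *best = NULL;
	for (; q; q = q->next)
		if (!best || q->goodness > best->goodness)
			best = q;
	return best;
}

/* Unlike a local-first scheduler, every queue's best candidate is
 * weighed on every decision, so the winner matches what one global
 * runqueue would have chosen. */
static struct task *mq_pick_next(int this_cpu)
{
	struct task *best = best_of(cpu_rq[this_cpu]);
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		struct task *t;
		if (cpu == this_cpu)
			continue;
		t = best_of(cpu_rq[cpu]);
		if (t && (!best || t->goodness > best->goodness))
			best = t;	/* would be stolen over to this_cpu */
	}
	return best;
}

int main(void)
{
	struct task a = { 10, NULL }, b = { 30, NULL };
	cpu_rq[0] = &a;
	cpu_rq[1] = &b;

	/* Here the remote goodness-30 task wins, exactly where the
	 * local-first sketch picked the goodness-10 local task. */
	printf("cpu0 picks goodness %d\n", mq_pick_next(0)->goodness);
	return 0;
}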
--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
* Re: a quest for a better scheduler
2001-04-04 2:47 ` Mike Kravetz
@ 2001-04-04 4:21 ` Fabio Riccardi
2001-04-04 17:27 ` Mike Kravetz
0 siblings, 1 reply; 52+ messages in thread
From: Fabio Riccardi @ 2001-04-04 4:21 UTC (permalink / raw)
To: Mike Kravetz; +Cc: Ingo Molnar, frankeh, Linux Kernel List, Alan Cox
I was actually suspecting that the extra lines in your patch were there for a
reason :)
A few questions:
What is the real impact of a (slight) change in scheduling semantics?
Under which situations should one notice a difference?
As you state in your papers, the global decision comes with a cost: is it
worth it?
Could you make a port of your thing to recent kernels?
I tried and I failed, and I don't have enough time to figure out why;
that should be trivial for you, though.
TIA, ciao,
- Fabio
Mike Kravetz wrote:
> On Tue, Apr 03, 2001 at 05:18:03PM -0700, Fabio Riccardi wrote:
> >
> > I have measured the HP and not the "scalability" patch because the two do more
> > or less the same thing and give me the same performance advantages, but the
> > former is a lot simpler and I could port it with no effort on any recent
> > kernel.
>
> Actually, there is a significant difference between the HP patch and
> the one I developed. In the HP patch, if there is a schedulable task
> on the 'local' (current CPU) runqueue it will ignore runnable tasks on
> other (remote) runqueues. In the multi-queue patch I developed, the
> scheduler always attempts to make the same global scheduling decisions
> as the current scheduler.
>
> --
> Mike Kravetz mkravetz@sequent.com
> IBM Linux Technology Center
* Re: a quest for a better scheduler
2001-04-04 4:21 ` Fabio Riccardi
@ 2001-04-04 17:27 ` Mike Kravetz
0 siblings, 0 replies; 52+ messages in thread
From: Mike Kravetz @ 2001-04-04 17:27 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: Ingo Molnar, frankeh, Linux Kernel List, Alan Cox
On Tue, Apr 03, 2001 at 09:21:57PM -0700, Fabio Riccardi wrote:
> I was actually suspecting that the extra lines in your patch were there for a
> reason :)
>
> A few questions:
>
> What is the real impact of a (slight) change in scheduling semantics?
>
> Under which situation one should notice a difference?
I assume your questions are directed at the difference between local
and global scheduling decisions. As with most things the impact of
these differences depends on workload. Most multi-queue scheduler
implementations make local scheduling decisions and depend on load
balancing for system wide fairness. Schedulers which make global
decisions handle load balancing via their global policy.
The HP multi-queue implementation you are using performs no load
balancing. Therefore, in a worst case situation you could have
low priority tasks sharing one CPU while high priority tasks are
sharing another. To be fair, I have talked to the HP people and
this scheduler was never intended to be a general purpose solution.
Therefore, it makes little sense to comment on its merits as such.
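For illustration only, a toy balancing pass of the kind a purely local
multi-queue scheduler would otherwise need (entirely hypothetical;
neither patch discussed in this thread balances this way):

#include <stdio.h>

#define NR_CPUS 4

/* Toy model: track only runqueue lengths. */
static int rq_len[NR_CPUS] = { 5, 0, 1, 0 };

/* One balancing pass: move a task from the longest to the shortest
 * queue while the imbalance exceeds one task.  A real balancer must
 * pick a migratable task and respect priority and CPU affinity. */
static void balance(void)
{
	for (;;) {
		int i, max = 0, min = 0;
		for (i = 1; i < NR_CPUS; i++) {
			if (rq_len[i] > rq_len[max]) max = i;
			if (rq_len[i] < rq_len[min]) min = i;
		}
		if (rq_len[max] - rq_len[min] <= 1)
			break;
		rq_len[max]--;
		rq_len[min]++;
	}
}

int main(void)
{
	int i;
	balance();
	for (i = 0; i < NR_CPUS; i++)
		printf("cpu%d: %d runnable\n", i, rq_len[i]);
	return 0;
}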
> As you state in your papers the global decision comes with a cost,
> is it worth it?
My primary motivation for attempting to perform the same global
decisions as the current scheduler was so that meaningful comparisons
could be made. Also, by using the same global decision policy I
was able to avoid the issue of load balancing. In most multi-queue
implementations, load balancing algorithms take considerable effort
to get working in a reasonably well-performing manner.
>
> Could you make a port of your thing on recent kernels?
There is a 2.4.2 patch on the web page. I'll put out a 2.4.3 patch
as soon as I get some time.
--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
* Re: a quest for a better scheduler
2001-04-04 0:18 ` Fabio Riccardi
2001-04-04 2:47 ` Mike Kravetz
@ 2001-04-04 6:53 ` Ingo Molnar
2001-04-04 16:12 ` Davide Libenzi
1 sibling, 1 reply; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 6:53 UTC (permalink / raw)
To: Fabio Riccardi
Cc: Mike Kravetz, frankeh, Linux Kernel List, Alan Cox,
Linus Torvalds
On Tue, 3 Apr 2001, Fabio Riccardi wrote:
> I've spent my afternoon running some benchmarks to see if MQ patches
> would degrade performance in the "normal case".
no doubt priority-queue can run almost as fast as the current scheduler.
What i'm worried about is the restriction of the 'priority' of processes:
it cannot depend on previous processes (and other 'current state')
anymore.
so we have two separate issues:
#1: priority-queue: has the fundamental goodness() design limitation.
#2: per-CPU-runqueues: changes semantics, makes scheduler less
effective due to nonglobal decisions.
about #1: while right now the prev->mm rule appears to be a tiny issue
(it might not affect performance significantly), forbidding it by
hardcoding the assumption into data structures is a limitation of
*future* goodness() functions. Eg. with the possible emergence of
CPU-level threading and other, new multiprocessing technologies, this
could be a *big* mistake.
the Linux scheduler is not designed for the case of 1000 runnable
processes.
Ingo
* Re: a quest for a better scheduler
2001-04-04 6:53 ` Ingo Molnar
@ 2001-04-04 16:12 ` Davide Libenzi
0 siblings, 0 replies; 52+ messages in thread
From: Davide Libenzi @ 2001-04-04 16:12 UTC (permalink / raw)
To: Ingo Molnar
Cc: Linus Torvalds, Alan Cox, Linux Kernel List, frankeh,
Mike Kravetz, Fabio Riccardi
On 04-Apr-2001 Ingo Molnar wrote:
>
> On Tue, 3 Apr 2001, Fabio Riccardi wrote:
>
>> I've spent my afternoon running some benchmarks to see if MQ patches
>> would degrade performance in the "normal case".
>
> no doubt priority-queue can run almost as fast as the current scheduler.
> What i'm worried about is the restriction of the 'priority' of processes:
> it cannot depend on previous processes (and other 'current state')
> anymore.
>
> so we have two separate issues:
>
>#1: priority-queue: has the fundamental goodness() design limitation.
>
>#2: per-CPU-runqueues: changes semantics, makes scheduler less
> effective due to nonglobal decisions.
>
> about #1: while right now the prev->mm rule appears to be a tiny issue (it
> might not affect performance significantly), forbidding it by
> hardcoding the assumption into data structures is a limitation of *future*
> goodness() functions. Eg. with the possible emergence of CPU-level
> threading and other new multiprocessing technologies, this could be a
> *big* mistake.
This is not correct, Ingo. I haven't seen the HP code but if You store processes
in slots S :
S = FS( goodness(p, p->processor, p->mm) )
and You start scanning from the highest slots, as soon as You find a task with a
goodness G' that is equal to the max goodness in the slot, You can choose that
process to run.
Again, if You haven't found such a goodness during the slot scan but You've
found a task with a goodness G' :
G' >= SG - DD
where :
SG = max slot goodness
DD = SG(i) - SG(i - 1)
You can select that task as the next to spin.
This was the behaviour that was implemented in my scheduler patch about 2 years
ago.
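a minimal sketch of that scan (slot layout and the slot_max_goodness()
and slot_task() helpers are made up, not the actual patch code):

#define NSLOTS 64

static struct list_head slot[NSLOTS];	/* hypothetical slot array */

static struct task_struct *pick_next(int this_cpu, struct mm_struct *this_mm)
{
	int i;

	for (i = NSLOTS - 1; i >= 0; i--) {
		int sg = slot_max_goodness(i);	/* SG, hypothetical helper */
		int dd = i ? sg - slot_max_goodness(i - 1) : sg;	/* DD */
		struct task_struct *best = NULL;
		int best_g = -1;
		struct list_head *pos;

		list_for_each(pos, &slot[i]) {
			struct task_struct *p = slot_task(pos);	/* hypothetical */
			int g = goodness(p, this_cpu, this_mm);

			if (g == sg)
				return p;	/* max goodness in slot: take it */
			if (g > best_g) {
				best_g = g;
				best = p;
			}
		}
		if (best && best_g >= sg - dd)	/* G' >= SG - DD */
			return best;
	}
	return NULL;
}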
Besides this, I think that with such loads We've more serious problems to face
inside the kernel.
- Davide
* Re: a quest for a better scheduler
2001-04-03 22:43 ` Mike Kravetz
2001-04-04 0:18 ` Fabio Riccardi
@ 2001-04-04 6:28 ` Ingo Molnar
1 sibling, 0 replies; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 6:28 UTC (permalink / raw)
To: Mike Kravetz; +Cc: frankeh, Fabio Riccardi, Linux Kernel List
On Tue, 3 Apr 2001, Mike Kravetz wrote:
> Our 'priority queue' implementation uses almost the same goodness
> function as the current scheduler. The main difference between our
> 'priority queue' scheduler and the current scheduler is the structure
> of the runqueue. We break up the single runqueue into a set of
> priority based runqueues. The 'goodness' of a task determines what
> sub-queue a task is placed in. Tasks with higher goodness values are
> placed in higher priority queues than tasks with lower goodness
> values. [...]
we are talking about the same thing, re-read my mail. this design assumes
that 'goodness' is constant in the sense that it does not depend on the
previous process.
and no, you are not using the same goodness() function: you omitted the
prev->mm == next->mm component of goodness(), due to this design
limitation.
Ingo
* Re: a quest for a better scheduler
2001-04-03 2:23 Fabio Riccardi
2001-04-03 8:55 ` Ingo Molnar
@ 2001-04-03 12:31 ` Alan Cox
2001-04-04 0:33 ` Fabio Riccardi
1 sibling, 1 reply; 52+ messages in thread
From: Alan Cox @ 2001-04-03 12:31 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: linux-kernel
> Is there any special reason why any of those patches didn't make it to
> the mainstream kernel code?
All of them are worse for the normal case. Also 1500 running apaches isn't
a remotely useful situation; you are thrashing the cache even if you are now
not thrashing the scheduler. Use an httpd designed for that situation. Then
you can also downgrade to a cheap pentium class box for the task ;)
* Re: a quest for a better scheduler
2001-04-03 12:31 ` Alan Cox
@ 2001-04-04 0:33 ` Fabio Riccardi
2001-04-04 0:35 ` Alan Cox
2001-04-04 11:51 ` Ingo Molnar
0 siblings, 2 replies; 52+ messages in thread
From: Fabio Riccardi @ 2001-04-04 0:33 UTC (permalink / raw)
To: Alan Cox; +Cc: linux-kernel
Alan,
for the "normal case" performance see my other message.
I agree that a better threading model would surely help in a web server, but to
me this is not an excuse to put up with a broken scheduler.
The X15 server I'm working on now is a sort of user-space TUX; it uses only 8
threads per CPU and achieves the same level of performance as the kernel-space
TUX. Even in this case the performance advantage of the multiqueue
scheduler is remarkable, especially on a multi-CPU (> 2) architecture.
To achieve some decent disk/CPU/network overlapping with the current linux
blocking disk IO limitations there is no way to avoid a "bunch of server
threads". I've (experimentally) figured out that 8-16 threads per CPU can
assure some reasonable overlapping, depending on the memory size and disk
subsystem speed. On a 8-way machine this means 64-128 active tasks, a total
disaster with the current scheduler.
Unless we want to maintain the position that the only way to achieve good
performance is to embed server applications in the kernel, some minimal help
should be provided to well-intentioned user applications :)
TIA, ciao,
- Fabio
Alan Cox wrote:
> > Is there any special reason why any of those patches didn't make it to
> > the mainstream kernel code?
>
> All of them are worse for the normal case. Also 1500 running apache's isnt
> a remotely useful situation, you are thrashing the cache even if you are now
> not thrashing the scheduler. Use an httpd designed for that situation. Then
> you can also downgrade to a cheap pentium class box for the task ;)
* Re: a quest for a better scheduler
2001-04-04 0:33 ` Fabio Riccardi
@ 2001-04-04 0:35 ` Alan Cox
2001-04-04 1:17 ` Fabio Riccardi
2001-04-04 11:57 ` Ingo Molnar
2001-04-04 11:51 ` Ingo Molnar
1 sibling, 2 replies; 52+ messages in thread
From: Alan Cox @ 2001-04-04 0:35 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: Alan Cox, linux-kernel
> for the "normal case" performance see my other message.
I did - and with a lot of interest
> I agree that a better threading model would surely help in a web server, but to
> me this is not an excuse to put up with a broken scheduler.
The problem has always been - alternative scheduler, crappier performance for
2 tasks running (which is most boxes). If your numbers are right then the
HP patch is working as well for 1 or 2 tasks too
> Unless we want to maintain the position that the only way to achieve good
> performance is to embed server applications in the kernel, some minimal help
> should be provided to well-intentioned user applications :)
Indeed. I'd love to see you beat tux entirely in userspace. It proves the
rest of the API for the kernel is right
* Re: a quest for a better scheduler
2001-04-04 0:35 ` Alan Cox
@ 2001-04-04 1:17 ` Fabio Riccardi
2001-04-04 1:50 ` Christopher Smith
2001-04-04 11:57 ` Ingo Molnar
1 sibling, 1 reply; 52+ messages in thread
From: Fabio Riccardi @ 2001-04-04 1:17 UTC (permalink / raw)
To: Alan Cox; +Cc: linux-kernel
Alan Cox wrote:
> > for the "normal case" performance see my other message.
>
> I did - and with a lot of interest
thanks! :)
> > I agree that a better threading model would surely help in a web server, but to
> > me this is not an excuse to put up with a broken scheduler.
>
> The problem has always been - alternative scheduler, crappier performance for
> 2 tasks running (which is most boxes). If your numbers are right then the
> HP patch is working as well for 1 or 2 tasks too
Please verify them if you have a couple of spare hours.
BTW: I measured similar results for the "scalability" patches on a 2.4.1 kernel, it
would be worth the effort to seriously compare them from an architectural point of
view, but I don't have the time right now...
> > Unless we want to maintain the position that the only way to achieve good
> > performance is to embed server applications in the kernel, some minimal help
> > should be provided to well-intentioned user applications :)
>
> Indeed. I'd love to see you beat tux entirely in userspace. It proves the
> rest of the API for the kernel is right
Indeed, I'm using RT sigio/sigwait event scheduling, bare clone threads and
zero-copy io.
If only I had a really asynchronous sendfile, or a smarter madvise that wouldn't
require mapping files :)
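the basic shape of that RT-signal event loop, for the curious (bare
sketch, not X15 source; handle_ready_fd() is made up):

#define _GNU_SOURCE	/* for F_SETSIG */
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

#define IO_SIG	(SIGRTMIN + 1)

extern void handle_ready_fd(int fd);	/* hypothetical handler */

static void setup_async(int fd)
{
	fcntl(fd, F_SETOWN, getpid());	/* deliver events to this process */
	fcntl(fd, F_SETSIG, IO_SIG);	/* use a queued realtime signal */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}

static void event_loop(void)
{
	sigset_t set;
	siginfo_t info;

	sigemptyset(&set);
	sigaddset(&set, IO_SIG);
	sigprocmask(SIG_BLOCK, &set, NULL);	/* collect via sigwaitinfo() */

	for (;;) {
		if (sigwaitinfo(&set, &info) < 0)
			continue;		/* e.g. EINTR */
		handle_ready_fd(info.si_fd);	/* fd that became ready */
	}
}

no signal handler runs at all: readiness events queue up as RT signals
and are consumed synchronously, one at a time.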
My server cannot execute dynamic stuff yet; it relies on Apache for that.
Running X15 and TUX in the same conditions (i.e. dynamic code in Apache) I get
exactly the same score in both cases.
I'm adding a TUX-like dynamic interface; I hope to get it to work by next week, then
I'll make a real comparison.
Regards, ciao,
- Fabio
* Re: a quest for a better scheduler
2001-04-04 1:17 ` Fabio Riccardi
@ 2001-04-04 1:50 ` Christopher Smith
0 siblings, 0 replies; 52+ messages in thread
From: Christopher Smith @ 2001-04-04 1:50 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: linux-kernel
--On Tuesday, April 03, 2001 18:17:30 -0700 Fabio Riccardi
<fabio@chromium.com> wrote:
> Alan Cox wrote:
> Indeed, I'm using RT sigio/sigwait event scheduling, bare clone threads
> and zero-copy io.
Fabio, I'm working on a similar solution, although I'm experimenting with
SGI's KAIO patch to see what it can do. I've had to patch the kernel to
implement POSIX-style signal dispatch semantics (so that the thread which
posted an I/O request doesn't have to be the one which catches the signal).
Are you taking a similar approach, or is the lack of this behavior the
reason you are using so many threads?
--Chris
* Re: a quest for a better scheduler
2001-04-04 0:35 ` Alan Cox
2001-04-04 1:17 ` Fabio Riccardi
@ 2001-04-04 11:57 ` Ingo Molnar
1 sibling, 0 replies; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 11:57 UTC (permalink / raw)
To: Alan Cox; +Cc: Fabio Riccardi, linux-kernel
On Wed, 4 Apr 2001, Alan Cox wrote:
> The problem has always been - alternative scheduler, crappier
> performance for 2 tasks running (which is most boxes). [...]
it's not only the 2-task case, but also less flexibility or lost
semantics.
> Indeed. I'd love to see you beat tux entirely in userspace. It proves
> the rest of the API for the kernel is right
well, until the cost of entry into the kernel is eliminated, this is not
possible - unless there are performance bugs in TUX :-)
but yes, getting a userspace solution that gets 'close enough' in eg.
SPECweb99 benchmarks (which is complex enough to be trusted as a generic
performance metric) would be a nice thing to have. There are existing
SIGIO based, multithreaded solutions (eg. phttpd), with varying success.
Ingo
* Re: a quest for a better scheduler
2001-04-04 0:33 ` Fabio Riccardi
2001-04-04 0:35 ` Alan Cox
@ 2001-04-04 11:51 ` Ingo Molnar
1 sibling, 0 replies; 52+ messages in thread
From: Ingo Molnar @ 2001-04-04 11:51 UTC (permalink / raw)
To: Fabio Riccardi; +Cc: Alan Cox, Linux Kernel List
On Tue, 3 Apr 2001, Fabio Riccardi wrote:
> I agree that a better threading model would surely help in a web
> server, but to me this is not an excuse to put up with a broken
> scheduler.
believe me, there are many other parts of the kernel that are not
optimized for the nutcase. In this case it's the scheduler that punishes
you first. Do not shoot the messenger.
> The X15 server I'm working on now is a sort of user-space TUX; it uses
> only 8 threads per CPU and achieves the same level of performance
> as the kernel-space TUX. Even in this case the performance advantage
> of the multiqueue scheduler is remarkable, especially on a multi-CPU
> (> 2) architecture.
then your design is still bad. TUX has no problems with the scheduler, and
TUX doesn't have many runnable processes at once. And this has nothing to
do with TUX being within the kernel.
> To achieve some decent disk/CPU/network overlapping with the current
> linux blocking disk IO limitations there is no way to avoid a "bunch
> of server threads". [...]
false.
> [...] I've (experimentally) figured out that 8-16 threads per CPU can
> assure some reasonable overlapping, depending on the memory size and
> disk subsystem speed. On a 8-way machine this means 64-128 active
> tasks, [...]
if they are doing IO *only* then there won't be 64-128 active tasks. This
is how TUX does async IO: helper threads do the IO and *only* the IO. This
way cache locality is maximized. (the scheduler is irrelevant in this
case.) Again you are blaming the scheduler - the poor scheduler is simply
the first kernel subsystem that tells you: your design still sucks. (if
you have 64-128 active tasks then you already pay a very big cache
footprint price as well.) [I'm wondering how your solution was just as
fast as TUX, although you claim that the scheduler was already killing
things. Perhaps your test was done over localhost or was saturating
network bandwidth?]
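that pattern is simple enough to sketch (illustrative only, not TUX
source; complete_request() is made up):

#include <pthread.h>
#include <unistd.h>

struct io_req {
	int fd;
	void *buf;
	size_t len;
	ssize_t result;
	struct io_req *next;
};

extern void complete_request(struct io_req *req);	/* hypothetical */

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qwait = PTHREAD_COND_INITIALIZER;
static struct io_req *qhead;

/* a small fixed pool of these runs; they block, nobody else does */
static void *io_helper(void *unused)
{
	for (;;) {
		struct io_req *req;

		pthread_mutex_lock(&qlock);
		while (!qhead)
			pthread_cond_wait(&qwait, &qlock);
		req = qhead;
		qhead = req->next;
		pthread_mutex_unlock(&qlock);

		/* the only blocking call: plain disk read */
		req->result = read(req->fd, req->buf, req->len);
		complete_request(req);	/* notify the event loop */
	}
	return NULL;
}

only the handful of helpers ever sleep in read(), so the runqueue stays
short no matter how many connections are open.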
> Unless we want to maintain the position that the only way to achieve
> good performance is to embed server applications in the kernel, [...]
FYI, TUX related functionality is constantly being pushed out and made
accessible to userspace as well. Witness zerocopy, IRQ-affinity, and tons
of generic patches such as pagecache-scalability and timer-scalability.
(and soon the remaining, TUX-private improvements too.) TUX is intended to
be a testbed for kernel subsystems, where performance optimizations and
APIs can be tested without having to export them to userspace. (exporting
to userspace cannot be done lightly and makes things less flexible.)
Ingo
Thread overview: 52+ messages
2001-04-04 13:43 a quest for a better scheduler Hubertus Franke
2001-04-04 13:25 ` Ingo Molnar
2001-04-04 13:34 ` Ingo Molnar
2001-04-04 15:08 ` Andrea Arcangeli
2001-04-04 16:50 ` [Lse-tech] " Kanoj Sarcar
2001-04-04 17:16 ` Andrea Arcangeli
2001-04-04 17:49 ` Kanoj Sarcar
2001-04-04 18:00 ` Andrea Arcangeli
2001-04-05 11:13 ` Zdenek Kabelac
2001-04-04 16:39 ` Kanoj Sarcar
2001-04-04 17:00 ` Andrea Arcangeli
2001-04-04 15:44 ` Khalid Aziz
2001-04-04 15:55 ` [Lse-tech] " Christoph Hellwig
-- strict thread matches above, loose matches on Subject: below --
2001-04-18 14:50 Yoav Etsion
2001-04-06 13:15 Hubertus Franke
2001-04-05 23:53 Torrey Hoffman
2001-04-05 23:01 Hubertus Franke
2001-04-04 19:06 Hubertus Franke
2001-04-04 17:17 Hubertus Franke
2001-04-04 15:36 Hubertus Franke
2001-04-04 15:28 Hubertus Franke
2001-04-04 14:03 Hubertus Franke
2001-04-04 13:23 ` Ingo Molnar
2001-04-04 22:16 ` Tim Wright
2001-04-04 22:54 ` Christopher Smith
2001-04-05 22:38 ` Timothy D. Witham
2001-04-06 3:27 ` Christopher Smith
2001-04-06 18:06 ` Timothy D. Witham
2001-04-06 21:08 ` Michael Peddemors
2001-04-06 22:33 ` Nathan Straz
2001-04-04 15:12 ` Andrea Arcangeli
2001-04-04 15:49 ` Khalid Aziz
2001-04-04 6:36 alad
2001-04-03 2:23 Fabio Riccardi
2001-04-03 8:55 ` Ingo Molnar
2001-04-03 19:13 ` Mike Kravetz
2001-04-03 18:47 ` Ingo Molnar
2001-04-03 22:43 ` Mike Kravetz
2001-04-04 0:18 ` Fabio Riccardi
2001-04-04 2:47 ` Mike Kravetz
2001-04-04 4:21 ` Fabio Riccardi
2001-04-04 17:27 ` Mike Kravetz
2001-04-04 6:53 ` Ingo Molnar
2001-04-04 16:12 ` Davide Libenzi
2001-04-04 6:28 ` Ingo Molnar
2001-04-03 12:31 ` Alan Cox
2001-04-04 0:33 ` Fabio Riccardi
2001-04-04 0:35 ` Alan Cox
2001-04-04 1:17 ` Fabio Riccardi
2001-04-04 1:50 ` Christopher Smith
2001-04-04 11:57 ` Ingo Molnar
2001-04-04 11:51 ` Ingo Molnar