* Is it bad to have lots of sleeping tasks?
@ 2001-08-24 19:39 Donald Dade
2001-08-24 20:05 ` Alan Cox
0 siblings, 1 reply; 7+ messages in thread
From: Donald Dade @ 2001-08-24 19:39 UTC (permalink / raw)
To: linux-kernel
Hello list,
I have an application that creates a pool of 1000 threads that sleep() for
various intervals, and then wake up and do something. I also have another
version of the same app that uses a single thread together with program
logic to do essentially the same thing: sleeping until it is time to wake up
and do something. I'm surprised to see that the overhead of having 1000
threads that each use a timer is only about 60% of the overhead of the most
efficient attempt that I could muster using a single thread.
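[Editor's note: a minimal sketch of the thread-per-timer pattern described above, assuming POSIX threads (which LinuxThreads provides). The names and the scaled-down pool size are illustrative, not from the original post.]

```c
#include <pthread.h>
#include <unistd.h>

/* Each worker sleeps for its interval, then "does something".
   While sleeping it sits in TASK_INTERRUPTIBLE and is invisible
   to the scheduler's run-queue walk. */
static void *worker(void *arg)
{
    usleep(10000);   /* stand-in for the various sleep() intervals */
    return arg;
}

/* Spawn n sleeping workers and wait for them all; returns how many
   were successfully joined. The post uses 1000; scaled down here. */
int run_pool(int n)
{
    pthread_t tid[n];
    int joined = 0;
    for (long i = 0; i < n; i++)
        if (pthread_create(&tid[i], NULL, worker, (void *)i) != 0)
            return joined;
    for (int i = 0; i < n; i++)
        if (pthread_join(tid[i], NULL) == 0)
            joined++;
    return joined;
}
```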
Since the pool size will need to go from 1000 to 6000 (technical hurdles
aside), I was wondering if I can expect the kernel to continue to scale as
well. I think that I understand why it has scaled so well thus far, and I'd
like some type of validation/invalidation: I have a boatload of threads, but
all except a very small number will be in the TASK_INTERRUPTIBLE state
instead of TASK_RUNNING, and therefore the scheduler doesn't have to walk a
list of 1000+ processes, just 2 or 3.
My questions are these:
1) Are there any kernel structures that are O(n) or just as bad, where n is
the total number of processes, not just the total number of runnable
processes? i.e. will the performance of the system be adversely affected by
the number of tasks in wait queues?
2) If I have 1000 threads, and each calls sleep(), I assume that my
motherboard cannot handle 1000 timers in hardware, so the kernel has to
somehow keep track of a timer by polling and sending SIGALRM to the owning
process when each expires, which causes glibc's sleep() to somehow return.
The question is this - how can asking the kernel to manage 1000 timers be so
much more efficient than asking it to manage just 1? Or better yet, how can
I load the system down with 1000, and see absolutely *NO* demonstrable
difference in the system's responsiveness?
3) I am tempted to cajole linuxthreads to do that many, or get clone() to
play nicely with glibc, or still use linuxthreads and have 6 processes with
1000 each that share memory, but don't want to figure it all out, just to
see it not perform well. So, if the kernel impresses the hell out of me at
1000, will it at 6000? Or is a pool of 6000 threads, sleeping or not, just
crazy, which was my first impression?
Thanks to any and all,
Don Dade
"I'm not going back to win32. Ever."
* Re: Is it bad to have lots of sleeping tasks?
2001-08-24 19:39 Is it bad to have lots of sleeping tasks? Donald Dade
@ 2001-08-24 20:05 ` Alan Cox
2001-08-24 20:13 ` Hua Zhong
0 siblings, 1 reply; 7+ messages in thread
From: Alan Cox @ 2001-08-24 20:05 UTC (permalink / raw)
To: ddade; +Cc: linux-kernel
> My questions are these:
> 1) Are there any kernel structures that are O(n) or just as bad, where n is
> the total number of processes, not just the total number of runnable
> processes? i.e. will the performance of the system be adversely affected by
> the number of tasks in wait queues?
Linus' scheduler is pretty dire beyond about 8 runnable threads, but very
good below that. It also has a refresh loop that is O(n) in the total number
of tasks, which is strange, and actually looks easy to eliminate.
The critical bit is the number of threads runnable at any given time. When
that is low, as it is in almost all normal workloads, the performance of the
scheduler is very good indeed.
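[Editor's note: the point about runnable tasks can be made concrete with a sketch in the spirit of the 2.4 pick loop. This is illustrative code with hypothetical types, not the actual goodness() implementation: the cost is O(n) in *runnable* tasks only, because the 1000 sleepers are not on the run queue at all.]

```c
#include <stddef.h>

struct task {
    int counter;          /* remaining timeslice, used as "goodness" here */
    struct task *next;    /* link in the run queue */
};

/* Pick the runnable task with the best goodness by walking the run
   queue. Sleeping tasks never appear on this list, so their number
   does not affect the cost of a schedule. */
struct task *pick_next(struct task *runqueue)
{
    struct task *best = NULL;
    for (struct task *p = runqueue; p; p = p->next)
        if (!best || p->counter > best->counter)
            best = p;
    return best;
}
```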
> 2) If I have 1000 threads, and each calls sleep(), I assume that my
...
> difference in the system's responsiveness?
Read kernel/timer.c.
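[Editor's note: for readers without the source handy, kernel/timer.c of that era implements a bucketed "timer wheel". The real one cascades several levels; the single-level sketch below shows only the core idea: adding a timer is O(1), and each tick touches one slot, so cost tracks expiries rather than the total number of pending timers. All names are illustrative.]

```c
#include <stddef.h>

#define WHEEL_SIZE 256   /* one slot per future tick, modulo the wheel */

struct timer {
    unsigned long expires;        /* tick at which the timer fires */
    void (*fn)(void *);           /* callback, may be NULL in this sketch */
    void *data;
    struct timer *next;
};

static struct timer *wheel[WHEEL_SIZE];
static unsigned long now;         /* current tick */

/* Adding a timer is O(1): hash its expiry into a slot. */
void add_timer(struct timer *t)
{
    unsigned slot = (unsigned)(t->expires % WHEEL_SIZE);
    t->next = wheel[slot];
    wheel[slot] = t;
}

/* Each tick inspects exactly one slot and fires only the timers that
   actually expire now -- 1000 pending sleepers cost nothing per tick
   until their slot comes around. */
int run_tick(void)
{
    int fired = 0;
    unsigned slot = (unsigned)(now % WHEEL_SIZE);
    struct timer **pp = &wheel[slot];
    while (*pp) {
        struct timer *t = *pp;
        if (t->expires == now) {
            *pp = t->next;        /* unlink and fire */
            if (t->fn)
                t->fn(t->data);
            fired++;
        } else {
            pp = &t->next;        /* keep: same slot, later revolution */
        }
    }
    now++;
    return fired;
}
```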
Alan
* Re: Is it bad to have lots of sleeping tasks?
2001-08-24 20:05 ` Alan Cox
@ 2001-08-24 20:13 ` Hua Zhong
2001-08-24 20:32 ` Kurt Garloff
2001-08-24 21:15 ` Alan Cox
0 siblings, 2 replies; 7+ messages in thread
From: Hua Zhong @ 2001-08-24 20:13 UTC (permalink / raw)
To: ddade, Alan Cox; +Cc: linux-kernel
> Linus' scheduler is pretty dire beyond about 8 runnable threads, but very
> good below that. It also has a refresh loop that is O(n) in the total number
> of tasks, which is strange, and actually looks easy to eliminate.
So why not do it? Or implement a nicer scheduler? There are many good
ones. There are O(1) schedulers that provide much better proportional
sharing. They scale and also perform well even in the "few running processes"
case. They are also not hard to implement (I once implemented such a
scheduler with a 100-line patch, and that fitted into the existing Linux
runqueue framework). What's the resistance to scheduler changes?
> The critical bit is the number of threads runnable at any given time. When
> that is low, as it is in almost all normal workloads, the performance of the
> scheduler is very good indeed.
>
> > 2) If I have 1000 threads, and each calls sleep(), I assume that my
> ...
> > difference in the system's responsiveness?
>
> Read kernel/timer.c.
>
> Alan
* Re: Is it bad to have lots of sleeping tasks?
2001-08-24 20:13 ` Hua Zhong
@ 2001-08-24 20:32 ` Kurt Garloff
2001-08-24 21:19 ` Alan Cox
2001-08-24 21:15 ` Alan Cox
1 sibling, 1 reply; 7+ messages in thread
From: Kurt Garloff @ 2001-08-24 20:32 UTC (permalink / raw)
To: Hua Zhong; +Cc: ddade, Alan Cox, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 984 bytes --]
On Fri, Aug 24, 2001 at 01:13:04PM -0700, Hua Zhong wrote:
> > Linus' scheduler is pretty dire beyond about 8 runnable threads, but very
> > good below that. It also has a refresh loop that is O(n) in the total number
> > of tasks, which is strange, and actually looks easy to eliminate.
>
> So why not do it? Or implement a nicer scheduler? There are many good
> ones. There are O(1) schedulers that provide much better proportional
> sharing. They scale and also perform well even in the "few running processes"
> case. They are also not hard to implement (I once implemented such a
> scheduler with a 100-line patch, and that fitted into the existing Linux
> runqueue framework). What's the resistance to scheduler changes?
Expect Larry to jump on you.
Regards,
--
Kurt Garloff <garloff@suse.de> Eindhoven, NL
GPG key: See mail header, key servers Linux kernel development
SuSE GmbH, Nuernberg, DE SCSI, Security
[-- Attachment #2: Type: application/pgp-signature, Size: 232 bytes --]
* Re: Is it bad to have lots of sleeping tasks?
2001-08-24 20:32 ` Kurt Garloff
@ 2001-08-24 21:19 ` Alan Cox
0 siblings, 0 replies; 7+ messages in thread
From: Alan Cox @ 2001-08-24 21:19 UTC (permalink / raw)
To: Kurt Garloff; +Cc: Hua Zhong, ddade, Alan Cox, linux-kernel
> > scheduler with a 100-line patch, and that fitted into the existing Linux
> > runqueue framework). What's the resistance to scheduler changes?
>
> Expect Larry to jump on you.
If you can implement an O(1) scheduler that is faster than Linus' scheduler
for the normal case and for scalable cases, I expect Larry would probably
jump up and down happily instead.
The current scheduler does make some very bad decisions.
* Re: Is it bad to have lots of sleeping tasks?
2001-08-24 20:13 ` Hua Zhong
2001-08-24 20:32 ` Kurt Garloff
@ 2001-08-24 21:15 ` Alan Cox
1 sibling, 0 replies; 7+ messages in thread
From: Alan Cox @ 2001-08-24 21:15 UTC (permalink / raw)
To: Hua Zhong; +Cc: ddade, Alan Cox, linux-kernel
> So why not do it? Or implement a nicer scheduler? There are many good
> ones. There are O(1) schedulers that provide much better proportional
> sharing. They scale and also perform well even in the "few running processes"
> case. They are also not hard to implement (I once implemented such a
> scheduler with a 100-line patch, and that fitted into the existing Linux
> runqueue framework). What's the resistance to scheduler changes?
The resistance I've seen has been to schedulers that perform more poorly
with < 3 running processes - that being the normal case.
It's also surprisingly hard to find a very simple scheduler that provides
fairness and cache-optimal behaviour while working well on SMP. Uniprocessor
is easy, uniprocessor with real time isn't too bad, SMP gets tricky.
I'd definitely like to see a better scheduler in the kernel, provided it's
as fast for the < 3 processes case too.
Alan
[parent not found: <3B8A88B0.FDCC5ACE@watson.ibm.com>]
* Re: Is it bad to have lots of sleeping tasks?
[not found] <3B8A88B0.FDCC5ACE@watson.ibm.com>
@ 2001-08-27 19:18 ` Alan Cox
0 siblings, 0 replies; 7+ messages in thread
From: Alan Cox @ 2001-08-27 19:18 UTC (permalink / raw)
To: frankeh; +Cc: Alan Cox @vger.kernel.org, ddade, linux-kernel
> Alan is right, I would not be too concerned about the recalculate
> loop. There are patches out that would basically eliminate the
> update of all tasks during recalculate, but they come at an additional
> cost during add_from_runqueue and del_from_runqueue. I don't know
> exactly where the break-even point is where either solution is
> worse or better.
The overhead will be pretty close to zero, since the update will be for a
task that is in local cache, so it's a tiny bit of math from L1 cache with
writes that don't cause stalls.
> We presented this stuff at OLS and since then have improved the
> low running-thread count scenario.
> The MQ is functionally equivalent to the current scheduler
This is one of the things I think we have wrong, although I understand that
keeping the behaviour was part of the goal of your code. The scheduler
should be cache-optimising for CPU soakers at least. That also seems
to simplify the scheduler, not make it more complex.
For example we tend to schedule events badly - consider two CPU-soaking
processes A and B and an editor. Each time you hit a key and wake the
editor, you tend to flip between running A and B.
I'd like to see us end up with an O(1) [for hardware ffz at least] scheduler
that did the right things for cache locality. I've got some ideas but I
don't know how to make them work on SMP.
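[Editor's note: the "O(1) for hardware ffz" idea Alan alludes to is a per-priority run queue plus a bitmap, so picking the best runnable task is a single find-first-set operation, independent of how many tasks exist. A minimal sketch under those assumptions, using a compiler builtin in place of a hardware ffz instruction; all names are illustrative.]

```c
#include <stddef.h>

#define NPRIO 32                  /* fits one 32-bit bitmap word */

struct task { int prio; struct task *next; };

static struct task *queue[NPRIO]; /* queue[0] is the highest priority */
static unsigned long bitmap;      /* bit p set => queue[p] is non-empty */

/* O(1) insertion: push onto the per-priority list, mark the bitmap. */
void enqueue(struct task *t)
{
    t->next = queue[t->prio];
    queue[t->prio] = t;
    bitmap |= 1UL << t->prio;
}

/* O(1) pick: one find-first-set over the bitmap replaces the
   O(runnable) walk of the list-based scheduler. */
struct task *pick_next(void)
{
    if (!bitmap)
        return NULL;
    int p = __builtin_ctzl(bitmap);   /* index of the lowest set bit */
    struct task *t = queue[p];
    queue[p] = t->next;
    if (!queue[p])
        bitmap &= ~(1UL << p);
    return t;
}
```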
> Alan, could you suggest some benchmarks to run for 2-way systems,
> that have low thread counts, are easily reproducible (no big setup and
> large system configuration required) that would help us make the point
> for a larger set of applications than we have targeted.
Lmbench can be useful for small-scale numbers, although it isn't currently
intended to measure SMP. Perhaps Larry can comment on that.
We certainly both agree the scheduler needs work.
Alan