From: "Donald Dade" <don_dade@hotmail.com>
To: linux-kernel@vger.kernel.org
Subject: Is it bad to have lots of sleeping tasks?
Date: Fri, 24 Aug 2001 12:39:42 -0700
Message-ID: <F45F3VcuiDqPkMseLHP00010e68@hotmail.com>
Hello list,
I have an application that creates a pool of 1000 threads that sleep() for
various intervals, and then wake up and do something. I also have another
version of the same app that uses a single thread together with program
logic to do essentially the same thing: sleeping until it is time to wake up
and do something. I'm surprised to see that the overhead of having 1000
threads that each use a timer is only about 60% of the overhead of the most
efficient attempt that I could muster using a single thread.
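To make the comparison concrete, the thread version boils down to roughly
this (a simplified sketch: do_work() and the interval scheme just stand in
for the real logic, and error checking is dropped):

#include <pthread.h>
#include <unistd.h>

#define NTHREADS 1000

static void do_work(long id)
{
        /* placeholder for the real per-thread work */
}

static void *worker(void *arg)
{
        long id = (long)arg;
        unsigned int interval = 1 + id % 30;    /* "various intervals" */

        for (;;) {
                sleep(interval);        /* sits in TASK_INTERRUPTIBLE */
                do_work(id);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        long i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)i);
        pause();        /* the main thread just sleeps too */
        return 0;
}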
Since the pool size will need to grow from 1000 to 6000 (technical hurdles
aside), I was wondering whether I can expect the kernel to continue to scale
this well. I think I understand why it has scaled so well thus far, and I'd
like some validation (or invalidation): I have a boatload of threads, but
all except a very small number will be in the TASK_INTERRUPTIBLE state
instead of TASK_RUNNING, and therefore the scheduler doesn't have to walk a
list of 1000+ processes, just 2 or 3.
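For reference, the single-threaded competitor has essentially this shape
(again a simplified sketch with made-up intervals; the real code is smarter
about finding the next deadline, but not by much): keep a table of wakeup
times and sleep until the earliest one.

#include <time.h>
#include <unistd.h>

#define NTIMERS 1000

static time_t deadline[NTIMERS];        /* next wakeup time per "task" */
static unsigned int interval[NTIMERS];

static void do_work(int id)
{
        /* placeholder for the real per-task work */
}

int main(void)
{
        int i;
        time_t now = time(NULL);

        for (i = 0; i < NTIMERS; i++) {
                interval[i] = 1 + i % 30;
                deadline[i] = now + interval[i];
        }

        for (;;) {
                int next = 0;

                /* find the earliest deadline: O(n) on every pass */
                for (i = 1; i < NTIMERS; i++)
                        if (deadline[i] < deadline[next])
                                next = i;

                now = time(NULL);
                if (deadline[next] > now)
                        sleep(deadline[next] - now);

                do_work(next);
                deadline[next] += interval[next];
        }
        return 0;
}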
My questions are these:
1) Are there any kernel structures that are O(n) or worse, where n is the
total number of processes rather than just the number of runnable ones?
I.e., will the performance of the system be adversely affected by the
number of tasks sitting on wait queues?
2) If I have 1000 threads and each calls sleep(), I assume that my
motherboard cannot provide 1000 timers in hardware, so the kernel must
somehow track each timer by polling and send SIGALRM to the owning process
when it expires, which is what makes glibc's sleep() return. The question is
this: how can asking the kernel to manage 1000 timers be so much more
efficient than asking it to manage just 1? Or better yet, how can I load the
system down with 1000 of them and see absolutely *NO* demonstrable
difference in the system's responsiveness?
3) I am tempted to cajole linuxthreads into handling that many threads, or
to get raw clone() to play nicely with glibc (a sketch of what I mean
follows below), or to keep linuxthreads and run 6 processes of 1000 threads
each that share memory. But I don't want to figure all of that out just to
watch it perform badly. So, if the kernel impresses the hell out of me at
1000, will it still at 6000? Or is a pool of 6000 threads, sleeping or not,
just crazy, as was my first impression?
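For the clone() option above, I mean spawning the workers directly, along
these lines (only a sketch: the flag set mimics what linuxthreads uses,
STACK_SIZE is an arbitrary number I picked, and error handling is omitted):

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <stdlib.h>

#define STACK_SIZE (16 * 1024)  /* arbitrary for the sketch */

static int worker(void *arg)
{
        long id = (long)arg;

        for (;;) {
                sleep(1 + id % 30);
                /* do the work here */
        }
        return 0;
}

int main(void)
{
        long i;

        for (i = 0; i < 6000; i++) {
                char *stack = malloc(STACK_SIZE);

                /* the stack grows down on x86, so pass the top of it */
                clone(worker, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND,
                      (void *)i);
        }
        pause();
        return 0;
}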
Thanks to any and all,
Don Dade
"I'm not going back to win32. Ever."