From: "Peter Wächtler" <pwaechtler@loewe-komp.de>
To: Roberto Fichera <kernel@tekno-soft.it>
Cc: David Schwartz <davids@webmaster.com>, linux-kernel@vger.kernel.org
Subject: Re: Developing multi-threading applications
Date: Thu, 13 Jun 2002 11:44:33 +0200
Message-ID: <3D086981.5030109@loewe-komp.de>
In-Reply-To: <5.1.1.6.0.20020613095304.00a6fc60@mail.tekno-soft.it> <5.1.1.6.0.20020613104128.02c119a0@mail.tekno-soft.it>
Roberto Fichera wrote:
> At 01.26 13/06/02 -0700, you wrote:
>
>> On Thu, 13 Jun 2002 10:13:35 +0200, Roberto Fichera wrote:
>>
>> >I'm designing a multithreaded application with many threads,
>> >from ~100 to 300/400. I need to take some decisions about
>> >which threading library to use, and which kernel patch I need
>> >to improve scheduler performance. The machines will be SMP
>> >Xeons with 4/8 processors and 4Gb RAM.
>> >All the threads are fairly computation intensive, and the library
>> >needs fast interprocess communication and synchronization,
>> >because there are many sync & async threads that are time
>> >dependent and/or critical. I'm planning, in the future, to distribute
>> >all the threads over a pool of SMP boxes.
>>
>> With 4/8 processors, you don't want to create 100-400 threads
>> doing computation-intensive tasks. So redesign things so that the
>> number of threads you create is more in line with the number of
>> CPUs you have available. That is, use a 'thread per CPU' (or
>> slightly more threads than there are CPUs per node) approach and
>> you'll perform a lot better. Distribute the available work over
>> the available threads.
>
>
> You are right! But "computation intensive" is not totally right, as I
> said ;-), because most of the threads are waiting for I/O. After the
> I/O, the computation-intensive tasks are performed; when a thread has
> finished its work, all its results are sent to its thread-father. The
> father collects all the children's results, performs some computational
> work of its own, and sends its result up to its own father, and so on,
> with many thread-fathers controlling other children. So I think the
> main problem/overhead is thread creation and the number of threads.
>
Have a look at http://www-124.ibm.com/developerworks/opensource/pthreads/
They provide an M:N threading model in which threads can live in userspace.
Thread overview: 19+ messages
2002-06-13 8:13 Developing multi-threading applications Roberto Fichera
2002-06-13 8:26 ` David Schwartz
2002-06-13 9:08 ` Roberto Fichera
2002-06-13 9:44 ` Peter Wächtler [this message]
2002-06-13 9:52 ` Roberto Fichera
2002-06-13 10:16 ` Peter Wächtler
2002-06-13 10:42 ` Roberto Fichera
2002-06-13 10:13 ` David Schwartz
2002-06-13 11:21 ` Roberto Fichera
2002-06-13 11:58 ` David Schwartz
2002-06-13 16:26 ` Roberto Fichera
2002-06-14 20:56 ` David Schwartz
2002-06-15 9:01 ` Roberto Fichera
2002-06-15 10:30 ` Ingo Oeser
2002-06-17 8:17 ` Roberto Fichera
2002-06-17 16:07 ` Marco Colombo
2002-06-17 18:00 ` Roberto Fichera
2002-06-17 18:55 ` Jakob Oestergaard
[not found] <20020613113158.I22429@nightmaster.csn.tu-chemnitz.de>
2002-06-13 10:25 ` Roberto Fichera