From: "Huang, Ying" <ying.huang@intel.com>
To: Stefan Richter <stefanr@s5r6.in-berlin.de>
Cc: Greg K-H <greg@kroah.com>,
Cornelia Huck <cornelia.huck@de.ibm.com>,
Adrian Bunk <bunk@stusta.de>,
david@lang.hm, David Miller <davem@davemloft.net>,
Duncan Sands <duncan.sands@math.u-psud.fr>,
Phillip Susi <psusi@cfl.rr.com>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] driver core: multithreaded probing - more parallelism control
Date: Fri, 22 Jun 2007 09:52:38 +0000 [thread overview]
Message-ID: <1182505958.12161.0.camel@caritas-dev.intel.com> (raw)
In-Reply-To: <467AA596.5030509@s5r6.in-berlin.de>
On Thu, 2007-06-21 at 18:21 +0200, Stefan Richter wrote:
> Parallelism between subsystems may be interesting during boot ==
> "coldplug", /if/ the machine has time-consuming devices to probe on
> /different/ types of buses. Of course some machines do the really
> time-consuming stuff on only one type of bus. Granted, parallelism
> between subsystems is not very interesting anymore later after boot ==
> "hotplug".
Yes. So I think there are two possible solutions.
1. Create one set of probing queues for each subsystem (or perhaps only
for the subsystems that need it), so that the probing queue IDs are
local to each subsystem.
2. There is only one set of probing queues in the whole system, and the
probing queue IDs are shared between subsystems. Each subsystem can
select a random starting queue ID (maybe named start_queue_id) and
allocate its queue IDs from that point on (start_queue_id +
private_queue_id). This reduces the probability of two subsystems
sharing a queue ID.
> (The old FireWire stack will re-enter the main loop of the bus scanning
> thread sometime after a bus reset event signaled that nodes or units may
> have appeared or disappeared. The new FireWire stack will schedule
> respective scanning workqueue jobs after such an event.)
I think a workqueue is better than a kernel thread here. With a kernel
thread, the nodes and units may need to be scanned again and again if
many units/nodes appear at almost the same time, while with a
workqueue, only the jobs that are needed get scheduled.
And a workqueue like the probing queue, whose threads can be
created/destroyed on demand, will save more resources than an ordinary
workqueue. :)
Best Regards,
Huang Ying
Thread overview: 14+ messages
[not found] <1182373258.30574.30.camel@caritas-dev.intel.com>
2007-06-20 15:09 ` [PATCH] driver core: multithreaded probing - more parallelism control Stefan Richter
2007-06-20 15:14 ` Stefan Richter
2007-06-21 9:38 ` Huang, Ying
2007-06-21 8:49 ` Stefan Richter
2007-06-21 13:51 ` Huang, Ying
2007-06-21 16:21 ` Stefan Richter
2007-06-22 9:52 ` Huang, Ying [this message]
2007-07-03 15:04 ` Cornelia Huck
2007-06-24 7:06 ` Greg KH
2007-06-24 9:38 ` Stefan Richter
2007-06-24 15:04 ` [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying
2007-06-25 8:16 ` Greg KH
2007-07-03 9:33 ` Cornelia Huck
2007-06-21 10:17 [PATCH] driver core: multithreaded probing - more parallelism control Huang, Ying