From: Erich Focht <efocht@ess.nec.de>
To: "Martin J. Bligh" <mbligh@aracnet.com>
Cc: Michael Hohnbaum <hohnbaum@us.ibm.com>,
mingo@redhat.com, habanero@us.ibm.com,
linux-kernel@vger.kernel.org, lse-tech@lists.sourceforge.net
Subject: Re: NUMA scheduler (was: 2.5 merge candidate list 1.5)
Date: Mon, 28 Oct 2002 17:34:40 +0100
Message-ID: <200210281734.41115.efocht@ess.nec.de>
In-Reply-To: <3128418467.1035736310@[10.10.2.3]>
On Monday 28 October 2002 01:31, Martin J. Bligh wrote:
> OK, so I'm trying to read your patch 1, fairly unsuccessfully
> (seems to be a lot more complex than Michael's).
>
> Can you explain pool_lock? It does actually seem to work, but
> it's rather confusing ....
The pool data is needed to be able to loop over the CPUs of a single
node only. I'm convinced we'll need to do that at some point, no matter
how simple the core of the NUMA scheduler is.
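To illustrate, a minimal sketch; the identifiers (numpools, pool_ptr,
pool_cpus) and the helper are invented for this mail, not the names
from the patch:

/* Hypothetical pool data: CPU ids grouped by node, plus offsets. */
static int numpools;                   /* one pool per node               */
static int pool_ptr[MAX_NUMNODES + 1]; /* first index of each node's CPUs */
static int pool_cpus[NR_CPUS];         /* CPU ids, sorted by node         */

/*
 * Example use: walk the CPUs of one node without touching the rest.
 * cpu_rq() is the per-CPU runqueue accessor from kernel/sched.c.
 */
static int pool_nr_running(int node)
{
        int i, sum = 0;

        for (i = pool_ptr[node]; i < pool_ptr[node + 1]; i++)
                sum += cpu_rq(pool_cpus[i])->nr_running;
        return sum;
}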
The pool_lock protects that data while it is being built. This may
happen more often in the future, once somebody starts hotplugging CPUs.
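In the same made-up terms, the writer side just wraps the rebuild in
the lock, so a hotplug event can never expose a half-built table:

static spinlock_t pool_lock = SPIN_LOCK_UNLOCKED;

void build_pools(void)
{
        spin_lock(&pool_lock);
        /* recompute numpools, pool_ptr[] and pool_cpus[] from the
         * set of CPUs that are currently online ... */
        spin_unlock(&pool_lock);
}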
> build_pools() has a comment above it saying:
>
> +/*
> + * Call pooldata_lock() before calling this function and
> + * pooldata_unlock() after!
> + */
>
> But then you promptly call pooldata_lock inside build_pools
> anyway ... looks like it's just a naff comment, but doesn't
> help much.
Sorry, the comment is left over from an earlier version...
> just block). If you really still need to do this, RCU is now
> in the kernel ;-) If not, can we just chuck all that stuff?
I'm preparing a core patch which doesn't need the pool_lock. I'll send it
out today.
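The RCU-style approach Martin suggests would look roughly like this
(a sketch only, with invented names, not the actual patch; it assumes
the RCU primitives that just went into 2.5):

struct pools {
        int numpools;
        int pool_ptr[MAX_NUMNODES + 1];
        int pool_cpus[NR_CPUS];
};
static struct pools *cur_pools;

/*
 * Publish a freshly built table, then free the old one once every
 * reader is done with it; readers need no lock at all.
 */
void update_pools(struct pools *newp)
{
        struct pools *old = cur_pools;

        smp_wmb();            /* table contents visible before pointer */
        cur_pools = newp;
        synchronize_kernel(); /* wait out readers still using old      */
        kfree(old);
}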
Regards,
Erich
Thread overview: 33+ messages
2002-10-23 21:26 Crunch time -- the musical. (2.5 merge candidate list 1.5) Rob Landley
2002-10-24 16:17 ` Michael Hohnbaum
[not found] ` <200210240750.09751.landley@trommello.org>
2002-10-24 19:01 ` Michael Hohnbaum
2002-10-24 21:51 ` Erich Focht
2002-10-24 22:38 ` Martin J. Bligh
2002-10-25 8:15 ` Erich Focht
2002-10-25 23:26 ` Martin J. Bligh
2002-10-25 23:45 ` Martin J. Bligh
2002-10-26 0:02 ` Martin J. Bligh
2002-10-26 18:58 ` Martin J. Bligh
2002-10-26 19:14 ` NUMA scheduler (was: 2.5 " Martin J. Bligh
2002-10-27 18:16 ` Martin J. Bligh
2002-10-28 0:32 ` Erich Focht
2002-10-27 23:52 ` Martin J. Bligh
2002-10-28 0:55 ` [Lse-tech] " Michael Hohnbaum
2002-10-28 4:23 ` Martin J. Bligh
2002-10-28 0:31 ` Martin J. Bligh
2002-10-28 16:34 ` Erich Focht [this message]
2002-10-28 16:57 ` Martin J. Bligh
2002-10-28 17:26 ` Erich Focht
2002-10-28 17:35 ` Martin J. Bligh
2002-10-29 0:07 ` [Lse-tech] " Erich Focht
2002-10-28 0:46 ` Martin J. Bligh
2002-10-28 17:11 ` Erich Focht
2002-10-28 18:32 ` Martin J. Bligh
2002-10-28 17:38 ` Erich Focht
2002-10-28 17:36 ` Martin J. Bligh
2002-10-28 23:49 ` Erich Focht
2002-10-29 0:00 ` Martin J. Bligh
2002-10-29 1:12 ` Gerrit Huizenga
2002-10-29 22:39 ` Erich Focht
2002-10-28 7:16 ` Martin J. Bligh
2002-10-25 14:46 ` Crunch time -- the musical. (2.5 " Kevin Corry