From: Andi Kleen <ak@muc.de>
To: Anton Blanchard <anton@samba.org>
Cc: Andi Kleen <ak@muc.de>,
linux-kernel@vger.kernel.org, lse-tech@lse.sourceforge.net
Subject: Re: NUMA API observations
Date: Tue, 15 Jun 2004 01:49:58 +0200
Message-ID: <20040614234958.GC90963@colin2.muc.de>
In-Reply-To: <20040614214003.GE25389@krispykreme>
On Tue, Jun 15, 2004 at 07:40:04AM +1000, Anton Blanchard wrote:
>
> > interleave should always fall back to other nodes. Very weird.
> > Needs to be investigated. What were the actual arguments passed
> > to the syscalls?
>
> This one looks like a bug in my code. I wasn't setting numnodes high
> enough, so the node fallback lists weren't being initialised for some
> nodes.
Ok. Good to know.

That's a bad generic bug then, right? Interleaving doesn't really do
anything different from an ordinary allocation, except that the
numa_node_id() index into the zone table is replaced with a different
node number.
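To illustrate what I mean (a standalone sketch, not the actual
mm/mempolicy.c code; the names here are made up): interleave just picks
the next allowed node round-robin and then allocates from that node's
zonelist exactly like a normal allocation would, so an uninitialised
fallback list for a node breaks it the same way it would break any
allocation targeted at that node.

#include <stdio.h>

#define MAX_NODES 8

/* Pick the next allowed node round-robin from a node bitmap.
 * An "ordinary" allocation would simply use the local node id here. */
static int interleave_node(unsigned long allowed, unsigned *counter)
{
        int i, start = (*counter)++ % MAX_NODES;

        for (i = 0; i < MAX_NODES; i++) {
                int nid = (start + i) % MAX_NODES;
                if (allowed & (1UL << nid))
                        return nid;
        }
        return 0;               /* empty mask: fall back to node 0 */
}

int main(void)
{
        unsigned counter = 0;
        unsigned long allowed = 0x5;    /* nodes 0 and 2 allowed */
        int i;

        for (i = 0; i < 6; i++)
                printf("allocation %d -> node %d\n", i,
                       interleave_node(allowed, &counter));
        return 0;
}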
> > > My kernel is compiled with NR_CPUS=128, the setaffinity syscall must be
> > > called with a bitmap at least as big as the kernel's cpumask_t. I will
> > > submit a patch for this shortly.
> >
> > Umm, what a misfeature. We size the buffer up to the biggest
> > running CPU. That should be enough.
> >
> > IMHO that's just a kernel bug. How should a user space
> > application sanely discover the cpumask_t size needed by the kernel?
> > Whoever designed that was on crack.
>
> glibc now uses a select-style interface. Unfortunately the interface has
> changed about three times by now.
I have no plans to track the glibc interface of the week for this,
and numactl must run with older glibc anyway, which is why I have always
used my own stub for this. I am not sure they even solved the problem
completely. With the upcoming numactl version it should work.
What I wonder is why IA64 worked though. We tested on it previously,
but somehow didn't run into this. The regression test suite
needs to check this better.
-Andi