* [BUG] sched: big numa dynamic sched domain memory corruption
@ 2006-07-31 7:07 Paul Jackson
2006-07-31 7:12 ` Ingo Molnar
0 siblings, 1 reply; 13+ messages in thread
From: Paul Jackson @ 2006-07-31 7:07 UTC (permalink / raw)
To: Nick Piggin, Srivatsa Vaddagiri, Suresh Siddha
Cc: Simon.Derr, Jack Steiner, Paul Jackson, linux-kernel
We have hit a bug in the dynamic sched domain setup. It causes random
memory writes via a stale pointer.
I don't entirely understand the code yet, so my description of this
bug may be flawed. I'll do the best I can. Thanks to Jack Steiner
for figuring out what we know so far.
The three systems we are testing on have 128, 224 and 256 CPUs.
They are single core, ia64 SN2 Itanium systems configured with:
CONFIG_NUMA - enabled
CONFIG_SCHED_MC - disabled
CONFIG_SCHED_SMT - disabled
They are running approximately the 2.6.16.* kernel found in SLES10.
We first noticed the problem due to the memory clobbering, and
had to crank up the slab debug code a notch to backtrack to the
apparent original cause. The bug does not cause an immediate
crash or kernel complaint.
In sum, it appears that the large array sched_group_allnodes is
free'd up by arch_destroy_sched_domains() when someone redefines
the cpu_exclusive portion of the cpuset configuration, but some
of the sd->groups are left pointing into the free'd array, causing
the assignment:
sd->groups->cpu_power = power;
to write via a stale sd->groups pointer.
The build_sched_domains() code only rebuilds the sd->groups pointer
to the current sched_group_allnodes array for those cpus that are
in the specified cpu_map. The remaining cpus seem to be left with
stale sd->groups pointers.
The above summary may be too inaccurate to be helpful.
I'll step through the failing scenario in more detail, and hopefully
with fewer inaccuracies.
During the system boot, the initial call to build_sched_domains()
sets up the all-encompassing sched_group_allnodes and the smaller
child domains and groups. So far, all is well. Part of
this initialization includes allocating a large array called
sched_group_allnodes, and for each cpu in the system, initializing
its sd->groups->cpu_power element in the sched_group_allnodes
array.
After boot, we run some commands that create a child cpuset,
with, for this example, cpus 4-8, marked cpu_exclusive.
This calls arch_destroy_sched_domains(), which frees
sched_group_allnodes.
Then this calls build_sched_domains() with a mask including
*all-but* cpus 4-8 (in this example). That call allocates a new
sched_group_allnodes and in the first for_each_cpu_mask() loop,
initializes the sched domain, including sd->groups, for *all-but*
cpus 4-8. The sd->groups for 4-8 are still pointing back at
the now freed original sched_group_allnodes array.
Then we call build_sched_domains() again, with a mask for just
cpus 4-8. It executes the line:
sd->groups->cpu_power = power;
with a stale sd->groups pointer, clobbering the already freed
memory that used to be in the sched_group_allnodes array. For our
situation, we are in the "#ifdef CONFIG_NUMA" variant of this line.
Aha - while writing the above, I had an idea for a possible fix.
The following patch seems to fix this problem, at least for the
above CONFIG on one of the test systems. Though I have no particular
confidence that it is a good patch.
The idea of the patch is to -always- execute the code conditioned by
the "if (... > SD_NODES_PER_DOMAIN*...) {" test on big systems, even
if we happen to be calling build_sched_domains() with a small cpu_map.
---
kernel/sched.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- linux.orig/kernel/sched.c 2006-07-30 23:42:12.182958555 -0700
+++ linux/kernel/sched.c 2006-07-30 23:45:12.513282355 -0700
@@ -5675,12 +5675,13 @@ void build_sched_domains(const cpumask_t
int group;
struct sched_domain *sd = NULL, *p;
cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
+ int cpus_per_node = cpus_weight(nodemask);
cpus_and(nodemask, nodemask, *cpu_map);
#ifdef CONFIG_NUMA
- if (cpus_weight(*cpu_map)
- > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
+ if (cpus_weight(cpu_online_map)
+ > SD_NODES_PER_DOMAIN*cpus_per_node) {
if (!sched_group_allnodes) {
sched_group_allnodes
= kmalloc(sizeof(struct sched_group)
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 7:07 [BUG] sched: big numa dynamic sched domain memory corruption Paul Jackson
@ 2006-07-31 7:12 ` Ingo Molnar
2006-07-31 16:04 ` Siddha, Suresh B
0 siblings, 1 reply; 13+ messages in thread
From: Ingo Molnar @ 2006-07-31 7:12 UTC (permalink / raw)
To: Paul Jackson
Cc: Nick Piggin, Srivatsa Vaddagiri, Suresh Siddha, Simon.Derr,
Jack Steiner, linux-kernel, Andrew Morton
* Paul Jackson <pj@sgi.com> wrote:
> @@ -5675,12 +5675,13 @@ void build_sched_domains(const cpumask_t
> int group;
> struct sched_domain *sd = NULL, *p;
> cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
> + int cpus_per_node = cpus_weight(nodemask);
>
> cpus_and(nodemask, nodemask, *cpu_map);
>
> #ifdef CONFIG_NUMA
> - if (cpus_weight(*cpu_map)
> - > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
> + if (cpus_weight(cpu_online_map)
> + > SD_NODES_PER_DOMAIN*cpus_per_node) {
> if (!sched_group_allnodes) {
> sched_group_allnodes
> = kmalloc(sizeof(struct sched_group)
even if the bug is not fully understood in time, i think we should queue
the patch above for v2.6.18. (with the small nit that you should put the
new cpus_per_node variable under CONFIG_NUMA too, to avoid a compiler
warning)
Acked-by: Ingo Molnar <mingo@elte.hu>
Ingo
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 7:12 ` Ingo Molnar
@ 2006-07-31 16:04 ` Siddha, Suresh B
2006-07-31 16:54 ` Paul Jackson
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Siddha, Suresh B @ 2006-07-31 16:04 UTC (permalink / raw)
To: Ingo Molnar
Cc: Paul Jackson, Nick Piggin, Srivatsa Vaddagiri, Suresh Siddha,
Simon.Derr, Jack Steiner, linux-kernel, Andrew Morton
On Mon, Jul 31, 2006 at 09:12:42AM +0200, Ingo Molnar wrote:
>
> * Paul Jackson <pj@sgi.com> wrote:
>
> > @@ -5675,12 +5675,13 @@ void build_sched_domains(const cpumask_t
> > int group;
> > struct sched_domain *sd = NULL, *p;
> > cpumask_t nodemask = node_to_cpumask(cpu_to_node(i));
> > + int cpus_per_node = cpus_weight(nodemask);
> >
> > cpus_and(nodemask, nodemask, *cpu_map);
> >
> > #ifdef CONFIG_NUMA
> > - if (cpus_weight(*cpu_map)
> > - > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
> > + if (cpus_weight(cpu_online_map)
> > + > SD_NODES_PER_DOMAIN*cpus_per_node) {
> > if (!sched_group_allnodes) {
> > sched_group_allnodes
> > = kmalloc(sizeof(struct sched_group)
>
> even if the bug is not fully understood in time, i think we should queue
> the patch above for v2.6.18. (with the small nit that you should put the
I believe that this problem doesn't happen with the current mainline code.
Paul can you please test the mainline code and confirm? After going through
SLES10 code and current mainline code, my understanding is that SLES10 has
this bug but not mainline.
thanks,
suresh
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 16:04 ` Siddha, Suresh B
@ 2006-07-31 16:54 ` Paul Jackson
2006-07-31 17:15 ` Siddha, Suresh B
2006-07-31 17:04 ` Paul Jackson
2006-08-01 8:25 ` Paul Jackson
2 siblings, 1 reply; 13+ messages in thread
From: Paul Jackson @ 2006-07-31 16:54 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: mingo, nickpiggin, vatsa, suresh.b.siddha, Simon.Derr, steiner,
linux-kernel, akpm
> Paul can you please test the mainline code and confirm?
Sure - which version of Linus and/or Andrew's tree is the minimum
worth testing?
Could you explain why you don't think the mainline has this
problem? I still see the critical code piece there:
#ifdef CONFIG_NUMA
if (cpus_weight(*cpu_map)
> SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
What other critical bugs are fixed between the SLES10 variant
and the mainline?
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 16:04 ` Siddha, Suresh B
2006-07-31 16:54 ` Paul Jackson
@ 2006-07-31 17:04 ` Paul Jackson
2006-08-01 8:25 ` Paul Jackson
2 siblings, 0 replies; 13+ messages in thread
From: Paul Jackson @ 2006-07-31 17:04 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: mingo, nickpiggin, vatsa, suresh.b.siddha, Simon.Derr, steiner,
linux-kernel, akpm
> Paul can you please test the mainline code and confirm?
This will take me a few hours - it requires a bit of
slab debugging scaffolding to detect the memory corruption
in a timely and accurate manner. I'll have to port that
scaffolding forward from its current SLES10 base.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 16:54 ` Paul Jackson
@ 2006-07-31 17:15 ` Siddha, Suresh B
2006-08-02 6:57 ` Paul Jackson
0 siblings, 1 reply; 13+ messages in thread
From: Siddha, Suresh B @ 2006-07-31 17:15 UTC (permalink / raw)
To: Paul Jackson
Cc: Siddha, Suresh B, mingo, nickpiggin, vatsa, Simon.Derr, steiner,
linux-kernel, akpm
On Mon, Jul 31, 2006 at 09:54:29AM -0700, Paul Jackson wrote:
> > Paul can you please test the mainline code and confirm?
>
> Sure - which version of Linus and/or Andrew's tree is the minimum
> worth testing?
>
> Could you explain why you don't think the mainline has this
> problem? I still see the critical code piece there:
>
> #ifdef CONFIG_NUMA
> if (cpus_weight(*cpu_map)
> > SD_NODES_PER_DOMAIN*cpus_weight(nodemask)) {
This code piece is not the culprit. In 2.6.16, the mechanism of setting
up group power for allnodes_domains is wrong (which is actually causing
this issue in the presence of the dynamic sched groups patch) and the mainline
has fixes for all these issues.
> What other critical bugs are fixed between the SLES10 variant
> and the mainline?
Basically SLES10 has to backport all these patches:
sched: fix group power for allnodes_domains
sched_domain: Allocate sched_group structures dynamically
sched: build_sched_domains() fix
thanks,
suresh
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 16:04 ` Siddha, Suresh B
2006-07-31 16:54 ` Paul Jackson
2006-07-31 17:04 ` Paul Jackson
@ 2006-08-01 8:25 ` Paul Jackson
2006-08-01 19:00 ` Siddha, Suresh B
2 siblings, 1 reply; 13+ messages in thread
From: Paul Jackson @ 2006-08-01 8:25 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: mingo, nickpiggin, vatsa, suresh.b.siddha, Simon.Derr, steiner,
linux-kernel, akpm
Andrew, Ingo,
I believe Suresh is correct, and that the mainline no longer
has this bug. So the patch that was buried in my report should
not be applied.
Suresh wrote:
> Paul can you please test the mainline code and confirm?
I have tested the mainline, 2.6.18-rc2-mm1, and I am now nearly certain
you are correct. I did not see the failure. There is a slim chance
that I botched the forward port of the tripwire code needed to catch
this bug efficiently, and so could have missed seeing the bug on that
account.
I will now try backporting the three patches you recommended for
systems on a release such as SLES10:
sched: fix group power for allnodes_domains
sched_domain: Allocate sched_group structures dynamically
sched: build_sched_domains() fix
I wish you well on any further code improvements you have planned for
this code. It's tough to understand, with such issues as many #ifdef's,
an interesting memory layout of the key sched domain arrays that I
didn't see described much in the comments, and a variety of memory
allocation calls that are tough to unravel on error. Portions of
the code could use some more comments, explaining what is going on.
For example, I still haven't figured out exactly what 'cpu_power' means.
The allocations of sched_group_allnodes, sched_group_phys and
sched_group_core are -big- on our ia64 SN2 systems (1024 CPUs),
and could fail once a system has been up for a while and is
getting memory tight and fragmented.
It is not obvious to me from the code or comments just how sched
domains are arranged on various large systems with hyper-threading
(SMT) and/or multiple cores (MC) and/or multiple processor packages
per node, and how scheduling is affected by all this.
This was about the third bug that has come by in this code -- which I
in particular notice when it is someone playing with cpu_exclusive
cpusets who hits the bug. Any kernel backtrace with 'cpuset' buried in
it tends to migrate to my inbox. This latest bug was particularly
nasty, as is usually the case with random memory corruption bugs,
costing us a bunch of hours.
Good luck.
If you are aware of any other fixes/patches besides the above that us
big honkin numa iron SLES10 users need for reliable operation, let me
know.
Thanks.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-08-01 8:25 ` Paul Jackson
@ 2006-08-01 19:00 ` Siddha, Suresh B
2006-08-01 19:16 ` Paul Jackson
0 siblings, 1 reply; 13+ messages in thread
From: Siddha, Suresh B @ 2006-08-01 19:00 UTC (permalink / raw)
To: Paul Jackson
Cc: Siddha, Suresh B, mingo, nickpiggin, vatsa, Simon.Derr, steiner,
linux-kernel, akpm
On Tue, Aug 01, 2006 at 01:25:33AM -0700, Paul Jackson wrote:
> I wish you well on any further code improvements you have planned for
> this code. It's tough to understand, with such issues as many #ifdef's,
> an interesting memory layout of the key sched domain arrays that I
> didn't see described much in the comments, and a variety of memory
> allocation calls that are tough to unravel on error. Portions of
> the code could use some more comments, explaining what is going on.
> For example, I still haven't figured exactly what 'cpu_power' means.
I will add some info to Documentation/sched-domains.txt as well as some
comments to the code where appropriate. I did some cleanup of the code
but unfortunately that got dropped because of some issues. I will repost
that cleanup patch as well.
>
> The allocations of sched_group_allnodes, sched_group_phys and
> sched_group_core are -big- on our ia64 SN2 systems (1024 CPUs),
> and could fail once a system has been up for a while and is
> getting memory tight and fragmented.
I have to agree with you. I have an idea (basically passing cpu_map info
to the functions which determine the group) to solve this issue. Let me
work on it and post a fix.
> It is not obvious to me from the code or comments just how sched
> domains are arranged on various large systems with hyper-threading
> (SMT) and/or multiple cores (MC) and/or multiple processor packages
> per node, and how scheduling is affected by all this.
Enabling SCHED_DOMAIN_DEBUG should at least show how sched domains
and groups are arranged. Adding an example in Documentation might
be a good idea.
>
> This was about the third bug that has come by in it -- which I
> in particular notice when it is someone playing with cpu_exclusive
> cpusets who hits the bug. Any kernel backtrace with 'cpuset' buried in
> it tends to migrate to my inbox. This latest bug was particularly
> nasty, as is usually the case with random memory corruption bugs,
> costing us a bunch of hours.
>
> Good luck.
>
> If you are aware of any other fixes/patches besides the above that us
> big honkin numa iron SLES10 users need for reliable operation, let me
> know.
Will keep you in loop.
thanks,
suresh
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-08-01 19:00 ` Siddha, Suresh B
@ 2006-08-01 19:16 ` Paul Jackson
0 siblings, 0 replies; 13+ messages in thread
From: Paul Jackson @ 2006-08-01 19:16 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: suresh.b.siddha, mingo, nickpiggin, vatsa, Simon.Derr, steiner,
linux-kernel, akpm
Suresh wrote:
> Will keep you in loop.
Good.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-07-31 17:15 ` Siddha, Suresh B
@ 2006-08-02 6:57 ` Paul Jackson
2006-08-02 21:36 ` Siddha, Suresh B
0 siblings, 1 reply; 13+ messages in thread
From: Paul Jackson @ 2006-08-02 6:57 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: mingo, nickpiggin, vatsa, Simon.Derr, steiner, linux-kernel, akpm
Suresh wrote:
> Basically SLES10 has to backport all these patches:
>
> sched: fix group power for allnodes_domains
> sched_domain: Allocate sched_group structures dynamically
> sched: build_sched_domains() fix
A few questions on this, centered around the question of which dynamic
sched domain patches we should backport to SLES10.
Readers not seriously interested in sched domains on 2.6.16.* kernels
will probably want to ignore this post.
Is the first of the above three patches the one needed to fix the "big
numa dynamic sched domain memory corruption" that started off this
thread? I'd test that theory now, but someone else is using my test
machine tonight.
The second of these three, "Allocate sched_group structures
dynamically," doesn't apply cleanly, because it depends on the
free_sched_groups() code added in another patch:
sched_domain-handle-kmalloc-failure.patch
This patch in turn seems to have some important fixes and followups
in a couple other patches, listed below ...
Which of the following would you recommend I advise SUSE do for SLES10:
1) Backport the "Allocate sched_group structures dynamically"
patch (removing the free_sched_groups() related pieces, and
changing the "goto error" statements back to "break"), staying
with just your above recommended set of three patches, or
2) Also take the sched_domain-handle-kmalloc-failure.patch and its
immediate followups, resulting in the following patch set:
sched-fix-group-power-for-allnodes_domains.patch
sched_domain-handle-kmalloc-failure.patch
sched_domain-handle-kmalloc-failure-fix.patch
sched_domain-dont-use-gfp_atomic.patch
sched_domain-use-kmalloc_node.patch
sched_domain-allocate-sched_group-structures-dynamically.patch
sched-build_sched_domains-fix.patch
3) Just take the first patch in this set, as it (I'm guessing) is
the one needed to fix the immediate problem -- the memory
corruption.
Certainly the patchset in (2) applies more cleanly than (1), and it sure
seems to me like all these patches are fixing things we would want to
fix in SLES10.
Was there a reason you did not include these additional patches in your
recommendations for patches to backport to SLES10?
Any other patches I really should consider? If so, why?
If you recommend (2), then can you offer a clean description of bug(s)
fixed, including symptoms and potential severity, and how fixed, for
each of the patches in that proposed patchset? SUSE won't be much
interested in fixes unless they have a clear description of the pain
they will encounter if they don't take the fix. The existing patch
header comments don't particularly spell that out. They say what
changed, but not much of the why nor what kind of hurt one is in
without the change.
Also do you have any way to test whichever patch set you recommend on
other systems? I can test on my big honkin numa iron (100's of CPUs,
NUMA yes, SMT no, MC no), but SUSE will want to know that this stuff
worked on more ordinary systems and SMT (hyperthread) and MC
(multicore) systems.
Sorry for all the questions ...
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-08-02 6:57 ` Paul Jackson
@ 2006-08-02 21:36 ` Siddha, Suresh B
2006-08-02 21:58 ` Paul Jackson
2006-08-06 1:38 ` Paul Jackson
0 siblings, 2 replies; 13+ messages in thread
From: Siddha, Suresh B @ 2006-08-02 21:36 UTC (permalink / raw)
To: Paul Jackson
Cc: Siddha, Suresh B, mingo, nickpiggin, vatsa, Simon.Derr, steiner,
linux-kernel, akpm
On Tue, Aug 01, 2006 at 11:57:52PM -0700, Paul Jackson wrote:
> Suresh wrote:
> > Basically SLES10 has to backport all these patches:
> >
> > sched: fix group power for allnodes_domains
> > sched_domai: Allocate sched_group structures dynamically
> > sched: build_sched_domains() fix
>
>
> A few questions on this, centered around the question of which dynamic
> sched domain patches we should backport to SLES10.
>
> Readers not seriously interested in sched domains on 2.6.16.* kernels
> will probably want to ignore this post.
Paul, I will answer your questions on Suse bugzilla as that is a better
forum than lkml.
thanks,
suresh
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-08-02 21:36 ` Siddha, Suresh B
@ 2006-08-02 21:58 ` Paul Jackson
2006-08-06 1:38 ` Paul Jackson
1 sibling, 0 replies; 13+ messages in thread
From: Paul Jackson @ 2006-08-02 21:58 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: suresh.b.siddha, mingo, nickpiggin, vatsa, Simon.Derr, steiner,
linux-kernel, akpm
Suresh wrote:
> Paul, I will answer your questions on Suse bugzilla as that is a better
> forum than lkml.
Good idea.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
* Re: [BUG] sched: big numa dynamic sched domain memory corruption
2006-08-02 21:36 ` Siddha, Suresh B
2006-08-02 21:58 ` Paul Jackson
@ 2006-08-06 1:38 ` Paul Jackson
1 sibling, 0 replies; 13+ messages in thread
From: Paul Jackson @ 2006-08-06 1:38 UTC (permalink / raw)
To: Siddha, Suresh B
Cc: mingo, nickpiggin, vatsa, Simon.Derr, steiner, linux-kernel, akpm
Suresh wrote:
> Paul, I will answer your questions on Suse bugzilla as that is a better
> forum than lkml.
Well ... can't say I ever got answers to some of my questions.
But the critical one was answered - what patch(es) do I need
to fix this memory corruption problem on SLES10 kernels.
Earlier, Suresh had recommended [numbers added by pj]:
> Basically SLES10 has to backport all these patches:
>
> 1) sched: fix group power for allnodes_domains
> 2) sched_domain: Allocate sched_group structures dynamically
> 3) sched: build_sched_domains() fix
Only two of these three patches are needed for this memory corruption
bug:
1) sched: fix group power for allnodes_domains
3) sched: build_sched_domains() fix
Patch (2) addresses a separate problem, and has been reworked since
anyway, and was the one that caused me grief backporting to SLES10 as it
depended on other patches not in SLES10.
The key fix for the memory corruption is in patch (1). This patch went
into Linus's tree on March 28, 2006. Patch (3) fixed a bug introduced
in the first patch.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401