[PATCH 1/2] hugetlb: search harder for memory in alloc_fresh_huge_page()
From: Nishanth Aravamudan
Date: 2007-10-03 22:45 UTC
To: clameter; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
Currently, alloc_fresh_huge_page() returns NULL when it is not able to
allocate a huge page on the current node, as specified by its custom
interleave variable. The callers of this function, though, assume that a
failure in alloc_fresh_huge_page() indicates no hugepages can be
allocated on the system, period. This might not be the case, for
instance, if we have an uneven NUMA system, and we happen to try to
allocate a hugepage on a node with less memory and fail, while there is
still plenty of free memory on the other nodes.
To correct this, make alloc_fresh_huge_page() search through all online
nodes before deciding no hugepages can be allocated. Add a helper
function for actually allocating the hugepage.
Note: we expect particular semantics for __GFP_THISNODE, which are now
enforced even for memoryless nodes. That is, there should be no
fallback to other nodes. Therefore, we rely on the nid passed into
alloc_pages_node() to be the nid the page comes from. If this is
incorrect, accounting will break.
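To make that assumption concrete, here is a minimal sketch (illustrative
only, not part of the patch; the VM_BUG_ON is just a way of stating the
invariant) of what the accounting depends on:

	/*
	 * With __GFP_THISNODE there must be no fallback to other nodes,
	 * so a successful allocation has to come from the requested nid.
	 * The patch's nr_huge_pages_node[nid]++ relies on exactly this.
	 */
	struct page *page = alloc_pages_node(nid,
			htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
			HUGETLB_PAGE_ORDER);
	if (page)
		VM_BUG_ON(page_to_nid(page) != nid);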
Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA and ppc64 NUMA (with 2
memoryless nodes).
Before on the ppc64 box:
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 0 HugePages_Free: 25
Node 1 HugePages_Free: 75
Node 2 HugePages_Free: 0
Node 3 HugePages_Free: 0
Done. Initially 100 free
Trying to resize the pool to 200
Node 0 HugePages_Free: 50
Node 1 HugePages_Free: 150
Node 2 HugePages_Free: 0
Node 3 HugePages_Free: 0
Done. 200 free
After:
Trying to clear the hugetlb pool
Done. 0 free
Trying to resize the pool to 100
Node 0 HugePages_Free: 50
Node 1 HugePages_Free: 50
Node 2 HugePages_Free: 0
Node 3 HugePages_Free: 0
Done. Initially 100 free
Trying to resize the pool to 200
Node 0 HugePages_Free: 100
Node 1 HugePages_Free: 100
Node 2 HugePages_Free: 0
Node 3 HugePages_Free: 0
Done. 200 free
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
---
Christoph, I've moved to using a global static variable, is this closer
to what you hoped for?
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4a374fa..d97508e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -29,6 +29,7 @@ static unsigned int nr_huge_pages_node[MAX_NUMNODES];
static unsigned int free_huge_pages_node[MAX_NUMNODES];
static gfp_t htlb_alloc_mask = GFP_HIGHUSER;
unsigned long hugepages_treat_as_movable;
+static int last_allocated_nid;
/*
* Protects updates to hugepage_freelists, nr_huge_pages, and free_huge_pages
@@ -103,36 +104,56 @@ static void free_huge_page(struct page *page)
spin_unlock(&hugetlb_lock);
}
-static int alloc_fresh_huge_page(void)
+static struct page *alloc_fresh_huge_page_node(int nid)
{
- static int prev_nid;
struct page *page;
- int nid;
-
- /*
- * Copy static prev_nid to local nid, work on that, then copy it
- * back to prev_nid afterwards: otherwise there's a window in which
- * a racer might pass invalid nid MAX_NUMNODES to alloc_pages_node.
- * But we don't need to use a spin_lock here: it really doesn't
- * matter if occasionally a racer chooses the same nid as we do.
- */
- nid = next_node(prev_nid, node_online_map);
- if (nid == MAX_NUMNODES)
- nid = first_node(node_online_map);
- prev_nid = nid;
- page = alloc_pages_node(nid, htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
- HUGETLB_PAGE_ORDER);
+ page = alloc_pages_node(nid,
+ htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
+ HUGETLB_PAGE_ORDER);
if (page) {
set_compound_page_dtor(page, free_huge_page);
spin_lock(&hugetlb_lock);
nr_huge_pages++;
- nr_huge_pages_node[page_to_nid(page)]++;
+ nr_huge_pages_node[nid]++;
spin_unlock(&hugetlb_lock);
put_page(page); /* free it into the hugepage allocator */
- return 1;
}
- return 0;
+
+ return page;
+}
+
+static int alloc_fresh_huge_page(void)
+{
+ struct page *page;
+ int start_nid;
+ int next_nid;
+ int ret = 0;
+
+ start_nid = last_allocated_nid;
+
+ do {
+ page = alloc_fresh_huge_page_node(last_allocated_nid);
+ if (page)
+ ret = 1;
+ /*
+ * Use a helper variable to find the next node and then
+ * copy it back to last_allocated_nid afterwards:
+ * otherwise there's a window in which a racer might
+ * pass invalid nid MAX_NUMNODES to alloc_pages_node.
+ * But we don't need to use a spin_lock here: it really
+ * doesn't matter if occasionally a racer chooses the
+ * same nid as we do. Move nid forward in the mask even
+ * if we just successfully allocated a hugepage so that
+ * the next caller gets hugepages on the next node.
+ */
+ next_nid = next_node(last_allocated_nid, node_online_map);
+ if (next_nid == MAX_NUMNODES)
+ next_nid = first_node(node_online_map);
+ last_allocated_nid = next_nid;
+ } while (!page && last_allocated_nid != start_nid);
+
+ return ret;
}
static struct page *alloc_huge_page(struct vm_area_struct *vma,
@@ -171,6 +192,8 @@ static int __init hugetlb_init(void)
for (i = 0; i < MAX_NUMNODES; ++i)
INIT_LIST_HEAD(&hugepage_freelists[i]);
+ last_allocated_nid = first_node(node_online_map);
+
for (i = 0; i < max_huge_pages; ++i) {
if (!alloc_fresh_huge_page())
break;
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
[PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Nishanth Aravamudan
Date: 2007-10-03 22:49 UTC
To: clameter; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
Anton found a problem with the hugetlb pool allocation when some nodes
have no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee
worked on versions that tried to fix it, but none were accepted.
Christoph has created a set of patches which allow for GFP_THISNODE
allocations to fail if the node has no memory and for exporting a
nodemask indicating which nodes have memory. Simply interleave across
this nodemask rather than the online nodemask.
Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA, ppc64 NUMA with 2 memoryless
nodes.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
---
Would it be better to combine this patch directly in 1/2? There is no
functional difference, really, just a matter of 'correctness'. Without
this patch, we'll iterate over nodes that we can't possibly do THISNODE
allocations on. So I guess this falls more into an optimization?
Also, I see that Adam's patches have been pulled in for the next -mm. I
can rebase on top of them and retest to minimise Andrew's work.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d97508e..4d08cae 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -147,9 +147,10 @@ static int alloc_fresh_huge_page(void)
* if we just successfully allocated a hugepage so that
* the next caller gets hugepages on the next node.
*/
- next_nid = next_node(last_allocated_nid, node_online_map);
+ next_nid = next_node(last_allocated_nid,
+ node_states[N_HIGH_MEMORY]);
if (next_nid == MAX_NUMNODES)
- next_nid = first_node(node_online_map);
+ next_nid = first_node(node_states[N_HIGH_MEMORY]);
last_allocated_nid = next_nid;
} while (!page && last_allocated_nid != start_nid);
@@ -192,7 +193,7 @@ static int __init hugetlb_init(void)
for (i = 0; i < MAX_NUMNODES; ++i)
INIT_LIST_HEAD(&hugepage_freelists[i]);
- last_allocated_nid = first_node(node_online_map);
+ last_allocated_nid = first_node(node_states[N_HIGH_MEMORY]);
for (i = 0; i < max_huge_pages; ++i) {
if (!alloc_fresh_huge_page())
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
Re: [PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Nishanth Aravamudan
Date: 2007-10-04 3:12 UTC
To: clameter; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
On 03.10.2007 [15:49:04 -0700], Nishanth Aravamudan wrote:
> hugetlb: fix pool allocation with empty nodes
>
> Anton found a problem with the hugetlb pool allocation when some nodes
> have no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee
> worked on versions that tried to fix it, but none were accepted.
> Christoph has created a set of patches which allow for GFP_THISNODE
> allocations to fail if the node has no memory and for exporting a
> nodemask indicating which nodes have memory. Simply interleave across
> this nodemask rather than the online nodemask.
>
> Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA, ppc64 NUMA with 2 memoryless
> nodes.
>
> Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
>
> ---
> Would it be better to combine this patch directly in 1/2? There is no
> functional difference, really, just a matter of 'correctness'. Without
> this patch, we'll iterate over nodes that we can't possibly do THISNODE
> allocations on. So I guess this falls more into an optimization?
>
> Also, I see that Adam's patches have been pulled in for the next -mm. I
> can rebase on top of them and retest to minimise Andrew's work.
FWIW, both patches apply pretty easily on top of Adam's stack. 1/2
requires a bit of massaging because functions have moved out of their
context, but 2/2 applies cleanly. I noticed, though, that Adam's patches
use node_online_map when they should use node_states[N_HIGH_MEMORY], so
shall I modify this patch to simply be
hugetlb: only iterate over populated nodes
and fix all of the instances in hugetlb.c?
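(For illustration only, not from either patch: the kind of substitution
I mean, using the for_each_node_state() iterator from Christoph's
memoryless-node series. The helper name is made up.)

	/* Count free huge pages only on nodes that actually have memory. */
	static unsigned long free_huge_pages_populated(void)
	{
		unsigned long total = 0;
		int nid;

		/* was: for_each_online_node(nid), which also visits
		 * memoryless nodes */
		for_each_node_state(nid, N_HIGH_MEMORY)
			total += free_huge_pages_node[nid];
		return total;
	}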
Still need to test the patches on top of Adam's stack before I'll ask
Andrew to pick them up.
Thanks,
Nish
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
Re: [PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Lee Schermerhorn
Date: 2007-10-05 19:56 UTC
To: Nishanth Aravamudan; +Cc: clameter, wli, anton, agl, linux-mm, Mel Gorman
On Wed, 2007-10-03 at 20:12 -0700, Nishanth Aravamudan wrote:
> On 03.10.2007 [15:49:04 -0700], Nishanth Aravamudan wrote:
> > hugetlb: fix pool allocation with empty nodes
> >
> > Anton found a problem with the hugetlb pool allocation when some nodes
> > have no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee
> > worked on versions that tried to fix it, but none were accepted.
> > Christoph has created a set of patches which allow for GFP_THISNODE
> > allocations to fail if the node has no memory and for exporting a
> > nodemask indicating which nodes have memory. Simply interleave across
> > this nodemask rather than the online nodemask.
> >
> > Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA, ppc64 NUMA with 2 memoryless
> > nodes.
> >
> > Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
> >
> > ---
> > Would it be better to combine this patch directly in 1/2? There is no
> > functional difference, really, just a matter of 'correctness'. Without
> > this patch, we'll iterate over nodes that we can't possibly do THISNODE
> > allocations on. So I guess this falls more into an optimization?
> >
> > Also, I see that Adam's patches have been pulled in for the next -mm. I
> > can rebase on top of them and retest to minimise Andrew's work.
>
> FWIW, both patches apply pretty easily on top of Adam's stack. 1/2
> requires a bit of massaging because functions have moved out of their
> context, but 2/2 applies cleanly. I noticed, though, that Adam's patches
> use node_online_map when they should use node_states[N_HIGH_MEMORY], so
> shall I modify this patch to simply be
>
> hugetlb: only iterate over populated nodes
>
> and fix all of the instances in hugetlb.c?
>
> Still need to test the patches on top of Adam's stack before I'll ask
> Andrew to pick them up.
Nish: Have you tried these atop Mel Gorman's one-zonelist patches? I've
been maintaining your previous posting of the 4 hugetlb patches [i.e.,
including the per-node sysfs attributes] atop Mel's patches and some of
my additional mempolicy "cleanups". I just got around to testing the
whole mess and found that I can only allocate hugetlb pages on node 1,
whether I set /proc/sys/vm/nr_hugepages or the per-node sysfs
attributes.
I'm trying to isolate the problem now. I've determined that with just
your rebased patches on 2.6.23-rc8-mm2, allocations appear to work as
expected. E.g., writing '64' to /proc/sys/vm/nr_hugepages yields 16
huge pages on each of 4 nodes. My dma-only node 4 is skipped because it
doesn't have sufficient memory to allocate a single ia64 huge page. If
it did, I fear I'd see a huge page there with the current patches. Have
to reconfig the hardware to test that.
Anyway, I won't get back to this until mid-next week. Just wanted to
give you [and Mel] a heads up about the possible interaction. However,
it could be my patches that are causing the problem.
Later,
Lee
Re: [PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Nishanth Aravamudan
Date: 2007-10-05 20:30 UTC
To: Lee Schermerhorn; +Cc: clameter, wli, anton, agl, linux-mm, Mel Gorman
On 05.10.2007 [15:56:08 -0400], Lee Schermerhorn wrote:
> On Wed, 2007-10-03 at 20:12 -0700, Nishanth Aravamudan wrote:
> > On 03.10.2007 [15:49:04 -0700], Nishanth Aravamudan wrote:
> > > hugetlb: fix pool allocation with empty nodes
> > >
> > > Anton found a problem with the hugetlb pool allocation when some nodes
> > > have no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee
> > > worked on versions that tried to fix it, but none were accepted.
> > > Christoph has created a set of patches which allow for GFP_THISNODE
> > > allocations to fail if the node has no memory and for exporting a
> > > nodemask indicating which nodes have memory. Simply interleave across
> > > this nodemask rather than the online nodemask.
> > >
> > > Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA, ppc64 NUMA with 2 memoryless
> > > nodes.
> > >
> > > Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
> > >
> > > ---
> > > Would it be better to combine this patch directly in 1/2? There is no
> > > functional difference, really, just a matter of 'correctness'. Without
> > > this patch, we'll iterate over nodes that we can't possibly do THISNODE
> > > allocations on. So I guess this falls more into an optimization?
> > >
> > > Also, I see that Adam's patches have been pulled in for the next -mm. I
> > > can rebase on top of them and retest to minimise Andrew's work.
> >
> > FWIW, both patches apply pretty easily on top of Adam's stack. 1/2
> > requires a bit of massaging because functions have moved out of their
> > context, but 2/2 applies cleanly. I noticed, though, that Adam's patches
> > use node_online_map when they should use node_states[N_HIGH_MEMORY], so
> > shall I modify this patch to simply be
> >
> > hugetlb: only iterate over populated nodes
> >
> > and fix all of the instances in hugetlb.c?
> >
> > Still need to test the patches on top of Adam's stack before I'll ask
> > Andrew to pick them up.
>
> Nish: Have you tried these atop Mel Gorman's one-zonelist patches?
> I've been maintaining your previous posting of the 4 hugetlb patches
> [i.e., including the per-node sysfs attributes] atop Mel's patches and
> some of my additional mempolicy "cleanups". I just got around to
> testing the whole mess and found that I can only allocate hugetlb
> pages on node 1, whether I set /proc/sys/vm/nr_hugepages or the
> per-node sysfs attributes.
I have not tested with Mel's one-zonelist patches, but I can add that
to my test queue.
> I'm trying to isolate the problem now. I've determined that with just
> your rebased patches on 2.6.23-rc8-mm2, allocations appear to work as
> expected. E.g., writing '64' to /proc/sys/vm/nr_hugepages yields 16
> huge pages on each of 4 nodes. My dma-only node 4 is skipped because
> it doesn't have sufficient memory to allocate a single ia64 huge page.
> If it did, I fear I'd see a huge page there with the current patches.
> Have to reconfig the hardware to test that.
Hrm, I'll look at Mel's patches to see if I see anything obvious.
> Anyway, I won't get back to this until mid-next week. Just wanted to
> give you [and Mel] a heads up about the possible interaction.
> However, it could be my patches that are causing the problem.
Thanks,
Nish
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
Re: [PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Christoph Lameter
Date: 2007-10-04 3:55 UTC
To: Nishanth Aravamudan; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
Acked-by: Christoph Lameter <clameter@sgi.com>

I guess this should be included in 2.6.24?
Re: [PATCH 2/2] hugetlb: fix pool allocation with empty nodes
From: Nishanth Aravamudan
Date: 2007-10-08 17:39 UTC
To: Christoph Lameter; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
On 03.10.2007 [20:55:00 -0700], Christoph Lameter wrote:
> Acked-by: Christoph Lameter <clameter@sgi.com>
>
> I guess this should be included in 2.6.24?
Realistically, I think 1/2 would be sufficient for now. Since Adam's
patches have gone in, 2/2 needs to be reworked to account for the new
users of node_online_map.
*But*, Lee has potentially found an interaction between my patches and
Mel's one-zonelist patches. I'm going to investigate that in the next
week and then repost the patches, rebased on top of Adam's series
(which will be in the next -mm).
Thanks,
Nish
--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
Re: [PATCH 1/2] hugetlb: search harder for memory in alloc_fresh_huge_page()
From: Christoph Lameter
Date: 2007-10-04 3:54 UTC
To: Nishanth Aravamudan; +Cc: wli, anton, agl, lee.schermerhorn, linux-mm
On Wed, 3 Oct 2007, Nishanth Aravamudan wrote:
> Christoph, I've moved to using a global static variable, is this closer
> to what you hoped for?
Looks good now.
Acked-by: Christoph Lameter <clameter@sgi.com>