From: Nishanth Aravamudan <nacc@us.ibm.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: akpm@linux-foundation.org, clameter@sgi.com, apw@shadowen.org,
	wli@holomorphy.com, linux-mm@kvack.org
Subject: [UPDATED][PATCH 3/3] Explicitly retry hugepage allocations
Date: Wed, 16 Apr 2008 18:40:54 -0700	[thread overview]
Message-ID: <20080417014054.GB17076@us.ibm.com> (raw)
In-Reply-To: <20080415085608.GB20316@csn.ul.ie>

On 15.04.2008 [09:56:08 +0100], Mel Gorman wrote:
> On (11/04/08 16:36), Nishanth Aravamudan didst pronounce:
> > Add __GFP_REPEAT to hugepage allocations. Do so to not necessitate
> > userspace putting pressure on the VM by repeated echoes into
> > /proc/sys/vm/nr_hugepages to grow the pool. With the previous patch to
> > allow for large-order __GFP_REPEAT attempts to loop for a bit (as
> > opposed to indefinitely), this increases the likelihood of getting
> > hugepages when the system experiences (or recently experienced) load.
> > 
> > Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
> 
> I tested the patchset on an x86_32 laptop. With the patches, it was easier to
> use the proc interface to grow the hugepage pool. The following is the output
> of a script that grows the pool as much as possible running on 2.6.25-rc9
> 
> Allocating hugepages test
> -------------------------
> Disabling OOM Killer for current test process
> Starting page count: 0
> Attempt 1: 57 pages Progress made with 57 pages
> Attempt 2: 73 pages Progress made with 16 pages
> Attempt 3: 74 pages Progress made with 1 pages
> Attempt 4: 75 pages Progress made with 1 pages
> Attempt 5: 77 pages Progress made with 2 pages
> 
> 77 pages was the most it allocated but it took 5 attempts from userspace
> to get it. With your 3 patches applied,
> 
> Allocating hugepages test
> -------------------------
> Disabling OOM Killer for current test process
> Starting page count: 0
> Attempt 1: 75 pages Progress made with 75 pages
> Attempt 2: 76 pages Progress made with 1 pages
> Attempt 3: 79 pages Progress made with 3 pages
> 
> And 79 pages was the most it got. Your patches were able to allocate the
> bulk of possible pages on the first attempt.

Add __GFP_REPEAT to hugepage allocations, so that userspace need not
put pressure on the VM through repeated echoes into
/proc/sys/vm/nr_hugepages to grow the pool. With the previous patch,
which allows large-order __GFP_REPEAT attempts to loop for a bit
(rather than indefinitely), this increases the likelihood of getting
hugepages when the system experiences (or recently experienced) load.
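
For context, the retry policy that patch 2/3 gives the page allocator
can be sketched as follows. This is a standalone approximation for
illustration only, based on the description above of the 2.6.25-era
allocator slow path; the helper name should_retry_alloc is hypothetical
and the authoritative code is in patch 2/3, not here:

	/*
	 * Small orders retry as before; costly orders with __GFP_REPEAT
	 * retry only while reclaim keeps making progress ("loop for a
	 * bit" rather than indefinitely). pages_reclaimed accumulates
	 * direct-reclaim progress across the allocator's retry loop.
	 */
	static int should_retry_alloc(gfp_t gfp_mask, unsigned int order,
				      unsigned long pages_reclaimed)
	{
		if (gfp_mask & __GFP_NORETRY)
			return 0;
		if (gfp_mask & __GFP_NOFAIL)
			return 1;
		if (order <= PAGE_ALLOC_COSTLY_ORDER)
			return 1;
		if ((gfp_mask & __GFP_REPEAT) &&
		    pages_reclaimed < (1UL << order))
			return 1;
		return 0;
	}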

Mel tested the patchset on an x86_32 laptop. With the patches, it was
easier to use the proc interface to grow the hugepage pool. The
following is the output of a script, run on 2.6.25-rc9, that grows the
pool as much as possible.

Allocating hugepages test
-------------------------
Disabling OOM Killer for current test process
Starting page count: 0
Attempt 1: 57 pages Progress made with 57 pages
Attempt 2: 73 pages Progress made with 16 pages
Attempt 3: 74 pages Progress made with 1 pages
Attempt 4: 75 pages Progress made with 1 pages
Attempt 5: 77 pages Progress made with 2 pages

77 pages was the most it allocated, but it took 5 attempts from
userspace to get there. With the three patches in this series applied,

Allocating hugepages test
-------------------------
Disabling OOM Killer for current test process
Starting page count: 0
Attempt 1: 75 pages Progress made with 75 pages
Attempt 2: 76 pages Progress made with 1 pages
Attempt 3: 79 pages Progress made with 3 pages

79 pages was the most it got, and the patches were able to allocate the
bulk of the possible pages on the first attempt.
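
For reference, here is a minimal userspace sketch of the pool-growing
loop such a script performs. This is a hypothetical reconstruction,
not the actual test script, and it must run as root to write the
sysctl:

	/*
	 * grow_pool.c: repeatedly request a hugepage pool size and
	 * report per-attempt progress, mirroring the output above.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define NR_HUGEPAGES "/proc/sys/vm/nr_hugepages"

	static long read_pool_size(void)
	{
		FILE *f = fopen(NR_HUGEPAGES, "r");
		long n = -1;

		if (f) {
			if (fscanf(f, "%ld", &n) != 1)
				n = -1;
			fclose(f);
		}
		return n;
	}

	int main(int argc, char **argv)
	{
		long target = argc > 1 ? atol(argv[1]) : 100;
		long prev = read_pool_size(), cur;
		int attempt;

		printf("Starting page count: %ld\n", prev);
		for (attempt = 1; ; attempt++) {
			FILE *f = fopen(NR_HUGEPAGES, "w");

			if (!f)
				return 1;
			fprintf(f, "%ld\n", target);
			fclose(f);

			/* Read back how many pages the kernel managed. */
			cur = read_pool_size();
			printf("Attempt %d: %ld pages, progress %ld pages\n",
			       attempt, cur, cur - prev);
			if (cur <= prev || cur >= target)
				break;
			prev = cur;
		}
		return 0;
	}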

Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Tested-by: Mel Gorman <mel@csn.ul.ie>

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index df28c17..e13a7b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -199,7 +199,8 @@ static struct page *alloc_fresh_huge_page_node(int nid)
 	struct page *page;
 
 	page = alloc_pages_node(nid,
-		htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|__GFP_NOWARN,
+		htlb_alloc_mask|__GFP_COMP|__GFP_THISNODE|
+						__GFP_REPEAT|__GFP_NOWARN,
 		HUGETLB_PAGE_ORDER);
 	if (page) {
 		if (arch_prepare_hugepage(page)) {
@@ -294,7 +295,8 @@ static struct page *alloc_buddy_huge_page(struct vm_area_struct *vma,
 	}
 	spin_unlock(&hugetlb_lock);
 
-	page = alloc_pages(htlb_alloc_mask|__GFP_COMP|__GFP_NOWARN,
+	page = alloc_pages(htlb_alloc_mask|__GFP_COMP|
+					__GFP_REPEAT|__GFP_NOWARN,
 					HUGETLB_PAGE_ORDER);
 
 	spin_lock(&hugetlb_lock);

-- 
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center
