From: Adam Litke <agl@us.ibm.com>
To: Christoph Lameter <clameter@sgi.com>
Cc: linuxppc-dev@ozlabs.org,
	David Gibson <david@gibson.dropbear.id.au>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] hugetlb: powerpc: Actively close unused htlb regions on vma close
Date: Fri, 02 Jun 2006 15:57:20 -0500
Message-ID: <1149281841.9693.39.camel@localhost.localdomain>
In-Reply-To: <Pine.LNX.4.64.0606021301300.5492@schroedinger.engr.sgi.com>

On Fri, 2006-06-02 at 13:06 -0700, Christoph Lameter wrote:
> On Fri, 2 Jun 2006, Adam Litke wrote:
> 
> > The following patch introduces an architecture-specific vm_ops.close()
> > hook.  For all architectures besides powerpc, this is a no-op.  On
> > powerpc, the low and high segments are scanned to locate empty hugetlb
> > segments which can be made available for normal mappings.  Comments?
> 
> IA64 has similar issues and uses the hook suggested by Hugh. However, we 
> have a permanently reserved memory area. I am a bit surprised about the 
> need to make address space available for normal mappings. Is this for 32 
> bit powerpc support?

I now have a working implementation using Hugh's suggestion and
incorporating some suggestions from Dave Hansen... (attaching for
reference).

The real reason I want to "close" hugetlb regions (even on 64bit
platforms) is so a process can replace a previous hugetlb mapping with
normal pages when huge pages become scarce.  An example would be the
hugetlb morecore (malloc) feature in libhugetlbfs :)
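
To make the scenario concrete, here is a rough userspace sketch of that
fallback path.  The hugetlbfs mount point, huge page size, and flags are
illustrative, not taken from libhugetlbfs or the patch:

/*
 * Sketch of the malloc/morecore fallback described above.  Without the
 * segment-closing behavior, the final MAP_FIXED remap would be refused
 * on powerpc because the segment is still marked huge-page-only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(16UL << 20)	/* e.g. 16MB huge pages on ppc64 */

int main(void)
{
	int fd = open("/mnt/huge/heap", O_CREAT | O_RDWR, 0600);
	if (fd < 0)
		return 1;

	/* Grow the heap with huge pages while they are plentiful. */
	void *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Huge pages became scarce: give the hugetlb mapping back... */
	munmap(p, HPAGE_SIZE);

	/* ...and fall back to normal pages over the same address range. */
	p = mmap(p, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (p == MAP_FAILED)
		perror("fallback mmap");

	close(fd);
	return p == MAP_FAILED ? 1 : 0;
}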

[PATCH] powerpc: Close hugetlb regions when unmapping VMAs

On powerpc, each segment can contain pages of only one size.  When a hugetlb
mapping is requested, a segment is located and marked for use with huge pages.
This is a uni-directional operation -- hugetlb segments are never marked for
use again with normal pages.  For long-running processes which use a
combination of normal and hugetlb mappings, this behavior can unduly constrain
the virtual address space.
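
For reference, the low 256MB segments (below 4GB) and high areas are
selected by bitmask.  Below is an illustrative reimplementation of the
LOW_ESID_MASK idea used in the patch; the shift constant follows the
2.6-era headers, but treat this as a sketch rather than a copy of the
kernel macro:

/*
 * One mask bit per 256MB segment covered by [addr, addr+len), clipped
 * to the 16 low segments below 4GB.  Sketch only; verify SID_SHIFT
 * against asm-powerpc before relying on it.
 */
#include <stdio.h>

#define SID_SHIFT	28	/* 256MB segments */

static unsigned int low_esid_mask(unsigned long addr, unsigned long len)
{
	unsigned long first = addr >> SID_SHIFT;
	unsigned long last = (addr + len - 1) >> SID_SHIFT;

	/* Set bits first..last inclusive. */
	return (unsigned int)(((1UL << (last + 1)) - (1UL << first)) & 0xffff);
}

int main(void)
{
	/* Unmapping 512MB at 256MB covers low segments 1 and 2: 0x6. */
	printf("mask = %#x\n", low_esid_mask(0x10000000UL, 0x20000000UL));
	return 0;
}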

Changes since V1:
 * Modifications limited to arch-specific code (hugetlb_free_pgd_range)
 * Only scan segments covered by the range to be unmapped

Signed-off-by: Adam Litke <agl@us.ibm.com>
---
 hugetlbpage.c |   51 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 51 insertions(+)
diff -upN reference/arch/powerpc/mm/hugetlbpage.c current/arch/powerpc/mm/hugetlbpage.c
--- reference/arch/powerpc/mm/hugetlbpage.c
+++ current/arch/powerpc/mm/hugetlbpage.c
@@ -52,6 +52,8 @@
 typedef struct { unsigned long pd; } hugepd_t;
 
 #define hugepd_none(hpd)	((hpd).pd == 0)
+void close_hugetlb_areas(struct mm_struct *mm, unsigned long start,
+		unsigned long end);
 
 static inline pte_t *hugepd_page(hugepd_t hpd)
 {
@@ -303,6 +305,9 @@ void hugetlb_free_pgd_range(struct mmu_g
 			continue;
 		hugetlb_free_pud_range(*tlb, pgd, addr, next, floor, ceiling);
 	} while (pgd++, addr = next, addr != end);
+
+	/* assumes 'start' saved the original 'addr' at function entry */
+	close_hugetlb_areas((*tlb)->mm, start, end);
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
@@ -518,6 +523,52 @@ int prepare_hugepage_range(unsigned long
 	return 0;
 }
 
+void close_hugetlb_areas(struct mm_struct *mm, unsigned long start,
+		unsigned long end)
+{
+	unsigned long i;
+	struct slb_flush_info fi;
+	u16 inuse, hiflush = 0, loflush = 0, mask;
+
+	if (!mm)
+		return;
+
+	if (start < 0x100000000UL) {
+		mask = LOW_ESID_MASK(start, end - start);
+		inuse = mm->context.low_htlb_areas;
+		for (i = 0; i < NUM_LOW_AREAS; i++) {
+			if (!(mask & (1 << i)))
+				continue;
+			if (prepare_low_area_for_htlb(mm, i) == 0)
+				inuse &= ~(1 << i);
+		}
+		loflush = inuse ^ mm->context.low_htlb_areas;
+		mm->context.low_htlb_areas = inuse;
+	}
+
+	if (end > 0x100000000UL) {
+		mask = HTLB_AREA_MASK(start, end - start);
+		inuse = mm->context.high_htlb_areas;
+		for (i = 0; i < NUM_HIGH_AREAS; i++) {
+			if (!(mask & (1 << i)))
+				continue;
+			if (prepare_high_area_for_htlb(mm, i) == 0)
+				inuse &= ~(1 << i);
+		}
+		hiflush = inuse ^ mm->context.high_htlb_areas;
+		mm->context.high_htlb_areas = inuse;
+	}
+
+	/* the context changes must make it to memory before the flush,
+	 * so that further SLB misses do the right thing. */
+	mb();
+	fi.mm = mm;
+	if ((fi.newareas = loflush))
+		on_each_cpu(flush_low_segments, &fi, 0, 1);
+	if ((fi.newareas = hiflush))
+		on_each_cpu(flush_high_segments, &fi, 0, 1);
+}
+
 struct page *
 follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
 {

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center

Thread overview: 9+ messages
2006-06-02 14:08 [PATCH] hugetlb: powerpc: Actively close unused htlb regions on vma close Adam Litke
2006-06-02 15:17 ` Dave Hansen
2006-06-02 16:47   ` Adam Litke
2006-06-02 16:43 ` Hugh Dickins
2006-06-02 16:49   ` Adam Litke
2006-06-02 20:06 ` Christoph Lameter
2006-06-02 20:57   ` Adam Litke [this message]
2006-06-02 21:08     ` Christoph Lameter
2006-06-09  8:49       ` David Gibson
