* [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm.
@ 2013-04-29 14:55 Steve Capper
2013-04-29 14:55 ` [RFC PATCH 1/2] mm: hugetlb: Copy " Steve Capper
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Steve Capper @ 2013-04-29 14:55 UTC (permalink / raw)
To: linux-mm, x86, linux-arch
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
Steve Capper
Under x86, multiple puds can be made to reference the same bank of
huge pmds provided that they represent a full PUD_SIZE of shared
huge memory that is aligned to a PUD_SIZE boundary.
The code to share pmds does not require any architecture-specific
knowledge other than the fact that pmds can be indexed, so it can
benefit other architectures as well.
This RFC promotes the huge_pmd_share code (and dependencies) from
x86 to mm to make it accessible to other architectures.
I am working on ARM64 support for huge pages and rather than
duplicate the x86 huge_pmd_share code, I thought it would be better
to promote it to mm.
Comments would be very welcome.
Cheers,
--
Steve
Steve Capper (2):
mm: hugetlb: Copy huge_pmd_share from x86 to mm.
x86: mm: Remove x86 version of huge_pmd_share.
arch/x86/Kconfig | 3 ++
arch/x86/mm/hugetlbpage.c | 120 ---------------------------------------------
include/linux/hugetlb.h | 4 ++
mm/hugetlb.c | 122 ++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 129 insertions(+), 120 deletions(-)
--
1.8.1.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [RFC PATCH 1/2] mm: hugetlb: Copy huge_pmd_share from x86 to mm.
2013-04-29 14:55 [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm Steve Capper
@ 2013-04-29 14:55 ` Steve Capper
2013-04-29 15:26 ` Catalin Marinas
2013-04-29 14:55 ` [RFC PATCH 2/2] x86: mm: Remove x86 version of huge_pmd_share Steve Capper
2013-04-29 20:22 ` [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm David Rientjes
2 siblings, 1 reply; 8+ messages in thread
From: Steve Capper @ 2013-04-29 14:55 UTC (permalink / raw)
To: linux-mm, x86, linux-arch
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
Steve Capper
Under x86, multiple puds can be made to reference the same bank of
huge pmds provided that they represent a full PUD_SIZE of shared
huge memory that is aligned to a PUD_SIZE boundary.
The code to share pmds does not require any architecture-specific
knowledge other than the fact that pmds can be indexed, so it can
benefit other architectures as well.
This patch copies the huge pmd sharing (and unsharing) logic from
x86/ to mm/ and introduces a new config option to activate it:
CONFIG_ARCH_WANT_HUGE_PMD_SHARE.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
include/linux/hugetlb.h | 4 ++
mm/hugetlb.c | 122 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 126 insertions(+)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 16e4e9a..795c32d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -68,6 +68,10 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed);
int dequeue_hwpoisoned_huge_page(struct page *page);
void copy_huge_page(struct page *dst, struct page *src);
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
+#endif
+
extern unsigned long hugepages_treat_as_movable;
extern const unsigned long hugetlb_zero, hugetlb_infinity;
extern int sysctl_hugetlb_shm_group;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ca9a7c6..41179b0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3142,6 +3142,128 @@ void hugetlb_unreserve_pages(struct inode *inode, long offset, long freed)
hugetlb_acct_memory(h, -(chg - freed));
}
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+static unsigned long page_table_shareable(struct vm_area_struct *svma,
+ struct vm_area_struct *vma,
+ unsigned long addr, pgoff_t idx)
+{
+ unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
+ svma->vm_start;
+ unsigned long sbase = saddr & PUD_MASK;
+ unsigned long s_end = sbase + PUD_SIZE;
+
+ /* Allow segments to share if only one is marked locked */
+ unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED;
+ unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED;
+
+ /*
+ * match the virtual addresses, permission and the alignment of the
+ * page table page.
+ */
+ if (pmd_index(addr) != pmd_index(saddr) ||
+ vm_flags != svm_flags ||
+ sbase < svma->vm_start || svma->vm_end < s_end)
+ return 0;
+
+ return saddr;
+}
+
+static int vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+{
+ unsigned long base = addr & PUD_MASK;
+ unsigned long end = base + PUD_SIZE;
+
+ /*
+ * check on proper vm_flags and page table alignment
+ */
+ if (vma->vm_flags & VM_MAYSHARE &&
+ vma->vm_start <= base && end <= vma->vm_end)
+ return 1;
+ return 0;
+}
+
+/*
+ * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
+ * and returns the corresponding pte. While this is not necessary for the
+ * !shared pmd case because we can allocate the pmd later as well, it makes the
+ * code much cleaner. pmd allocation is essential for the shared case because
+ * pud has to be populated inside the same i_mmap_mutex section - otherwise
+ * racing tasks could either miss the sharing (see huge_pte_offset) or select a
+ * bad pmd for sharing.
+ */
+pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
+{
+ struct vm_area_struct *vma = find_vma(mm, addr);
+ struct address_space *mapping = vma->vm_file->f_mapping;
+ pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
+ vma->vm_pgoff;
+ struct vm_area_struct *svma;
+ unsigned long saddr;
+ pte_t *spte = NULL;
+ pte_t *pte;
+
+ if (!vma_shareable(vma, addr))
+ return (pte_t *)pmd_alloc(mm, pud, addr);
+
+ mutex_lock(&mapping->i_mmap_mutex);
+ vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
+ if (svma == vma)
+ continue;
+
+ saddr = page_table_shareable(svma, vma, addr, idx);
+ if (saddr) {
+ spte = huge_pte_offset(svma->vm_mm, saddr);
+ if (spte) {
+ get_page(virt_to_page(spte));
+ break;
+ }
+ }
+ }
+
+ if (!spte)
+ goto out;
+
+ spin_lock(&mm->page_table_lock);
+ if (pud_none(*pud))
+ pud_populate(mm, pud,
+ (pmd_t *)((unsigned long)spte & PAGE_MASK));
+ else
+ put_page(virt_to_page(spte));
+ spin_unlock(&mm->page_table_lock);
+out:
+ pte = (pte_t *)pmd_alloc(mm, pud, addr);
+ mutex_unlock(&mapping->i_mmap_mutex);
+ return pte;
+}
+
+/*
+ * unmap huge page backed by shared pte.
+ *
+ * Hugetlb pte page is ref counted at the time of mapping. If pte is shared
+ * indicated by page_count > 1, unmap is achieved by clearing pud and
+ * decrementing the ref count. If count == 1, the pte page is not shared.
+ *
+ * called with vma->vm_mm->page_table_lock held.
+ *
+ * returns: 1 successfully unmapped a shared pte page
+ * 0 the underlying pte page is not shared, or it is the last user
+ */
+int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+{
+ pgd_t *pgd = pgd_offset(mm, *addr);
+ pud_t *pud = pud_offset(pgd, *addr);
+
+ BUG_ON(page_count(virt_to_page(ptep)) == 0);
+ if (page_count(virt_to_page(ptep)) == 1)
+ return 0;
+
+ pud_clear(pud);
+ put_page(virt_to_page(ptep));
+ *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+ return 1;
+}
+#endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+
#ifdef CONFIG_MEMORY_FAILURE
/* Should be called in hugetlb_lock */
--
1.8.1.4
* [RFC PATCH 2/2] x86: mm: Remove x86 version of huge_pmd_share.
2013-04-29 14:55 [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm Steve Capper
2013-04-29 14:55 ` [RFC PATCH 1/2] mm: hugetlb: Copy " Steve Capper
@ 2013-04-29 14:55 ` Steve Capper
2013-04-29 20:22 ` [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm David Rientjes
2 siblings, 0 replies; 8+ messages in thread
From: Steve Capper @ 2013-04-29 14:55 UTC (permalink / raw)
To: linux-mm, x86, linux-arch
Cc: Michal Hocko, Ken Chen, Mel Gorman, Catalin Marinas, Will Deacon,
Steve Capper
The huge_pmd_share code has been copied over to mm/hugetlb.c to
make it accessible to other architectures.
Remove the x86 copy of the huge_pmd_share code and enable the
ARCH_WANT_HUGE_PMD_SHARE config option, so that the generic version
in mm/ is used instead.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
arch/x86/Kconfig | 3 ++
arch/x86/mm/hugetlbpage.c | 120 ----------------------------------------------
2 files changed, 3 insertions(+), 120 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 70c0f3d..60e3c402 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -212,6 +212,9 @@ config ARCH_HIBERNATION_POSSIBLE
config ARCH_SUSPEND_POSSIBLE
def_bool y
+config ARCH_WANT_HUGE_PMD_SHARE
+ def_bool y
+
config ZONE_DMA32
bool
default X86_64
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index ae1aa71..7e522a3 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -16,126 +16,6 @@
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
-static unsigned long page_table_shareable(struct vm_area_struct *svma,
- struct vm_area_struct *vma,
- unsigned long addr, pgoff_t idx)
-{
- unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
- svma->vm_start;
- unsigned long sbase = saddr & PUD_MASK;
- unsigned long s_end = sbase + PUD_SIZE;
-
- /* Allow segments to share if only one is marked locked */
- unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED;
- unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED;
-
- /*
- * match the virtual addresses, permission and the alignment of the
- * page table page.
- */
- if (pmd_index(addr) != pmd_index(saddr) ||
- vm_flags != svm_flags ||
- sbase < svma->vm_start || svma->vm_end < s_end)
- return 0;
-
- return saddr;
-}
-
-static int vma_shareable(struct vm_area_struct *vma, unsigned long addr)
-{
- unsigned long base = addr & PUD_MASK;
- unsigned long end = base + PUD_SIZE;
-
- /*
- * check on proper vm_flags and page table alignment
- */
- if (vma->vm_flags & VM_MAYSHARE &&
- vma->vm_start <= base && end <= vma->vm_end)
- return 1;
- return 0;
-}
-
-/*
- * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
- * and returns the corresponding pte. While this is not necessary for the
- * !shared pmd case because we can allocate the pmd later as well, it makes the
- * code much cleaner. pmd allocation is essential for the shared case because
- * pud has to be populated inside the same i_mmap_mutex section - otherwise
- * racing tasks could either miss the sharing (see huge_pte_offset) or select a
- * bad pmd for sharing.
- */
-static pte_t *
-huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
-{
- struct vm_area_struct *vma = find_vma(mm, addr);
- struct address_space *mapping = vma->vm_file->f_mapping;
- pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
- vma->vm_pgoff;
- struct vm_area_struct *svma;
- unsigned long saddr;
- pte_t *spte = NULL;
- pte_t *pte;
-
- if (!vma_shareable(vma, addr))
- return (pte_t *)pmd_alloc(mm, pud, addr);
-
- mutex_lock(&mapping->i_mmap_mutex);
- vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
- if (svma == vma)
- continue;
-
- saddr = page_table_shareable(svma, vma, addr, idx);
- if (saddr) {
- spte = huge_pte_offset(svma->vm_mm, saddr);
- if (spte) {
- get_page(virt_to_page(spte));
- break;
- }
- }
- }
-
- if (!spte)
- goto out;
-
- spin_lock(&mm->page_table_lock);
- if (pud_none(*pud))
- pud_populate(mm, pud, (pmd_t *)((unsigned long)spte & PAGE_MASK));
- else
- put_page(virt_to_page(spte));
- spin_unlock(&mm->page_table_lock);
-out:
- pte = (pte_t *)pmd_alloc(mm, pud, addr);
- mutex_unlock(&mapping->i_mmap_mutex);
- return pte;
-}
-
-/*
- * unmap huge page backed by shared pte.
- *
- * Hugetlb pte page is ref counted at the time of mapping. If pte is shared
- * indicated by page_count > 1, unmap is achieved by clearing pud and
- * decrementing the ref count. If count == 1, the pte page is not shared.
- *
- * called with vma->vm_mm->page_table_lock held.
- *
- * returns: 1 successfully unmapped a shared pte page
- * 0 the underlying pte page is not shared, or it is the last user
- */
-int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
-{
- pgd_t *pgd = pgd_offset(mm, *addr);
- pud_t *pud = pud_offset(pgd, *addr);
-
- BUG_ON(page_count(virt_to_page(ptep)) == 0);
- if (page_count(virt_to_page(ptep)) == 1)
- return 0;
-
- pud_clear(pud);
- put_page(virt_to_page(ptep));
- *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
- return 1;
-}
-
pte_t *huge_pte_alloc(struct mm_struct *mm,
unsigned long addr, unsigned long sz)
{
--
1.8.1.4
* Re: [RFC PATCH 1/2] mm: hugetlb: Copy huge_pmd_share from x86 to mm.
2013-04-29 14:55 ` [RFC PATCH 1/2] mm: hugetlb: Copy " Steve Capper
@ 2013-04-29 15:26 ` Catalin Marinas
2013-04-29 15:47 ` Steve Capper
0 siblings, 1 reply; 8+ messages in thread
From: Catalin Marinas @ 2013-04-29 15:26 UTC (permalink / raw)
To: Steve Capper
Cc: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
Michal Hocko, Ken Chen, Mel Gorman, Will Deacon
Steve,
On Mon, Apr 29, 2013 at 03:55:55PM +0100, Steve Capper wrote:
> Under x86, multiple puds can be made to reference the same bank of
> huge pmds provided that they represent a full PUD_SIZE of shared
> huge memory that is aligned to a PUD_SIZE boundary.
>
> The code to share pmds does not require any architecture specific
> knowledge other than the fact that pmds can be indexed, thus can
> be beneficial to some other architectures.
>
> This patch copies the huge pmd sharing (and unsharing) logic from
> x86/ to mm/ and introduces a new config option to activate it:
> CONFIG_ARCH_WANT_HUGE_PMD_SHARE.
Just wondering whether more of it could be shared. The following look
pretty close to what you'd write for arm64:
- huge_pte_alloc()
- huge_pte_offset() (there is a pud_large macro on x86 which checks for
present & huge, we can replace it with just pud_huge in this function
as it already checks for present)
- follow_huge_pud()
- follow_huge_pmd()
Of course, arch-specific macros like pud_huge, pmd_huge would have to go
in a header file.
--
Catalin
* Re: [RFC PATCH 1/2] mm: hugetlb: Copy huge_pmd_share from x86 to mm.
2013-04-29 15:26 ` Catalin Marinas
@ 2013-04-29 15:47 ` Steve Capper
2013-04-29 16:07 ` Catalin Marinas
0 siblings, 1 reply; 8+ messages in thread
From: Steve Capper @ 2013-04-29 15:47 UTC (permalink / raw)
To: Catalin Marinas
Cc: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
Michal Hocko, Ken Chen, Mel Gorman, Will Deacon
On Mon, Apr 29, 2013 at 04:26:41PM +0100, Catalin Marinas wrote:
> Steve,
Hi Catalin,
>
> On Mon, Apr 29, 2013 at 03:55:55PM +0100, Steve Capper wrote:
> > Under x86, multiple puds can be made to reference the same bank of
> > huge pmds provided that they represent a full PUD_SIZE of shared
> > huge memory that is aligned to a PUD_SIZE boundary.
> >
> > The code to share pmds does not require any architecture specific
> > knowledge other than the fact that pmds can be indexed, thus can
> > be beneficial to some other architectures.
> >
> > This patch copies the huge pmd sharing (and unsharing) logic from
> > x86/ to mm/ and introduces a new config option to activate it:
> > CONFIG_ARCH_WANT_HUGE_PMD_SHARE.
>
> Just wondering whether more of it could be shared. The following look
> pretty close to what you'd write for arm64:
>
> - huge_pte_alloc()
> - huge_pte_offset() (there is a pud_large macro on x86 which checks for
> present & huge, we can replace it with just pud_huge in this function
> as it already checks for present)
> - follow_huge_pud()
> - follow_huge_pmd()
I did do something like this initially, then reined it back a bit
as it placed implicit restrictions on x86 and arm64.
If we enable 64K pages on arm64 for instance, we obviate the need
to share pmds (pmd_index doesn't exist for 64K pages). So I have a
slightly different huge_pte_alloc function to account for this.
I would be happy to move more code from x86 to mm though, as my
huge_pte_offset and follow_huge_p[mu]d functions are pretty much
identical to the x86 ones. This patch, I thought, was the most I
could get away with :-).
Cheers,
--
Steve
>
> Of course, arch-specific macros like pud_huge, pmd_huge would have to go
> in a header file.
>
> --
> Catalin
* Re: [RFC PATCH 1/2] mm: hugetlb: Copy huge_pmd_share from x86 to mm.
2013-04-29 15:47 ` Steve Capper
@ 2013-04-29 16:07 ` Catalin Marinas
0 siblings, 0 replies; 8+ messages in thread
From: Catalin Marinas @ 2013-04-29 16:07 UTC (permalink / raw)
To: Steve Capper
Cc: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
Michal Hocko, Ken Chen, Mel Gorman, Will Deacon
On Mon, Apr 29, 2013 at 04:47:49PM +0100, Steve Capper wrote:
> On Mon, Apr 29, 2013 at 04:26:41PM +0100, Catalin Marinas wrote:
> > On Mon, Apr 29, 2013 at 03:55:55PM +0100, Steve Capper wrote:
> > > Under x86, multiple puds can be made to reference the same bank of
> > > huge pmds provided that they represent a full PUD_SIZE of shared
> > > huge memory that is aligned to a PUD_SIZE boundary.
> > >
> > > The code to share pmds does not require any architecture specific
> > > knowledge other than the fact that pmds can be indexed, thus can
> > > be beneficial to some other architectures.
> > >
> > > This patch copies the huge pmd sharing (and unsharing) logic from
> > > x86/ to mm/ and introduces a new config option to activate it:
> > > CONFIG_ARCH_WANT_HUGE_PMD_SHARE.
> >
> > Just wondering whether more of it could be shared. The following look
> > pretty close to what you'd write for arm64:
> >
> > - huge_pte_alloc()
> > - huge_pte_offset() (there is a pud_large macro on x86 which checks for
> > present & huge, we can replace it with just pud_huge in this function
> > as it already checks for present)
> > - follow_huge_pud()
> > - follow_huge_pmd()
>
> I did do something like this initially, then reined it back a bit
> as it placed implicit restrictions on x86 and arm64.
>
> If we enable 64K pages on arm64 for instance, we obviate the need
> to share pmds (pmd_index doesn't exist for 64K pages). So I have a
> slightly different huge_pte_alloc function to account for this.
I guess with 64K pages on arm64 (two levels of page tables), you can't
share the pmds anyway. pud_none() is defined as 0 in pgtable-nopmd.h.
huge_pte_alloc() can probably be the same with 64K pages since pud_alloc
always succeeds (pmd/pud/pgd are all the same).
So with some #ifdef __PAGETABLE_PMD_FOLDED (not nice but it allows for
some more code sharing) you can add empty (NULL-returning)
huge_pmd_share/huge_pmd_unshare functions and avoid the compiler error
for pmd_index.
--
Catalin
* Re: [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm.
2013-04-29 14:55 [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm Steve Capper
2013-04-29 14:55 ` [RFC PATCH 1/2] mm: hugetlb: Copy " Steve Capper
2013-04-29 14:55 ` [RFC PATCH 2/2] x86: mm: Remove x86 version of huge_pmd_share Steve Capper
@ 2013-04-29 20:22 ` David Rientjes
2013-04-29 22:10 ` Catalin Marinas
2 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2013-04-29 20:22 UTC (permalink / raw)
To: Steve Capper
Cc: linux-mm, x86, linux-arch, Michal Hocko, Ken Chen, Mel Gorman,
Catalin Marinas, Will Deacon
On Mon, 29 Apr 2013, Steve Capper wrote:
> Under x86, multiple puds can be made to reference the same bank of
> huge pmds provided that they represent a full PUD_SIZE of shared
> huge memory that is aligned to a PUD_SIZE boundary.
>
> The code to share pmds does not require any architecture specific
> knowledge other than the fact that pmds can be indexed, thus can
> be beneficial to some other architectures.
>
> This RFC promotes the huge_pmd_share code (and dependencies) from
> x86 to mm to make it accessible to other architectures.
>
> I am working on ARM64 support for huge pages and rather than
> duplicate the x86 huge_pmd_share code, I thought it would be better
> to promote it to mm.
>
No objections to this, but I think you should do it as the first patch in
a series that adds the arm support. There's no need for this to be moved
until that support is tested, proposed, reviewed, and merged.
* Re: [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm.
2013-04-29 20:22 ` [RFC PATCH 0/2] mm: Promote huge_pmd_share from x86 to mm David Rientjes
@ 2013-04-29 22:10 ` Catalin Marinas
0 siblings, 0 replies; 8+ messages in thread
From: Catalin Marinas @ 2013-04-29 22:10 UTC (permalink / raw)
To: David Rientjes
Cc: Steve Capper, linux-mm@kvack.org, x86@kernel.org,
linux-arch@vger.kernel.org, Michal Hocko, Ken Chen, Mel Gorman,
Will Deacon
On Mon, Apr 29, 2013 at 09:22:38PM +0100, David Rientjes wrote:
> On Mon, 29 Apr 2013, Steve Capper wrote:
>
> > Under x86, multiple puds can be made to reference the same bank of
> > huge pmds provided that they represent a full PUD_SIZE of shared
> > huge memory that is aligned to a PUD_SIZE boundary.
> >
> > The code to share pmds does not require any architecture specific
> > knowledge other than the fact that pmds can be indexed, thus can
> > be beneficial to some other architectures.
> >
> > This RFC promotes the huge_pmd_share code (and dependencies) from
> > x86 to mm to make it accessible to other architectures.
> >
> > I am working on ARM64 support for huge pages and rather than
> > duplicate the x86 huge_pmd_share code, I thought it would be better
> > to promote it to mm.
>
> No objections to this, but I think you should do it as the first patch in
> a series that adds the arm support. There's no need for this to be moved
> until that support is tested, proposed, reviewed, and merged.
I agree, it would be good to see the arm64 support in this series as
well (though eventual upstreaming may go via separate paths).
--
Catalin