* [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel
@ 2007-02-20 7:44 Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 2/5] This is a hack to get_unmapped_area to make the SPE 64K code work Benjamin Herrenschmidt
` (5 more replies)
0 siblings, 6 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
This series of patches supports userland mappings of SPE local stores
using 64K hardware pages rather than 4K on a kernel using a 4K base
page size, in order to improve performance.
The current version of this series relies on a hack to the generic code
which is probably not acceptable upstream. I have plans for a proper
fix but haven't had time to do it yet.
The first patch of the series is fairly independent of the rest and
should be applied to 2.6.21, as I believe it fixes a bug in the handling
of huge pages from SPEs.
* [PATCH 1/5] powerpc: Fix spu SLB invalidations
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 2/5] This is a hack to get_unmapped_area to make the SPE 64K code work Benjamin Herrenschmidt
@ 2007-02-20 7:44 ` Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 3/5] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
` (3 subsequent siblings)
5 siblings, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
The SPU code doesn't properly invalidate the SPUs' SLBs when necessary,
for example when a segment size is changed by the hugetlbfs code. In
addition, it saves and restores the SLB content on context switches,
which makes it harder to handle those invalidations properly.
This patch removes the saving & restoring for now; something more
efficient might be found later on. It also adds a spu_flush_all_slbs(mm)
that can be used by the core mm code to flush the SLBs of all SPEs that
are running a given mm at the time of the flush.
In order to do that, it adds a spinlock protecting the list of all SPEs
and moves some bits & pieces from spufs to spu_base.c.
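For illustration only, here is a minimal sketch (not part of the patch)
of how a core mm call site is expected to use the new hook; the real
call sites are in the hash_utils_64.c and hugetlbpage.c hunks below,
and example_segment_size_changed() is a hypothetical name used just for
this sketch:

        #include <asm/spu.h>

        /* After changing the segment/page size layout of "mm", ask any
         * SPE currently running that mm to drop its stale SLB entries.
         * spu_flush_all_slbs() walks the global SPE list under the new
         * spu_list_lock and invalidates the SLBs of matching SPEs. */
        static void example_segment_size_changed(struct mm_struct *mm)
        {
        #ifdef CONFIG_SPE_BASE
                spu_flush_all_slbs(mm);
        #endif
        }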
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/mm/hash_utils_64.c | 6 ++
arch/powerpc/mm/hugetlbpage.c | 4 +
arch/powerpc/platforms/cell/spu_base.c | 81 ++++++++++++++++++++++++-----
arch/powerpc/platforms/cell/spufs/sched.c | 13 ----
arch/powerpc/platforms/cell/spufs/switch.c | 62 +---------------------
include/asm-powerpc/spu.h | 7 ++
include/asm-powerpc/spu_csa.h | 4 -
7 files changed, 91 insertions(+), 86 deletions(-)
Index: linux-cell/arch/powerpc/mm/hugetlbpage.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hugetlbpage.c 2007-02-20 14:33:23.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hugetlbpage.c 2007-02-20 18:03:15.000000000 +1100
@@ -24,6 +24,7 @@
#include <asm/machdep.h>
#include <asm/cputable.h>
#include <asm/tlb.h>
+#include <asm/spu.h>
#include <linux/sysctl.h>
@@ -513,6 +514,9 @@ int prepare_hugepage_range(unsigned long
if ((addr + len) > 0x100000000UL)
err = open_high_hpage_areas(current->mm,
HTLB_AREA_MASK(addr, len));
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(current->mm);
+#endif
if (err) {
printk(KERN_DEBUG "prepare_hugepage_range(%lx, %lx)"
" failed (lowmask: 0x%04hx, highmask: 0x%04hx)\n",
Index: linux-cell/arch/powerpc/platforms/cell/spu_base.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spu_base.c 2007-02-20 14:32:31.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spu_base.c 2007-02-20 18:03:15.000000000 +1100
@@ -38,8 +38,61 @@
const struct spu_management_ops *spu_management_ops;
const struct spu_priv1_ops *spu_priv1_ops;
+static struct list_head spu_list[MAX_NUMNODES];
+static LIST_HEAD(spu_full_list);
+static DEFINE_MUTEX(spu_mutex);
+static spinlock_t spu_list_lock = SPIN_LOCK_UNLOCKED;
+
EXPORT_SYMBOL_GPL(spu_priv1_ops);
+void spu_invalidate_slbs(struct spu *spu)
+{
+ struct spu_priv2 __iomem *priv2 = spu->priv2;
+
+ if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK)
+ out_be64(&priv2->slb_invalidate_all_W, 0UL);
+}
+EXPORT_SYMBOL_GPL(spu_invalidate_slbs);
+
+/* This is called by the MM core when a segment size is changed, to
+ * request a flush of all the SPEs using a given mm
+ */
+void spu_flush_all_slbs(struct mm_struct *mm)
+{
+ struct spu *spu;
+ unsigned long flags;
+
+ spin_lock_irqsave(&spu_list_lock, flags);
+ list_for_each_entry(spu, &spu_full_list, full_list) {
+ if (spu->mm == mm)
+ spu_invalidate_slbs(spu);
+ }
+ spin_unlock_irqrestore(&spu_list_lock, flags);
+}
+
+/* The hack below stinks... try to do something better one of
+ * these days... Does it even work properly with NR_CPUS == 1 ?
+ */
+static inline void mm_needs_global_tlbie(struct mm_struct *mm)
+{
+ int nr = (NR_CPUS > 1) ? NR_CPUS : NR_CPUS + 1;
+
+ /* Global TLBIE broadcast required with SPEs. */
+ __cpus_setall(&mm->cpu_vm_mask, nr);
+}
+
+void spu_associate_mm(struct spu *spu, struct mm_struct *mm)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&spu_list_lock, flags);
+ spu->mm = mm;
+ spin_unlock_irqrestore(&spu_list_lock, flags);
+ if (mm)
+ mm_needs_global_tlbie(mm);
+}
+EXPORT_SYMBOL_GPL(spu_associate_mm);
+
static int __spu_trap_invalid_dma(struct spu *spu)
{
pr_debug("%s\n", __FUNCTION__);
@@ -74,6 +127,7 @@ static int __spu_trap_data_seg(struct sp
struct spu_priv2 __iomem *priv2 = spu->priv2;
struct mm_struct *mm = spu->mm;
u64 esid, vsid, llp;
+ int psize;
pr_debug("%s\n", __FUNCTION__);
@@ -90,22 +144,25 @@ static int __spu_trap_data_seg(struct sp
case USER_REGION_ID:
#ifdef CONFIG_HUGETLB_PAGE
if (in_hugepage_area(mm->context, ea))
- llp = mmu_psize_defs[mmu_huge_psize].sllp;
+ psize = mmu_huge_psize;
else
#endif
- llp = mmu_psize_defs[mmu_virtual_psize].sllp;
+ psize = mm->context.user_psize;
vsid = (get_vsid(mm->context.id, ea) << SLB_VSID_SHIFT) |
- SLB_VSID_USER | llp;
+ SLB_VSID_USER;
break;
case VMALLOC_REGION_ID:
- llp = mmu_psize_defs[mmu_virtual_psize].sllp;
+ if (ea < VMALLOC_END)
+ psize = mmu_vmalloc_psize;
+ else
+ psize = mmu_io_psize;
vsid = (get_kernel_vsid(ea) << SLB_VSID_SHIFT) |
- SLB_VSID_KERNEL | llp;
+ SLB_VSID_KERNEL;
break;
case KERNEL_REGION_ID:
- llp = mmu_psize_defs[mmu_linear_psize].sllp;
+ psize = mmu_linear_psize;
vsid = (get_kernel_vsid(ea) << SLB_VSID_SHIFT) |
- SLB_VSID_KERNEL | llp;
+ SLB_VSID_KERNEL;
break;
default:
/* Future: support kernel segments so that drivers
@@ -114,9 +171,10 @@ static int __spu_trap_data_seg(struct sp
pr_debug("invalid region access at %016lx\n", ea);
return 1;
}
+ llp = mmu_psize_defs[psize].sllp;
out_be64(&priv2->slb_index_W, spu->slb_replace);
- out_be64(&priv2->slb_vsid_RW, vsid);
+ out_be64(&priv2->slb_vsid_RW, vsid | llp);
out_be64(&priv2->slb_esid_RW, esid);
spu->slb_replace++;
@@ -330,10 +388,6 @@ static void spu_free_irqs(struct spu *sp
free_irq(spu->irqs[2], spu);
}
-static struct list_head spu_list[MAX_NUMNODES];
-static LIST_HEAD(spu_full_list);
-static DEFINE_MUTEX(spu_mutex);
-
static void spu_init_channels(struct spu *spu)
{
static const struct {
@@ -593,6 +647,7 @@ static int __init create_spu(void *data)
struct spu *spu;
int ret;
static int number;
+ unsigned long flags;
ret = -ENOMEM;
spu = kzalloc(sizeof (*spu), GFP_KERNEL);
@@ -620,8 +675,10 @@ static int __init create_spu(void *data)
goto out_free_irqs;
mutex_lock(&spu_mutex);
+ spin_lock_irqsave(&spu_list_lock, flags);
list_add(&spu->list, &spu_list[spu->node]);
list_add(&spu->full_list, &spu_full_list);
+ spin_unlock_irqrestore(&spu_list_lock, flags);
mutex_unlock(&spu_mutex);
goto out;
Index: linux-cell/arch/powerpc/platforms/cell/spufs/switch.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/switch.c 2007-02-20 14:35:26.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/switch.c 2007-02-20 18:03:13.000000000 +1100
@@ -468,26 +468,6 @@ static inline void wait_purge_complete(s
MFC_CNTL_PURGE_DMA_COMPLETE);
}
-static inline void save_mfc_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
- int i;
-
- /* Save, Step 29:
- * If MFC_SR1[R]='1', save SLBs in CSA.
- */
- if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK) {
- csa->priv2.slb_index_W = in_be64(&priv2->slb_index_W);
- for (i = 0; i < 8; i++) {
- out_be64(&priv2->slb_index_W, i);
- eieio();
- csa->slb_esid_RW[i] = in_be64(&priv2->slb_esid_RW);
- csa->slb_vsid_RW[i] = in_be64(&priv2->slb_vsid_RW);
- eieio();
- }
- }
-}
-
static inline void setup_mfc_sr1(struct spu_state *csa, struct spu *spu)
{
/* Save, Step 30:
@@ -708,20 +688,6 @@ static inline void resume_mfc_queue(stru
out_be64(&priv2->mfc_control_RW, MFC_CNTL_RESUME_DMA_QUEUE);
}
-static inline void invalidate_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
-
- /* Save, Step 45:
- * Restore, Step 19:
- * If MFC_SR1[R]=1, write 0 to SLB_Invalidate_All.
- */
- if (spu_mfc_sr1_get(spu) & MFC_STATE1_RELOCATE_MASK) {
- out_be64(&priv2->slb_invalidate_all_W, 0UL);
- eieio();
- }
-}
-
static inline void get_kernel_slb(u64 ea, u64 slb[2])
{
u64 llp;
@@ -765,7 +731,7 @@ static inline void setup_mfc_slbs(struct
* MFC_SR1[R]=1 (in other words, assume that
* translation is desired by OS environment).
*/
- invalidate_slbs(csa, spu);
+ spu_invalidate_slbs(spu);
get_kernel_slb((unsigned long)&spu_save_code[0], code_slb);
get_kernel_slb((unsigned long)csa->lscsa, lscsa_slb);
load_mfc_slb(spu, code_slb, 0);
@@ -1718,27 +1684,6 @@ static inline void check_ppuint_mb_stat(
}
}
-static inline void restore_mfc_slbs(struct spu_state *csa, struct spu *spu)
-{
- struct spu_priv2 __iomem *priv2 = spu->priv2;
- int i;
-
- /* Restore, Step 68:
- * If MFC_SR1[R]='1', restore SLBs from CSA.
- */
- if (csa->priv1.mfc_sr1_RW & MFC_STATE1_RELOCATE_MASK) {
- for (i = 0; i < 8; i++) {
- out_be64(&priv2->slb_index_W, i);
- eieio();
- out_be64(&priv2->slb_esid_RW, csa->slb_esid_RW[i]);
- out_be64(&priv2->slb_vsid_RW, csa->slb_vsid_RW[i]);
- eieio();
- }
- out_be64(&priv2->slb_index_W, csa->priv2.slb_index_W);
- eieio();
- }
-}
-
static inline void restore_mfc_sr1(struct spu_state *csa, struct spu *spu)
{
/* Restore, Step 69:
@@ -1875,7 +1820,6 @@ static void save_csa(struct spu_state *p
set_mfc_tclass_id(prev, spu); /* Step 26. */
purge_mfc_queue(prev, spu); /* Step 27. */
wait_purge_complete(prev, spu); /* Step 28. */
- save_mfc_slbs(prev, spu); /* Step 29. */
setup_mfc_sr1(prev, spu); /* Step 30. */
save_spu_npc(prev, spu); /* Step 31. */
save_spu_privcntl(prev, spu); /* Step 32. */
@@ -1987,7 +1931,7 @@ static void harvest(struct spu_state *pr
reset_spu_privcntl(prev, spu); /* Step 16. */
reset_spu_lslr(prev, spu); /* Step 17. */
setup_mfc_sr1(prev, spu); /* Step 18. */
- invalidate_slbs(prev, spu); /* Step 19. */
+ spu_invalidate_slbs(spu); /* Step 19. */
reset_ch_part1(prev, spu); /* Step 20. */
reset_ch_part2(prev, spu); /* Step 21. */
enable_interrupts(prev, spu); /* Step 22. */
@@ -2055,7 +1999,7 @@ static void restore_csa(struct spu_state
restore_spu_mb(next, spu); /* Step 65. */
check_ppu_mb_stat(next, spu); /* Step 66. */
check_ppuint_mb_stat(next, spu); /* Step 67. */
- restore_mfc_slbs(next, spu); /* Step 68. */
+ spu_invalidate_slbs(spu); /* Modified Step 68. */
restore_mfc_sr1(next, spu); /* Step 69. */
restore_other_spu_access(next, spu); /* Step 70. */
restore_spu_runcntl(next, spu); /* Step 71. */
Index: linux-cell/include/asm-powerpc/spu.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/spu.h 2007-02-20 14:34:20.000000000 +1100
+++ linux-cell/include/asm-powerpc/spu.h 2007-02-20 15:22:27.000000000 +1100
@@ -165,6 +165,13 @@ int spu_irq_class_0_bottom(struct spu *s
int spu_irq_class_1_bottom(struct spu *spu);
void spu_irq_setaffinity(struct spu *spu, int cpu);
+extern void spu_invalidate_slbs(struct spu *spu);
+extern void spu_associate_mm(struct spu *spu, struct mm_struct *mm);
+
+/* Calls from the memory management to the SPU */
+struct mm_struct;
+extern void spu_flush_all_slbs(struct mm_struct *mm);
+
/* system callbacks from the SPU */
struct spu_syscall_block {
u64 nr_ret;
Index: linux-cell/include/asm-powerpc/spu_csa.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/spu_csa.h 2007-02-20 14:35:02.000000000 +1100
+++ linux-cell/include/asm-powerpc/spu_csa.h 2007-02-20 18:03:14.000000000 +1100
@@ -221,8 +221,6 @@ struct spu_priv2_collapsed {
* @spu_chnlcnt_RW: Array of saved channel counts.
* @spu_chnldata_RW: Array of saved channel data.
* @suspend_time: Time stamp when decrementer disabled.
- * @slb_esid_RW: Array of saved SLB esid entries.
- * @slb_vsid_RW: Array of saved SLB vsid entries.
*
* Structure representing the whole of the SPU
* context save area (CSA). This struct contains
@@ -245,8 +243,6 @@ struct spu_state {
u32 spu_mailbox_data[4];
u32 pu_mailbox_data[1];
unsigned long suspend_time;
- u64 slb_esid_RW[8];
- u64 slb_vsid_RW[8];
spinlock_t register_lock;
};
Index: linux-cell/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hash_utils_64.c 2007-02-20 14:44:10.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hash_utils_64.c 2007-02-20 18:03:15.000000000 +1100
@@ -685,6 +685,9 @@ int hash_page(unsigned long ea, unsigned
"non-cacheable mapping\n");
psize = mmu_vmalloc_psize = MMU_PAGE_4K;
}
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(mm);
+#endif
}
if (user_region) {
if (psize != get_paca()->context.user_psize) {
@@ -759,6 +762,9 @@ void hash_preload(struct mm_struct *mm,
mmu_psize_defs[MMU_PAGE_4K].sllp;
get_paca()->context = mm->context;
slb_flush_and_rebolt();
+#ifdef CONFIG_SPE_BASE
+ spu_flush_all_slbs(mm);
+#endif
}
}
if (mm->context.user_psize == MMU_PAGE_64K)
Index: linux-cell/arch/powerpc/platforms/cell/spufs/sched.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/sched.c 2007-02-20 15:23:02.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/sched.c 2007-02-20 15:25:07.000000000 +1100
@@ -127,14 +127,6 @@ static void spu_remove_from_active_list(
mutex_unlock(&spu_prio->active_mutex[node]);
}
-static inline void mm_needs_global_tlbie(struct mm_struct *mm)
-{
- int nr = (NR_CPUS > 1) ? NR_CPUS : NR_CPUS + 1;
-
- /* Global TLBIE broadcast required with SPEs. */
- __cpus_setall(&mm->cpu_vm_mask, nr);
-}
-
static BLOCKING_NOTIFIER_HEAD(spu_switch_notifier);
static void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
@@ -167,8 +159,7 @@ static void spu_bind_context(struct spu
ctx->spu = spu;
ctx->ops = &spu_hw_ops;
spu->pid = current->pid;
- spu->mm = ctx->owner;
- mm_needs_global_tlbie(spu->mm);
+ spu_associate_mm(spu, ctx->owner);
spu->ibox_callback = spufs_ibox_callback;
spu->wbox_callback = spufs_wbox_callback;
spu->stop_callback = spufs_stop_callback;
@@ -205,7 +196,7 @@ static void spu_unbind_context(struct sp
spu->stop_callback = NULL;
spu->mfc_callback = NULL;
spu->dma_callback = NULL;
- spu->mm = NULL;
+ spu_associate_mm(spu, NULL);
spu->pid = 0;
ctx->ops = &spu_backing_ops;
ctx->spu = NULL;
* [PATCH 2/5] This is a hack to get_unmapped_area to make the SPE 64K code work.
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
@ 2007-02-20 7:44 ` Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 1/5] powerpc: Fix spu SLB invalidations Benjamin Herrenschmidt
` (4 subsequent siblings)
5 siblings, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
(Though it might prove not to have nasty side effects ...)
The basic idea is that if the filesystem's get_unmapped_area was used,
we skip the hugepage check. That assumes that the only filesystems that
provide a g_u_a callback are either hugetlbfs itself, or filesystems
that have arch-specific code that already "knows" not to collide with
hugetlbfs.
A proper fix will be done later, basically by removing the hugetlbfs
hacks completely from get_unmapped_area and calling down to the mm
and/or the filesystem g_u_a implementations for MAP_FIXED as well.
(Note that this will still rely on the fact that filesystems that
provide a g_u_a "know" how to return areas that don't collide with
hugetlbfs, so the base assumption is the same as for this hack.)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
mm/mmap.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
Index: linux-cell/mm/mmap.c
===================================================================
--- linux-cell.orig/mm/mmap.c 2007-02-20 18:09:12.000000000 +1100
+++ linux-cell/mm/mmap.c 2007-02-20 18:10:08.000000000 +1100
@@ -1357,14 +1357,17 @@ unsigned long
get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags)
{
- unsigned long ret;
+ unsigned long ret = 0;
+ int fs_area = 0;
if (!(flags & MAP_FIXED)) {
unsigned long (*get_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
get_area = current->mm->get_unmapped_area;
- if (file && file->f_op && file->f_op->get_unmapped_area)
+ if (file && file->f_op && file->f_op->get_unmapped_area) {
get_area = file->f_op->get_unmapped_area;
+ fs_area = 1;
+ }
addr = get_area(file, addr, len, pgoff, flags);
if (IS_ERR_VALUE(addr))
return addr;
@@ -1380,7 +1383,7 @@ get_unmapped_area(struct file *file, uns
* can be made suitable for hugepages.
*/
ret = prepare_hugepage_range(addr, len, pgoff);
- } else {
+ } else if (!fs_area) {
/*
* Ensure that a normal request is not falling in a
* reserved hugepage range. For some archs like IA-64,
* [PATCH 3/5] powerpc: Introduce address space "slices"
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 2/5] This is a hack to get_unmapped_area to make the SPE 64K code work Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 1/5] powerpc: Fix spu SLB invalidations Benjamin Herrenschmidt
@ 2007-02-20 7:44 ` Benjamin Herrenschmidt
2007-03-01 6:11 ` [PATCH] Allow spufs to build as a module with slices enabled Michael Ellerman
2007-02-20 7:44 ` [PATCH 4/5] powerpc: Add ability to 4K kernel to hash in 64K pages Benjamin Herrenschmidt
` (2 subsequent siblings)
5 siblings, 1 reply; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
The basic issue is to be able to do what hugetlbfs does but with
different page sizes for some other special filesystems. More
specifically, my needs are:
- Huge pages
- SPE local store mappings using 64K pages on a 4K base page size
kernel on Cell
- Some special 4K segments in 64K-page kernels for mapping a dodgy
species of powerpc-specific infiniband hardware that requires 4K MMU
mappings for various reasons I won't explain here.
The main issues are:
- To maintain/keep track of the page size per "segment" (as we can
only have one page size per segment on powerpc, segments being 256MB
divisions of the address space).
- To make sure special mappings stay within their allotted
"segments" (including MAP_FIXED crap)
- To make sure everybody else doesn't mmap/brk/grow_stack into a
"segment" that is used for a special mapping
Some of the necessary mechanisms to handle that were present in the
hugetlbfs code, but mostly in ways not suitable for anything else.
The patch addresses these in various ways, described quickly below,
that hijack some of the existing hugetlbfs callbacks.
The ideal solution requires some changes to the generic
get_unmapped_area(), among others, to get rid of the hugetlbfs hacks in
there, and instead, make sure that the fs and mm get_unmapped_area are
also called for MAP_FIXED. We might also need to add an mm callback to
validate a mapping.
I intend to do those changes separately and then adapt this work to use
them.
So what is a slice? Well, I re-used the mechanism formerly used by our
hugetlbfs implementation, which divides the address space into
"meta-segments", which I call "slices". The division is done using
256MB slices below 4G, and 1T slices above. Thus the address space is
currently divided into 16 "low" slices and 16 "high" slices. (Special
case: high slice 0 is the area between 4G and 1T.)
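As a quick illustration of that layout (re-using the SLICE_* macros
added to include/asm-powerpc/page_64.h by this patch; the helper name
is hypothetical and only here for illustration):

        /*
         * GET_LOW_SLICE_INDEX(addr)  == addr >> 28  (256MB slices below 4G)
         * GET_HIGH_SLICE_INDEX(addr) == addr >> 40  (1T slices above)
         *
         * e.g. 0x30000000    -> low slice 3
         *      0x8000000000  -> high slice 0 (the 4G..1T special case)
         *      0x18000000000 -> high slice 1
         */
        static unsigned int example_slice_index(unsigned long addr)
        {
                return (addr < SLICE_LOW_TOP) ? GET_LOW_SLICE_INDEX(addr)
                                              : GET_HIGH_SLICE_INDEX(addr);
        }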
Doing so significantly simplifies the tracking of segments and avoids
having to keep track of all the 256MB segments in the address space.
While I used the "concepts" of hugetlbfs, I mostly re-implemented
everything in a more generic way and "ported" hugetlbfs to it.
Slices can have an associated page size, which is encoded in the mmu
context and used by the SLB miss handler to set the segment sizes. The
hash code currently doesn't care; it has a specific check for hugepages,
though I might add a mechanism to provide per-slice hash mapping
functions in the future.
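To make that encoding concrete, a minimal sketch of the decode step
(this mirrors the get_slice_psize() added in slice.c below, with 4 bits
of page size index per slice; the function name here is only for
illustration):

        static unsigned int example_decode_psize(struct mm_struct *mm,
                                                 unsigned long addr)
        {
                u64 psizes;
                int index;

                if (addr < SLICE_LOW_TOP) {
                        psizes = mm->context.low_slices_psize;
                        index = GET_LOW_SLICE_INDEX(addr);
                } else {
                        psizes = mm->context.high_slices_psize;
                        index = GET_HIGH_SLICE_INDEX(addr);
                }
                /* 4 bits per slice -> the MMU page size index for addr */
                return (psizes >> (index * 4)) & 0xf;
        }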
The slice code provides a pair of "generic" get_unmapped_area()
functions (bottom-up and top-down) that should work with any slice
size. There is some trickiness here, so I would appreciate people
having a look at the implementation of these and letting me know if I
got something wrong.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig | 5
arch/powerpc/kernel/asm-offsets.c | 16
arch/powerpc/mm/Makefile | 1
arch/powerpc/mm/hash_utils_64.c | 124 +++---
arch/powerpc/mm/hugetlbpage.c | 528 ---------------------------
arch/powerpc/mm/mmu_context_64.c | 10
arch/powerpc/mm/slb.c | 11
arch/powerpc/mm/slb_low.S | 54 +-
arch/powerpc/mm/slice.c | 630 +++++++++++++++++++++++++++++++++
arch/powerpc/platforms/cell/spu_base.c | 9
include/asm-powerpc/mmu.h | 12
include/asm-powerpc/paca.h | 2
include/asm-powerpc/page_64.h | 87 ++--
13 files changed, 827 insertions(+), 662 deletions(-)
Index: linux-cell/arch/powerpc/Kconfig
===================================================================
--- linux-cell.orig/arch/powerpc/Kconfig 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/Kconfig 2007-02-20 15:28:08.000000000 +1100
@@ -318,6 +318,11 @@ config PPC_STD_MMU_32
def_bool y
depends on PPC_STD_MMU && PPC32
+config PPC_MM_SLICES
+ bool
+ default y if HUGETLB_PAGE
+ default n
+
config VIRT_CPU_ACCOUNTING
bool "Deterministic task and CPU time accounting"
depends on PPC64
Index: linux-cell/include/asm-powerpc/mmu.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/mmu.h 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/include/asm-powerpc/mmu.h 2007-02-20 15:28:08.000000000 +1100
@@ -355,15 +355,17 @@ typedef unsigned long mm_context_id_t;
typedef struct {
mm_context_id_t id;
- u16 user_psize; /* page size index */
- u16 sllp; /* SLB entry page size encoding */
-#ifdef CONFIG_HUGETLB_PAGE
- u16 low_htlb_areas, high_htlb_areas;
+ u16 user_psize; /* base page size index */
+
+#ifdef CONFIG_PPC_MM_SLICES
+ u64 low_slices_psize; /* SLB page size encodings */
+ u64 high_slices_psize; /* 4 bits per slice for now */
+#else
+ u16 sllp; /* SLB page size encoding */
#endif
unsigned long vdso_base;
} mm_context_t;
-
static inline unsigned long vsid_scramble(unsigned long protovsid)
{
#if 0
Index: linux-cell/include/asm-powerpc/paca.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/paca.h 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/include/asm-powerpc/paca.h 2007-02-20 15:28:08.000000000 +1100
@@ -82,8 +82,8 @@ struct paca_struct {
mm_context_t context;
u16 vmalloc_sllp;
- u16 slb_cache[SLB_CACHE_ENTRIES];
u16 slb_cache_ptr;
+ u16 slb_cache[SLB_CACHE_ENTRIES];
/*
* then miscellaneous read-write fields
Index: linux-cell/include/asm-powerpc/page_64.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/page_64.h 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/include/asm-powerpc/page_64.h 2007-02-20 15:28:08.000000000 +1100
@@ -88,58 +88,57 @@ extern unsigned int HPAGE_SHIFT;
#endif /* __ASSEMBLY__ */
-#ifdef CONFIG_HUGETLB_PAGE
+#ifdef CONFIG_PPC_MM_SLICES
+
+#define SLICE_LOW_SHIFT 28
+#define SLICE_HIGH_SHIFT 40
+
+#define SLICE_LOW_TOP (0x100000000ul)
+#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
+#define SLICE_NUM_HIGH (PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+
+#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
+#define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT)
+
+#ifndef __ASSEMBLY__
-#define HTLB_AREA_SHIFT 40
-#define HTLB_AREA_SIZE (1UL << HTLB_AREA_SHIFT)
-#define GET_HTLB_AREA(x) ((x) >> HTLB_AREA_SHIFT)
-
-#define LOW_ESID_MASK(addr, len) \
- (((1U << (GET_ESID(min((addr)+(len)-1, 0x100000000UL))+1)) \
- - (1U << GET_ESID(min((addr), 0x100000000UL)))) & 0xffff)
-#define HTLB_AREA_MASK(addr, len) (((1U << (GET_HTLB_AREA(addr+len-1)+1)) \
- - (1U << GET_HTLB_AREA(addr))) & 0xffff)
+struct slice_mask {
+ u16 low_slices;
+ u16 high_slices;
+};
+
+struct mm_struct;
+
+extern unsigned long slice_get_unmapped_area(unsigned long addr,
+ unsigned long len,
+ unsigned long flags,
+ unsigned int psize,
+ int topdown,
+ int use_cache);
+
+extern unsigned int get_slice_psize(struct mm_struct *mm,
+ unsigned long addr);
+
+extern void slice_init_context(struct mm_struct *mm, unsigned int psize);
+extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
#define ARCH_HAS_HUGEPAGE_ONLY_RANGE
+extern int is_hugepage_only_range(struct mm_struct *m,
+ unsigned long addr,
+ unsigned long len);
+
+#endif /* __ASSEMBLY__ */
+#else
+#define slice_init()
+#endif /* CONFIG_PPC_MM_SLICES */
+
+#ifdef CONFIG_HUGETLB_PAGE
+
#define ARCH_HAS_HUGETLB_FREE_PGD_RANGE
#define ARCH_HAS_PREPARE_HUGEPAGE_RANGE
#define ARCH_HAS_SETCLEAR_HUGE_PTE
-
-#define touches_hugepage_low_range(mm, addr, len) \
- (((addr) < 0x100000000UL) \
- && (LOW_ESID_MASK((addr), (len)) & (mm)->context.low_htlb_areas))
-#define touches_hugepage_high_range(mm, addr, len) \
- ((((addr) + (len)) > 0x100000000UL) \
- && (HTLB_AREA_MASK((addr), (len)) & (mm)->context.high_htlb_areas))
-
-#define __within_hugepage_low_range(addr, len, segmask) \
- ( (((addr)+(len)) <= 0x100000000UL) \
- && ((LOW_ESID_MASK((addr), (len)) | (segmask)) == (segmask)))
-#define within_hugepage_low_range(addr, len) \
- __within_hugepage_low_range((addr), (len), \
- current->mm->context.low_htlb_areas)
-#define __within_hugepage_high_range(addr, len, zonemask) \
- ( ((addr) >= 0x100000000UL) \
- && ((HTLB_AREA_MASK((addr), (len)) | (zonemask)) == (zonemask)))
-#define within_hugepage_high_range(addr, len) \
- __within_hugepage_high_range((addr), (len), \
- current->mm->context.high_htlb_areas)
-
-#define is_hugepage_only_range(mm, addr, len) \
- (touches_hugepage_high_range((mm), (addr), (len)) || \
- touches_hugepage_low_range((mm), (addr), (len)))
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
-#define in_hugepage_area(context, addr) \
- (cpu_has_feature(CPU_FTR_16M_PAGE) && \
- ( ( (addr) >= 0x100000000UL) \
- ? ((1 << GET_HTLB_AREA(addr)) & (context).high_htlb_areas) \
- : ((1 << GET_ESID(addr)) & (context).low_htlb_areas) ) )
-
-#else /* !CONFIG_HUGETLB_PAGE */
-
-#define in_hugepage_area(mm, addr) 0
-
#endif /* !CONFIG_HUGETLB_PAGE */
#ifdef MODULE
Index: linux-cell/arch/powerpc/mm/Makefile
===================================================================
--- linux-cell.orig/arch/powerpc/mm/Makefile 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/Makefile 2007-02-20 15:28:08.000000000 +1100
@@ -18,4 +18,5 @@ obj-$(CONFIG_40x) += 4xx_mmu.o
obj-$(CONFIG_44x) += 44x_mmu.o
obj-$(CONFIG_FSL_BOOKE) += fsl_booke_mmu.o
obj-$(CONFIG_NEED_MULTIPLE_NODES) += numa.o
+obj-$(CONFIG_PPC_MM_SLICES) += slice.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
Index: linux-cell/arch/powerpc/mm/slice.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-cell/arch/powerpc/mm/slice.c 2007-02-20 15:28:08.000000000 +1100
@@ -0,0 +1,630 @@
+/*
+ * address space "slices" (meta-segments) support
+ *
+ * Copyright (C) 2007 Benjamin Herrenschmidt, IBM Corporation.
+ *
+ * Based on hugetlb implementation
+ *
+ * Copyright (C) 2003 David Gibson, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#undef DEBUG
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/err.h>
+#include <linux/spinlock.h>
+#include <asm/mman.h>
+#include <asm/mmu.h>
+#include <asm/spu.h>
+
+static spinlock_t slice_convert_lock = SPIN_LOCK_UNLOCKED;
+
+
+#ifdef DEBUG
+int _slice_debug = 1;
+
+static void slice_print_mask(const char *label, struct slice_mask mask)
+{
+ char *p, buf[16 + 3 + 16 + 1];
+ int i;
+
+ if (!_slice_debug)
+ return;
+ p = buf;
+ for (i = 0; i < SLICE_NUM_LOW; i++)
+ *(p++) = (mask.low_slices & (1 << i)) ? '1' : '0';
+ *(p++) = ' ';
+ *(p++) = '-';
+ *(p++) = ' ';
+ for (i = 0; i < SLICE_NUM_HIGH; i++)
+ *(p++) = (mask.high_slices & (1 << i)) ? '1' : '0';
+ *(p++) = 0;
+
+ printk(KERN_DEBUG "%s:%s\n", label, buf);
+}
+
+#define slice_dbg(fmt...) do { if (_slice_debug) pr_debug(fmt); } while(0)
+
+#else
+
+static void slice_print_mask(const char *label, struct slice_mask mask) {}
+#define slice_dbg(fmt...)
+
+#endif
+
+static struct slice_mask slice_range_to_mask(unsigned long start,
+ unsigned long len)
+{
+ unsigned long end = start + len - 1;
+ struct slice_mask ret = { 0, 0 };
+
+ if (start < SLICE_LOW_TOP) {
+ unsigned long mend = min(end, SLICE_LOW_TOP);
+ unsigned long mstart = min(start, SLICE_LOW_TOP);
+
+ ret.low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
+ - (1u << GET_LOW_SLICE_INDEX(mstart));
+ }
+
+ if ((start + len) > SLICE_LOW_TOP)
+ ret.high_slices = (1u << (GET_HIGH_SLICE_INDEX(end) + 1))
+ - (1u << GET_HIGH_SLICE_INDEX(start));
+
+ return ret;
+}
+
+static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
+ unsigned long len)
+{
+ struct vm_area_struct *vma;
+
+ if ((mm->task_size - len) < addr)
+ return 0;
+ vma = find_vma(mm, addr);
+ return (!vma || (addr + len) <= vma->vm_start);
+}
+
+static int slice_low_has_vma(struct mm_struct *mm, unsigned long slice)
+{
+ return !slice_area_is_free(mm, slice << SLICE_LOW_SHIFT,
+ 1ul << SLICE_LOW_SHIFT);
+}
+
+static int slice_high_has_vma(struct mm_struct *mm, unsigned long slice)
+{
+ unsigned long start = slice << SLICE_HIGH_SHIFT;
+ unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
+
+ /* Hack, so that each addresses is controlled by exactly one
+ * of the high or low area bitmaps, the first high area starts
+ * at 4GB, not 0 */
+ if (start == 0)
+ start = SLICE_LOW_TOP;
+
+ return !slice_area_is_free(mm, start, end - start);
+}
+
+static struct slice_mask slice_mask_for_free(struct mm_struct *mm)
+{
+ struct slice_mask ret = { 0, 0 };
+ unsigned long i;
+
+ for (i = 0; i < SLICE_NUM_LOW; i++)
+ if (!slice_low_has_vma(mm, i))
+ ret.low_slices |= 1u << i;
+
+ if (mm->task_size <= SLICE_LOW_TOP)
+ return ret;
+
+ for (i = 0; i < SLICE_NUM_HIGH; i++)
+ if (!slice_high_has_vma(mm, i))
+ ret.high_slices |= 1u << i;
+
+ return ret;
+}
+
+static struct slice_mask slice_mask_for_size(struct mm_struct *mm, int psize)
+{
+ struct slice_mask ret = { 0, 0 };
+ unsigned long i;
+ u64 psizes;
+
+ psizes = mm->context.low_slices_psize;
+ for (i = 0; i < SLICE_NUM_LOW; i++)
+ if (((psizes >> (i * 4)) & 0xf) == psize)
+ ret.low_slices |= 1u << i;
+
+ psizes = mm->context.high_slices_psize;
+ for (i = 0; i < SLICE_NUM_HIGH; i++)
+ if (((psizes >> (i * 4)) & 0xf) == psize)
+ ret.high_slices |= 1u << i;
+
+ return ret;
+}
+
+static int slice_check_fit(struct slice_mask mask, struct slice_mask available)
+{
+ return (mask.low_slices & available.low_slices) == mask.low_slices &&
+ (mask.high_slices & available.high_slices) == mask.high_slices;
+}
+
+static void slice_flush_segments(void *parm)
+{
+ struct mm_struct *mm = parm;
+ unsigned long flags;
+
+ if (mm != current->active_mm)
+ return;
+
+ /* update the paca copy of the context struct */
+ get_paca()->context = current->active_mm->context;
+
+ local_irq_save(flags);
+ slb_flush_and_rebolt();
+ local_irq_restore(flags);
+}
+
+static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psize)
+{
+ /* Write the new slice psize bits */
+ u64 lpsizes, hpsizes;
+ unsigned long i, flags;
+
+ slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
+ slice_print_mask(" mask", mask);
+
+ /* We need to use a spinlock here to protect against
+ * concurrent 64k -> 4k demotion ...
+ */
+ spin_lock_irqsave(&slice_convert_lock, flags);
+
+ lpsizes = mm->context.low_slices_psize;
+ for (i = 0; i < SLICE_NUM_LOW; i++)
+ if (mask.low_slices & (1u << i))
+ lpsizes = (lpsizes & ~(0xful << (i * 4))) |
+ (((unsigned long)psize) << (i * 4));
+
+ hpsizes = mm->context.high_slices_psize;
+ for (i = 0; i < SLICE_NUM_HIGH; i++)
+ if (mask.high_slices & (1u << i))
+ hpsizes = (hpsizes & ~(0xful << (i * 4))) |
+ (((unsigned long)psize) << (i * 4));
+
+ mm->context.low_slices_psize = lpsizes;
+ mm->context.high_slices_psize = hpsizes;
+
+ slice_dbg(" lsps=%lx, hsps=%lx\n",
+ mm->context.low_slices_psize,
+ mm->context.high_slices_psize);
+
+ spin_unlock_irqrestore(&slice_convert_lock, flags);
+ mb();
+
+ /* XXX this is sub-optimal but will do for now */
+ on_each_cpu(slice_flush_segments, mm, 0, 1);
+#ifdef CONFIG_SPU_BASE
+ spu_flush_all_slbs(mm);
+#endif
+}
+
+static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
+ unsigned long len,
+ struct slice_mask available,
+ int psize, int use_cache)
+{
+ struct vm_area_struct *vma;
+ unsigned long start_addr, addr;
+ struct slice_mask mask;
+ int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+
+ if (use_cache) {
+ if (len <= mm->cached_hole_size) {
+ start_addr = addr = TASK_UNMAPPED_BASE;
+ mm->cached_hole_size = 0;
+ } else
+ start_addr = addr = mm->free_area_cache;
+ } else
+ start_addr = addr = TASK_UNMAPPED_BASE;
+
+full_search:
+ for (;;) {
+ addr = _ALIGN_UP(addr, 1ul << pshift);
+ if ((TASK_SIZE - len) < addr)
+ break;
+ vma = find_vma(mm, addr);
+ BUG_ON(vma && (addr >= vma->vm_end));
+
+ mask = slice_range_to_mask(addr, len);
+ if (!slice_check_fit(mask, available)) {
+ if (addr < SLICE_LOW_TOP)
+ addr = _ALIGN_UP(addr + 1, 1ul << SLICE_LOW_SHIFT);
+ else
+ addr = _ALIGN_UP(addr + 1, 1ul << SLICE_HIGH_SHIFT);
+ continue;
+ }
+ if (!vma || addr + len <= vma->vm_start) {
+ /*
+ * Remember the place where we stopped the search:
+ */
+ if (use_cache)
+ mm->free_area_cache = addr + len;
+ return addr;
+ }
+ if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
+ mm->cached_hole_size = vma->vm_start - addr;
+ addr = vma->vm_end;
+ }
+
+ /* Make sure we didn't miss any holes */
+ if (use_cache && start_addr != TASK_UNMAPPED_BASE) {
+ start_addr = addr = TASK_UNMAPPED_BASE;
+ mm->cached_hole_size = 0;
+ goto full_search;
+ }
+ return -ENOMEM;
+}
+
+static unsigned long slice_find_area_topdown(struct mm_struct *mm,
+ unsigned long len,
+ struct slice_mask available,
+ int psize, int use_cache)
+{
+ struct vm_area_struct *vma;
+ unsigned long addr;
+ struct slice_mask mask;
+ int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+
+ /* check if free_area_cache is useful for us */
+ if (use_cache) {
+ if (len <= mm->cached_hole_size) {
+ mm->cached_hole_size = 0;
+ mm->free_area_cache = mm->mmap_base;
+ }
+
+ /* either no address requested or can't fit in requested
+ * address hole
+ */
+ addr = mm->free_area_cache;
+
+ /* make sure it can fit in the remaining address space */
+ if (addr > len) {
+ addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+ mask = slice_range_to_mask(addr, len);
+ if (slice_check_fit(mask, available) &&
+ slice_area_is_free(mm, addr, len))
+ /* remember the address as a hint for
+ * next time
+ */
+ return (mm->free_area_cache = addr);
+ }
+ }
+
+ addr = mm->mmap_base;
+ while (addr > len) {
+ /* Go down by chunk size */
+ addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+
+ /* Check for hit with different page size */
+ mask = slice_range_to_mask(addr, len);
+ if (!slice_check_fit(mask, available)) {
+ if (addr < SLICE_LOW_TOP)
+ addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
+ else if (addr < (1ul << SLICE_HIGH_SHIFT))
+ addr = SLICE_LOW_TOP;
+ else
+ addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
+ continue;
+ }
+
+ /*
+ * Lookup failure means no vma is above this address,
+ * else if new region fits below vma->vm_start,
+ * return with success:
+ */
+ vma = find_vma(mm, addr);
+ if (!vma || (addr + len) <= vma->vm_start) {
+ /* remember the address as a hint for next time */
+ if (use_cache)
+ mm->free_area_cache = addr;
+ return addr;
+ }
+
+ /* remember the largest hole we saw so far */
+ if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
+ mm->cached_hole_size = vma->vm_start - addr;
+
+ /* try just below the current vma->vm_start */
+ addr = vma->vm_start;
+ }
+
+ /*
+ * A failed mmap() very likely causes application failure,
+ * so fall back to the bottom-up function here. This scenario
+ * can happen with large stack limits and large mmap()
+ * allocations.
+ */
+ addr = slice_find_area_bottomup(mm, len, available, psize, 0);
+
+ /*
+ * Restore the topdown base:
+ */
+ if (use_cache) {
+ mm->free_area_cache = mm->mmap_base;
+ mm->cached_hole_size = ~0UL;
+ }
+
+ return addr;
+}
+
+
+static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
+ struct slice_mask mask, int psize,
+ int topdown, int use_cache)
+{
+ if (topdown)
+ return slice_find_area_topdown(mm, len, mask, psize, use_cache);
+ else
+ return slice_find_area_bottomup(mm, len, mask, psize, use_cache);
+}
+
+unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+ unsigned long flags, unsigned int psize,
+ int topdown, int use_cache)
+{
+ struct slice_mask mask;
+ struct slice_mask good_mask;
+ struct slice_mask potential_mask = {0,0} /* silence stupid warning */;
+ int pmask_set = 0;
+ int fixed = (flags & MAP_FIXED);
+ int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ struct mm_struct *mm = current->mm;
+
+ /* Sanity checks */
+ BUG_ON(mm->task_size == 0);
+
+ slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
+ slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d, use_cache=%d\n",
+ addr, len, flags, topdown, use_cache);
+
+ if (len > mm->task_size)
+ return -ENOMEM;
+ if (fixed && (addr & ((1ul << pshift) - 1)))
+ return -EINVAL;
+ if (fixed && addr > (mm->task_size - len))
+ return -EINVAL;
+
+ /* If hint, make sure it matches our alignment restrictions */
+ if (!fixed && addr) {
+ addr = _ALIGN_UP(addr, 1ul << pshift);
+ slice_dbg(" aligned addr=%lx\n", addr);
+ }
+
+ /* First makeup a "good" mask of slices that have the right size
+ * already
+ */
+ good_mask = slice_mask_for_size(mm, psize);
+ slice_print_mask(" good_mask", good_mask);
+
+ /* First check hint if it's valid or if we have MAP_FIXED */
+ if ((addr != 0 || fixed) && (mm->task_size - len) >= addr) {
+
+ /* Don't bother with hint if it overlaps a VMA */
+ if (!fixed && !slice_area_is_free(mm, addr, len))
+ goto search;
+
+ /* Build a mask for the requested range */
+ mask = slice_range_to_mask(addr, len);
+ slice_print_mask(" mask", mask);
+
+ /* Check if we fit in the good mask. If we do, we just return,
+ * nothing else to do
+ */
+ if (slice_check_fit(mask, good_mask)) {
+ slice_dbg(" fits good !\n");
+ return addr;
+ }
+
+ /* We don't fit in the good mask, check what other slices are
+ * empty and thus can be converted
+ */
+ potential_mask = slice_mask_for_free(mm);
+ potential_mask.low_slices |= good_mask.low_slices;
+ potential_mask.high_slices |= good_mask.high_slices;
+ pmask_set = 1;
+ slice_print_mask(" potential", potential_mask);
+ if (slice_check_fit(mask, potential_mask)) {
+ slice_dbg(" fits potential !\n");
+ goto convert;
+ }
+ }
+
+ /* If we have MAP_FIXED and failed the above step, then error out */
+ if (fixed)
+ return -EBUSY;
+
+ search:
+ slice_dbg(" search...\n");
+
+ /* Now let's see if we can find something in the existing slices
+ * for that size
+ */
+ addr = slice_find_area(mm, len, good_mask, psize, topdown, use_cache);
+ if (addr != -ENOMEM) {
+ /* Found within the good mask, we don't have to setup,
+ * we thus return directly
+ */
+ slice_dbg(" found area at 0x%lx\n", addr);
+ return addr;
+ }
+
+ /* Won't fit, check what can be converted */
+ if (!pmask_set) {
+ potential_mask = slice_mask_for_free(mm);
+ potential_mask.low_slices |= good_mask.low_slices;
+ potential_mask.high_slices |= good_mask.high_slices;
+ pmask_set = 1;
+ slice_print_mask(" potential", potential_mask);
+ }
+
+ /* Now let's see if we can find something in the existing slices
+ * for that size
+ */
+ addr = slice_find_area(mm, len, potential_mask, psize, topdown,
+ use_cache);
+ if (addr == -ENOMEM)
+ return -ENOMEM;
+
+ mask = slice_range_to_mask(addr, len);
+ slice_dbg(" found potential area at 0x%lx\n", addr);
+ slice_print_mask(" mask", mask);
+
+ convert:
+ slice_convert(mm, mask, psize);
+ return addr;
+
+}
+
+unsigned long arch_get_unmapped_area(struct file *filp,
+ unsigned long addr,
+ unsigned long len,
+ unsigned long pgoff,
+ unsigned long flags)
+{
+ return slice_get_unmapped_area(addr, len, flags,
+ current->mm->context.user_psize,
+ 0, 1);
+}
+
+unsigned long arch_get_unmapped_area_topdown(struct file *filp,
+ const unsigned long addr0,
+ const unsigned long len,
+ const unsigned long pgoff,
+ const unsigned long flags)
+{
+ return slice_get_unmapped_area(addr0, len, flags,
+ current->mm->context.user_psize,
+ 1, 1);
+}
+
+unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+{
+ u64 psizes;
+ int index;
+
+ if (addr < SLICE_LOW_TOP) {
+ psizes = mm->context.low_slices_psize;
+ index = GET_LOW_SLICE_INDEX(addr);
+ } else {
+ psizes = mm->context.high_slices_psize;
+ index = GET_HIGH_SLICE_INDEX(addr);
+ }
+
+ return (psizes >> (index * 4)) & 0xf;
+}
+
+/*
+ * This is called by hash_page when it needs to do a lazy conversion of
+ * an address space from real 64K pages to combo 4K pages (typically
+ * when hitting a non cacheable mapping on a processor or hypervisor
+ * that won't allow them for 64K pages).
+ *
+ * This is also called in init_new_context() to change back the user
+ * psize from whatever the parent context had it set to
+ *
+ * This function will only change the content of the {low,high)_slice_psize
+ * masks, it will not flush SLBs as this shall be handled lazily by the
+ * caller.
+ */
+void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
+{
+ unsigned long flags, lpsizes, hpsizes;
+ unsigned int old_psize;
+ int i;
+
+ slice_dbg("slice_set_user_psize(mm=%p, psize=%d)\n", mm, psize);
+
+ spin_lock_irqsave(&slice_convert_lock, flags);
+
+ old_psize = mm->context.user_psize;
+ slice_dbg(" old_psize=%d\n", old_psize);
+ if (old_psize == psize)
+ goto bail;
+
+ mm->context.user_psize = psize;
+ wmb();
+
+ lpsizes = mm->context.low_slices_psize;
+ for (i = 0; i < SLICE_NUM_LOW; i++)
+ if (((lpsizes >> (i * 4)) & 0xf) == old_psize)
+ lpsizes = (lpsizes & ~(0xful << (i * 4))) |
+ (((unsigned long)psize) << (i * 4));
+
+ hpsizes = mm->context.high_slices_psize;
+ for (i = 0; i < SLICE_NUM_HIGH; i++)
+ if (((hpsizes >> (i * 4)) & 0xf) == old_psize)
+ hpsizes = (hpsizes & ~(0xful << (i * 4))) |
+ (((unsigned long)psize) << (i * 4));
+
+ mm->context.low_slices_psize = lpsizes;
+ mm->context.high_slices_psize = hpsizes;
+
+ slice_dbg(" lsps=%lx, hsps=%lx\n",
+ mm->context.low_slices_psize,
+ mm->context.high_slices_psize);
+
+ bail:
+ spin_unlock_irqrestore(&slice_convert_lock, flags);
+}
+
+/*
+ * is_hugepage_only_range() is used by generic code to verify wether
+ * a normal mmap mapping (non hugetlbfs) is valid on a given area.
+ *
+ * until the generic code provides a more generic hook and/or starts
+ * calling arch get_unmapped_area for MAP_FIXED (which our implementation
+ * here knows how to deal with), we hijack it to keep standard mappings
+ * away from us.
+ *
+ * because of that generic code limitation, MAP_FIXED mapping cannot
+ * "convert" back a slice with no VMAs to the standard page size, only
+ * get_unmapped_area() can. It would be possible to fix it here but I
+ * prefer working on fixing the generic code instead.
+ *
+ * WARNING: This will not work if hugetlbfs isn't enabled since the
+ * generic code will redefine that function as 0 in that. This is ok
+ * for now as we only use slices with hugetlbfs enabled. This should
+ * be fixed as the generic code gets fixed.
+ */
+int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
+ unsigned long len)
+{
+ struct slice_mask mask, available;
+
+ mask = slice_range_to_mask(addr, len);
+ available = slice_mask_for_size(mm, mm->context.user_psize);
+
+#if 0 /* too verbose */
+ slice_dbg("is_hugepage_only_range(mm=%p, addr=%lx, len=%lx)\n",
+ mm, addr, len);
+ slice_print_mask(" mask", mask);
+ slice_print_mask(" available", available);
+#endif
+ return !slice_check_fit(mask, available);
+}
+
Index: linux-cell/arch/powerpc/kernel/asm-offsets.c
===================================================================
--- linux-cell.orig/arch/powerpc/kernel/asm-offsets.c 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/kernel/asm-offsets.c 2007-02-20 15:28:08.000000000 +1100
@@ -123,12 +123,18 @@ int main(void)
DEFINE(PACASLBCACHE, offsetof(struct paca_struct, slb_cache));
DEFINE(PACASLBCACHEPTR, offsetof(struct paca_struct, slb_cache_ptr));
DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
- DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
DEFINE(PACAVMALLOCSLLP, offsetof(struct paca_struct, vmalloc_sllp));
-#ifdef CONFIG_HUGETLB_PAGE
- DEFINE(PACALOWHTLBAREAS, offsetof(struct paca_struct, context.low_htlb_areas));
- DEFINE(PACAHIGHHTLBAREAS, offsetof(struct paca_struct, context.high_htlb_areas));
-#endif /* CONFIG_HUGETLB_PAGE */
+#ifdef CONFIG_PPC_MM_SLICES
+ DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
+ context.low_slices_psize));
+ DEFINE(PACAHIGHSLICEPSIZE, offsetof(struct paca_struct,
+ context.high_slices_psize));
+ DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
+ DEFINE(MMUPSIZESLLP, offsetof(struct mmu_psize_def, sllp));
+#else
+ DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
+
+#endif /* CONFIG_PPC_MM_SLICES */
DEFINE(PACA_EXGEN, offsetof(struct paca_struct, exgen));
DEFINE(PACA_EXMC, offsetof(struct paca_struct, exmc));
DEFINE(PACA_EXSLB, offsetof(struct paca_struct, exslb));
Index: linux-cell/arch/powerpc/mm/hugetlbpage.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hugetlbpage.c 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hugetlbpage.c 2007-02-20 15:28:08.000000000 +1100
@@ -92,7 +92,7 @@ pte_t *huge_pte_offset(struct mm_struct
pgd_t *pg;
pud_t *pu;
- BUG_ON(! in_hugepage_area(mm->context, addr));
+ BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
addr &= HPAGE_MASK;
@@ -120,7 +120,7 @@ pte_t *huge_pte_alloc(struct mm_struct *
pud_t *pu;
hugepd_t *hpdp = NULL;
- BUG_ON(! in_hugepage_area(mm->context, addr));
+ BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
addr &= HPAGE_MASK;
@@ -303,7 +303,7 @@ void hugetlb_free_pgd_range(struct mmu_g
start = addr;
pgd = pgd_offset((*tlb)->mm, addr);
do {
- BUG_ON(! in_hugepage_area((*tlb)->mm->context, addr));
+ BUG_ON(get_slice_psize((*tlb)->mm, addr) != mmu_huge_psize);
next = pgd_addr_end(addr, end);
if (pgd_none_or_clear_bad(pgd))
continue;
@@ -338,194 +338,16 @@ pte_t huge_ptep_get_and_clear(struct mm_
return __pte(old);
}
-struct slb_flush_info {
- struct mm_struct *mm;
- u16 newareas;
-};
-
-static void flush_low_segments(void *parm)
-{
- struct slb_flush_info *fi = parm;
- unsigned long i;
-
- BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_LOW_AREAS);
-
- if (current->active_mm != fi->mm)
- return;
-
- /* Only need to do anything if this CPU is working in the same
- * mm as the one which has changed */
-
- /* update the paca copy of the context struct */
- get_paca()->context = current->active_mm->context;
-
- asm volatile("isync" : : : "memory");
- for (i = 0; i < NUM_LOW_AREAS; i++) {
- if (! (fi->newareas & (1U << i)))
- continue;
- asm volatile("slbie %0"
- : : "r" ((i << SID_SHIFT) | SLBIE_C));
- }
- asm volatile("isync" : : : "memory");
-}
-
-static void flush_high_segments(void *parm)
-{
- struct slb_flush_info *fi = parm;
- unsigned long i, j;
-
-
- BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_HIGH_AREAS);
-
- if (current->active_mm != fi->mm)
- return;
-
- /* Only need to do anything if this CPU is working in the same
- * mm as the one which has changed */
-
- /* update the paca copy of the context struct */
- get_paca()->context = current->active_mm->context;
-
- asm volatile("isync" : : : "memory");
- for (i = 0; i < NUM_HIGH_AREAS; i++) {
- if (! (fi->newareas & (1U << i)))
- continue;
- for (j = 0; j < (1UL << (HTLB_AREA_SHIFT-SID_SHIFT)); j++)
- asm volatile("slbie %0"
- :: "r" (((i << HTLB_AREA_SHIFT)
- + (j << SID_SHIFT)) | SLBIE_C));
- }
- asm volatile("isync" : : : "memory");
-}
-
-static int prepare_low_area_for_htlb(struct mm_struct *mm, unsigned long area)
-{
- unsigned long start = area << SID_SHIFT;
- unsigned long end = (area+1) << SID_SHIFT;
- struct vm_area_struct *vma;
-
- BUG_ON(area >= NUM_LOW_AREAS);
-
- /* Check no VMAs are in the region */
- vma = find_vma(mm, start);
- if (vma && (vma->vm_start < end))
- return -EBUSY;
-
- return 0;
-}
-
-static int prepare_high_area_for_htlb(struct mm_struct *mm, unsigned long area)
-{
- unsigned long start = area << HTLB_AREA_SHIFT;
- unsigned long end = (area+1) << HTLB_AREA_SHIFT;
- struct vm_area_struct *vma;
-
- BUG_ON(area >= NUM_HIGH_AREAS);
-
- /* Hack, so that each addresses is controlled by exactly one
- * of the high or low area bitmaps, the first high area starts
- * at 4GB, not 0 */
- if (start == 0)
- start = 0x100000000UL;
-
- /* Check no VMAs are in the region */
- vma = find_vma(mm, start);
- if (vma && (vma->vm_start < end))
- return -EBUSY;
-
- return 0;
-}
-
-static int open_low_hpage_areas(struct mm_struct *mm, u16 newareas)
-{
- unsigned long i;
- struct slb_flush_info fi;
-
- BUILD_BUG_ON((sizeof(newareas)*8) != NUM_LOW_AREAS);
- BUILD_BUG_ON((sizeof(mm->context.low_htlb_areas)*8) != NUM_LOW_AREAS);
-
- newareas &= ~(mm->context.low_htlb_areas);
- if (! newareas)
- return 0; /* The segments we want are already open */
-
- for (i = 0; i < NUM_LOW_AREAS; i++)
- if ((1 << i) & newareas)
- if (prepare_low_area_for_htlb(mm, i) != 0)
- return -EBUSY;
-
- mm->context.low_htlb_areas |= newareas;
-
- /* the context change must make it to memory before the flush,
- * so that further SLB misses do the right thing. */
- mb();
-
- fi.mm = mm;
- fi.newareas = newareas;
- on_each_cpu(flush_low_segments, &fi, 0, 1);
-
- return 0;
-}
-
-static int open_high_hpage_areas(struct mm_struct *mm, u16 newareas)
-{
- struct slb_flush_info fi;
- unsigned long i;
-
- BUILD_BUG_ON((sizeof(newareas)*8) != NUM_HIGH_AREAS);
- BUILD_BUG_ON((sizeof(mm->context.high_htlb_areas)*8)
- != NUM_HIGH_AREAS);
-
- newareas &= ~(mm->context.high_htlb_areas);
- if (! newareas)
- return 0; /* The areas we want are already open */
-
- for (i = 0; i < NUM_HIGH_AREAS; i++)
- if ((1 << i) & newareas)
- if (prepare_high_area_for_htlb(mm, i) != 0)
- return -EBUSY;
-
- mm->context.high_htlb_areas |= newareas;
-
- /* the context change must make it to memory before the flush,
- * so that further SLB misses do the right thing. */
- mb();
-
- fi.mm = mm;
- fi.newareas = newareas;
- on_each_cpu(flush_high_segments, &fi, 0, 1);
-
- return 0;
-}
-
int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
{
- int err = 0;
+ unsigned long gua_addr;
- if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
- return -EINVAL;
- if (len & ~HPAGE_MASK)
- return -EINVAL;
- if (addr & ~HPAGE_MASK)
- return -EINVAL;
-
- if (addr < 0x100000000UL)
- err = open_low_hpage_areas(current->mm,
- LOW_ESID_MASK(addr, len));
- if ((addr + len) > 0x100000000UL)
- err = open_high_hpage_areas(current->mm,
- HTLB_AREA_MASK(addr, len));
-#ifdef CONFIG_SPE_BASE
- spu_flush_all_slbs(current->mm);
-#endif
- if (err) {
- printk(KERN_DEBUG "prepare_hugepage_range(%lx, %lx)"
- " failed (lowmask: 0x%04hx, highmask: 0x%04hx)\n",
- addr, len,
- LOW_ESID_MASK(addr, len), HTLB_AREA_MASK(addr, len));
- return err;
- }
+ pr_debug("prepare_hugepage_range(addr=0x%lx, len=0x%lx\n", addr, len);
- return 0;
+ /* This is only useful for MAP_FIXED so we turn it into that */
+ gua_addr = slice_get_unmapped_area(addr, len, MAP_FIXED,
+ mmu_huge_psize, 1, 0);
+ return gua_addr == addr ? 0 : -EINVAL;
}
struct page *
@@ -534,7 +356,7 @@ follow_huge_addr(struct mm_struct *mm, u
pte_t *ptep;
struct page *page;
- if (! in_hugepage_area(mm->context, address))
+ if (get_slice_psize(mm, address) != mmu_huge_psize)
return ERR_PTR(-EINVAL);
ptep = huge_pte_offset(mm, address);
@@ -558,338 +380,12 @@ follow_huge_pmd(struct mm_struct *mm, un
return NULL;
}
-/* Because we have an exclusive hugepage region which lies within the
- * normal user address space, we have to take special measures to make
- * non-huge mmap()s evade the hugepage reserved regions. */
-unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
- unsigned long len, unsigned long pgoff,
- unsigned long flags)
-{
- struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
- unsigned long start_addr;
-
- if (len > TASK_SIZE)
- return -ENOMEM;
-
- if (addr) {
- addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
- if (((TASK_SIZE - len) >= addr)
- && (!vma || (addr+len) <= vma->vm_start)
- && !is_hugepage_only_range(mm, addr,len))
- return addr;
- }
- if (len > mm->cached_hole_size) {
- start_addr = addr = mm->free_area_cache;
- } else {
- start_addr = addr = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = 0;
- }
-
-full_search:
- vma = find_vma(mm, addr);
- while (TASK_SIZE - len >= addr) {
- BUG_ON(vma && (addr >= vma->vm_end));
-
- if (touches_hugepage_low_range(mm, addr, len)) {
- addr = ALIGN(addr+1, 1<<SID_SHIFT);
- vma = find_vma(mm, addr);
- continue;
- }
- if (touches_hugepage_high_range(mm, addr, len)) {
- addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
- vma = find_vma(mm, addr);
- continue;
- }
- if (!vma || addr + len <= vma->vm_start) {
- /*
- * Remember the place where we stopped the search:
- */
- mm->free_area_cache = addr + len;
- return addr;
- }
- if (addr + mm->cached_hole_size < vma->vm_start)
- mm->cached_hole_size = vma->vm_start - addr;
- addr = vma->vm_end;
- vma = vma->vm_next;
- }
-
- /* Make sure we didn't miss any holes */
- if (start_addr != TASK_UNMAPPED_BASE) {
- start_addr = addr = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = 0;
- goto full_search;
- }
- return -ENOMEM;
-}
-
-/*
- * This mmap-allocator allocates new areas top-down from below the
- * stack's low limit (the base):
- *
- * Because we have an exclusive hugepage region which lies within the
- * normal user address space, we have to take special measures to make
- * non-huge mmap()s evade the hugepage reserved regions.
- */
-unsigned long
-arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
- const unsigned long len, const unsigned long pgoff,
- const unsigned long flags)
-{
- struct vm_area_struct *vma, *prev_vma;
- struct mm_struct *mm = current->mm;
- unsigned long base = mm->mmap_base, addr = addr0;
- unsigned long largest_hole = mm->cached_hole_size;
- int first_time = 1;
-
- /* requested length too big for entire address space */
- if (len > TASK_SIZE)
- return -ENOMEM;
-
- /* dont allow allocations above current base */
- if (mm->free_area_cache > base)
- mm->free_area_cache = base;
-
- /* requesting a specific address */
- if (addr) {
- addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
- if (TASK_SIZE - len >= addr &&
- (!vma || addr + len <= vma->vm_start)
- && !is_hugepage_only_range(mm, addr,len))
- return addr;
- }
-
- if (len <= largest_hole) {
- largest_hole = 0;
- mm->free_area_cache = base;
- }
-try_again:
- /* make sure it can fit in the remaining address space */
- if (mm->free_area_cache < len)
- goto fail;
-
- /* either no address requested or cant fit in requested address hole */
- addr = (mm->free_area_cache - len) & PAGE_MASK;
- do {
-hugepage_recheck:
- if (touches_hugepage_low_range(mm, addr, len)) {
- addr = (addr & ((~0) << SID_SHIFT)) - len;
- goto hugepage_recheck;
- } else if (touches_hugepage_high_range(mm, addr, len)) {
- addr = (addr & ((~0UL) << HTLB_AREA_SHIFT)) - len;
- goto hugepage_recheck;
- }
-
- /*
- * Lookup failure means no vma is above this address,
- * i.e. return with success:
- */
- if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
- return addr;
-
- /*
- * new region fits between prev_vma->vm_end and
- * vma->vm_start, use it:
- */
- if (addr+len <= vma->vm_start &&
- (!prev_vma || (addr >= prev_vma->vm_end))) {
- /* remember the address as a hint for next time */
- mm->cached_hole_size = largest_hole;
- return (mm->free_area_cache = addr);
- } else {
- /* pull free_area_cache down to the first hole */
- if (mm->free_area_cache == vma->vm_end) {
- mm->free_area_cache = vma->vm_start;
- mm->cached_hole_size = largest_hole;
- }
- }
-
- /* remember the largest hole we saw so far */
- if (addr + largest_hole < vma->vm_start)
- largest_hole = vma->vm_start - addr;
-
- /* try just below the current vma->vm_start */
- addr = vma->vm_start-len;
- } while (len <= vma->vm_start);
-
-fail:
- /*
- * if hint left us with no space for the requested
- * mapping then try again:
- */
- if (first_time) {
- mm->free_area_cache = base;
- largest_hole = 0;
- first_time = 0;
- goto try_again;
- }
- /*
- * A failed mmap() very likely causes application failure,
- * so fall back to the bottom-up function here. This scenario
- * can happen with large stack limits and large mmap()
- * allocations.
- */
- mm->free_area_cache = TASK_UNMAPPED_BASE;
- mm->cached_hole_size = ~0UL;
- addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
- /*
- * Restore the topdown base:
- */
- mm->free_area_cache = base;
- mm->cached_hole_size = ~0UL;
-
- return addr;
-}
-
-static int htlb_check_hinted_area(unsigned long addr, unsigned long len)
-{
- struct vm_area_struct *vma;
-
- vma = find_vma(current->mm, addr);
- if (TASK_SIZE - len >= addr &&
- (!vma || ((addr + len) <= vma->vm_start)))
- return 0;
-
- return -ENOMEM;
-}
-
-static unsigned long htlb_get_low_area(unsigned long len, u16 segmask)
-{
- unsigned long addr = 0;
- struct vm_area_struct *vma;
-
- vma = find_vma(current->mm, addr);
- while (addr + len <= 0x100000000UL) {
- BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
-
- if (! __within_hugepage_low_range(addr, len, segmask)) {
- addr = ALIGN(addr+1, 1<<SID_SHIFT);
- vma = find_vma(current->mm, addr);
- continue;
- }
-
- if (!vma || (addr + len) <= vma->vm_start)
- return addr;
- addr = ALIGN(vma->vm_end, HPAGE_SIZE);
- /* Depending on segmask this might not be a confirmed
- * hugepage region, so the ALIGN could have skipped
- * some VMAs */
- vma = find_vma(current->mm, addr);
- }
-
- return -ENOMEM;
-}
-
-static unsigned long htlb_get_high_area(unsigned long len, u16 areamask)
-{
- unsigned long addr = 0x100000000UL;
- struct vm_area_struct *vma;
-
- vma = find_vma(current->mm, addr);
- while (addr + len <= TASK_SIZE_USER64) {
- BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
-
- if (! __within_hugepage_high_range(addr, len, areamask)) {
- addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
- vma = find_vma(current->mm, addr);
- continue;
- }
-
- if (!vma || (addr + len) <= vma->vm_start)
- return addr;
- addr = ALIGN(vma->vm_end, HPAGE_SIZE);
- /* Depending on segmask this might not be a confirmed
- * hugepage region, so the ALIGN could have skipped
- * some VMAs */
- vma = find_vma(current->mm, addr);
- }
-
- return -ENOMEM;
-}
-
unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
{
- int lastshift;
- u16 areamask, curareas;
-
- if (HPAGE_SHIFT == 0)
- return -EINVAL;
- if (len & ~HPAGE_MASK)
- return -EINVAL;
- if (len > TASK_SIZE)
- return -ENOMEM;
-
- if (!cpu_has_feature(CPU_FTR_16M_PAGE))
- return -EINVAL;
-
- /* Paranoia, caller should have dealt with this */
- BUG_ON((addr + len) < addr);
-
- if (test_thread_flag(TIF_32BIT)) {
- curareas = current->mm->context.low_htlb_areas;
-
- /* First see if we can use the hint address */
- if (addr && (htlb_check_hinted_area(addr, len) == 0)) {
- areamask = LOW_ESID_MASK(addr, len);
- if (open_low_hpage_areas(current->mm, areamask) == 0)
- return addr;
- }
-
- /* Next see if we can map in the existing low areas */
- addr = htlb_get_low_area(len, curareas);
- if (addr != -ENOMEM)
- return addr;
-
- /* Finally go looking for areas to open */
- lastshift = 0;
- for (areamask = LOW_ESID_MASK(0x100000000UL-len, len);
- ! lastshift; areamask >>=1) {
- if (areamask & 1)
- lastshift = 1;
-
- addr = htlb_get_low_area(len, curareas | areamask);
- if ((addr != -ENOMEM)
- && open_low_hpage_areas(current->mm, areamask) == 0)
- return addr;
- }
- } else {
- curareas = current->mm->context.high_htlb_areas;
-
- /* First see if we can use the hint address */
- /* We discourage 64-bit processes from doing hugepage
- * mappings below 4GB (must use MAP_FIXED) */
- if ((addr >= 0x100000000UL)
- && (htlb_check_hinted_area(addr, len) == 0)) {
- areamask = HTLB_AREA_MASK(addr, len);
- if (open_high_hpage_areas(current->mm, areamask) == 0)
- return addr;
- }
-
- /* Next see if we can map in the existing high areas */
- addr = htlb_get_high_area(len, curareas);
- if (addr != -ENOMEM)
- return addr;
-
- /* Finally go looking for areas to open */
- lastshift = 0;
- for (areamask = HTLB_AREA_MASK(TASK_SIZE_USER64-len, len);
- ! lastshift; areamask >>=1) {
- if (areamask & 1)
- lastshift = 1;
-
- addr = htlb_get_high_area(len, curareas | areamask);
- if ((addr != -ENOMEM)
- && open_high_hpage_areas(current->mm, areamask) == 0)
- return addr;
- }
- }
- printk(KERN_DEBUG "hugetlb_get_unmapped_area() unable to open"
- " enough areas\n");
- return -ENOMEM;
+ return slice_get_unmapped_area(addr, len, flags,
+ mmu_huge_psize, 1, 0);
}
/*
Index: linux-cell/arch/powerpc/mm/mmu_context_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/mmu_context_64.c 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/mmu_context_64.c 2007-02-20 15:28:08.000000000 +1100
@@ -28,6 +28,7 @@ int init_new_context(struct task_struct
{
int index;
int err;
+ int new_context = (mm->context.id == 0);
again:
if (!idr_pre_get(&mmu_context_idr, GFP_KERNEL))
@@ -50,9 +51,18 @@ again:
}
mm->context.id = index;
+#ifdef CONFIG_PPC_MM_SLICES
+ /* The old code would re-promote on fork, we don't do that
+ * when using slices as it could cause problems promoting slices
+ * that have been forced down to 4K
+ */
+ if (new_context)
+ slice_set_user_psize(mm, mmu_virtual_psize);
+#else
mm->context.user_psize = mmu_virtual_psize;
mm->context.sllp = SLB_VSID_USER |
mmu_psize_defs[mmu_virtual_psize].sllp;
+#endif
return 0;
}
Index: linux-cell/arch/powerpc/mm/slb_low.S
===================================================================
--- linux-cell.orig/arch/powerpc/mm/slb_low.S 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/slb_low.S 2007-02-20 15:28:08.000000000 +1100
@@ -82,31 +82,45 @@ _GLOBAL(slb_miss_kernel_load_io)
srdi. r9,r10,USER_ESID_BITS
bne- 8f /* invalid ea bits set */
- /* Figure out if the segment contains huge pages */
-#ifdef CONFIG_HUGETLB_PAGE
-BEGIN_FTR_SECTION
- b 1f
-END_FTR_SECTION_IFCLR(CPU_FTR_16M_PAGE)
+
+ /* when using slices, we extract the psize off the slice bitmaps
+ * and then we need to get the sllp encoding off the mmu_psize_defs
+ * array.
+ *
+ * XXX This is a bit inefficient especially for the normal case,
+ * so we should try to implement a fast path for the standard page
+ * size using the old sllp value so we avoid the array. We cannot
+ * really do dynamic patching unfortunately as processes might flip
+ * between 4k and 64k standard page size
+ */
+#ifdef CONFIG_PPC_MM_SLICES
cmpldi r10,16
- lhz r9,PACALOWHTLBAREAS(r13)
- mr r11,r10
+ /* Get the slice index * 4 in r11 and matching slice size mask in r9 */
+ ld r9,PACALOWSLICESPSIZE(r13)
+ sldi r11,r10,2
blt 5f
+ ld r9,PACAHIGHSLICEPSIZE(r13)
+ srdi r11,r10,(SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT - 2)
+ andi. r11,r11,0x3c
+
+5: /* Extract the psize and multiply to get an array offset */
+ srd r9,r9,r11
+ andi. r9,r9,0xf
+ mulli r9,r9,MMUPSIZEDEFSIZE
- lhz r9,PACAHIGHHTLBAREAS(r13)
- srdi r11,r10,(HTLB_AREA_SHIFT-SID_SHIFT)
-
-5: srd r9,r9,r11
- andi. r9,r9,1
- beq 1f
-_GLOBAL(slb_miss_user_load_huge)
- li r11,0
- b 2f
-1:
-#endif /* CONFIG_HUGETLB_PAGE */
-
+ /* Now get to the array and obtain the sllp
+ */
+ ld r11,PACATOC(r13)
+ ld r11,mmu_psize_defs@got(r11)
+ add r11,r11,r9
+ ld r11,MMUPSIZESLLP(r11)
+ ori r11,r11,SLB_VSID_USER
+#else
+ /* paca context sllp already contains the SLB_VSID_USER bits */
lhz r11,PACACONTEXTSLLP(r13)
-2:
+#endif /* CONFIG_PPC_MM_SLICES */
+
ld r9,PACACONTEXTID(r13)
rldimi r10,r9,USER_ESID_BITS,0
b slb_finish_load
Index: linux-cell/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hash_utils_64.c 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hash_utils_64.c 2007-02-20 15:28:08.000000000 +1100
@@ -51,6 +51,7 @@
#include <asm/cputable.h>
#include <asm/abs_addr.h>
#include <asm/sections.h>
+#include <asm/spu.h>
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
@@ -573,6 +574,57 @@ unsigned int hash_page_do_lazy_icache(un
return pp;
}
+#ifdef CONFIG_PPC_64K_PAGES
+static int hash_handle_ci_restrictions(struct mm_struct *mm, unsigned long ea,
+ pte_t *ptep, int psize, int user)
+{
+ /* If this PTE is non-cacheable, switch to 4k */
+ if (psize == MMU_PAGE_64K &&
+ (pte_val(*ptep) & _PAGE_NO_CACHE)) {
+ if (user) {
+ printk(KERN_INFO "Demoting page size of %s\n",
+ current->comm);
+ psize = MMU_PAGE_4K;
+#ifdef CONFIG_PPC_MM_SLICES
+ slice_set_user_psize(mm, psize);
+#else /* CONFIG_PPC_MM_SLICES */
+ mm->context.user_psize = MMU_PAGE_4K;
+ mm->context.sllp = SLB_VSID_USER |
+ mmu_psize_defs[MMU_PAGE_4K].sllp;
+#endif /* !defined(CONFIG_PPC_MM_SLICES) */
+ } else if (ea < VMALLOC_END) {
+ /*
+ * some driver did a non-cacheable mapping
+ * in vmalloc space, so switch vmalloc
+ * to 4k pages
+ */
+ printk(KERN_ALERT "Reducing vmalloc segment "
+ "to 4kB pages because of "
+ "non-cacheable mapping\n");
+ psize = mmu_vmalloc_psize = MMU_PAGE_4K;
+ }
+#ifdef CONFIG_SPU_BASE
+ /* SPEs can't do lazy flushing so we need to flush them
+ * all now
+ */
+ spu_flush_all_slbs(mm);
+#endif /* CONFIG_SPU_BASE */
+ }
+ if (user) {
+ if (psize != get_paca()->context.user_psize) {
+ get_paca()->context = mm->context;
+ slb_flush_and_rebolt();
+ }
+ } else if (get_paca()->vmalloc_sllp !=
+ mmu_psize_defs[mmu_vmalloc_psize].sllp) {
+ get_paca()->vmalloc_sllp =
+ mmu_psize_defs[mmu_vmalloc_psize].sllp;
+ slb_flush_and_rebolt();
+ }
+ return psize;
+}
+#endif /* CONFIG_PPC_64K_PAGES */
+
/* Result code is:
* 0 - handled
* 1 - normal page fault
@@ -634,11 +686,14 @@ int hash_page(unsigned long ea, unsigned
if (user_region && cpus_equal(mm->cpu_vm_mask, tmp))
local = 1;
+#ifdef CONFIG_HUGETLB_PAGE
/* Handle hugepage regions */
- if (unlikely(in_hugepage_area(mm->context, ea))) {
+ if (HPAGE_SHIFT &&
+ unlikely(get_slice_psize(mm, ea) == mmu_huge_psize)) {
DBG_LOW(" -> huge page !\n");
return hash_huge_page(mm, access, ea, vsid, local, trap);
}
+#endif /* CONFIG_HUGETLB_PAGE */
/* Get PTE and page size from page tables */
ptep = find_linux_pte(pgdir, ea);
@@ -665,42 +720,8 @@ int hash_page(unsigned long ea, unsigned
#ifndef CONFIG_PPC_64K_PAGES
rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);
#else
- if (mmu_ci_restrictions) {
- /* If this PTE is non-cacheable, switch to 4k */
- if (psize == MMU_PAGE_64K &&
- (pte_val(*ptep) & _PAGE_NO_CACHE)) {
- if (user_region) {
- psize = MMU_PAGE_4K;
- mm->context.user_psize = MMU_PAGE_4K;
- mm->context.sllp = SLB_VSID_USER |
- mmu_psize_defs[MMU_PAGE_4K].sllp;
- } else if (ea < VMALLOC_END) {
- /*
- * some driver did a non-cacheable mapping
- * in vmalloc space, so switch vmalloc
- * to 4k pages
- */
- printk(KERN_ALERT "Reducing vmalloc segment "
- "to 4kB pages because of "
- "non-cacheable mapping\n");
- psize = mmu_vmalloc_psize = MMU_PAGE_4K;
- }
-#ifdef CONFIG_SPE_BASE
- spu_flush_all_slbs(mm);
-#endif
- }
- if (user_region) {
- if (psize != get_paca()->context.user_psize) {
- get_paca()->context = mm->context;
- slb_flush_and_rebolt();
- }
- } else if (get_paca()->vmalloc_sllp !=
- mmu_psize_defs[mmu_vmalloc_psize].sllp) {
- get_paca()->vmalloc_sllp =
- mmu_psize_defs[mmu_vmalloc_psize].sllp;
- slb_flush_and_rebolt();
- }
- }
+ if (mmu_ci_restrictions)
+ psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, user_region);
if (psize == MMU_PAGE_64K)
rc = __hash_page_64K(ea, access, vsid, ptep, trap, local);
else
@@ -726,12 +747,16 @@ void hash_preload(struct mm_struct *mm,
pte_t *ptep;
cpumask_t mask;
unsigned long flags;
+ int psize;
int local = 0;
- /* We don't want huge pages prefaulted for now
- */
- if (unlikely(in_hugepage_area(mm->context, ea)))
+ BUG_ON(REGION_ID(ea) != USER_REGION_ID);
+
+#ifdef CONFIG_PPC_MM_SLICES
+ /* We only prefault standard pages for now */
+ if (unlikely(get_slice_psize(mm, ea) != mm->context.user_psize))
return;
+#endif
DBG_LOW("hash_preload(mm=%p, mm->pgdir=%p, ea=%016lx, access=%lx,"
" trap=%lx\n", mm, mm->pgd, ea, access, trap);
@@ -750,24 +775,13 @@ void hash_preload(struct mm_struct *mm,
mask = cpumask_of_cpu(smp_processor_id());
if (cpus_equal(mm->cpu_vm_mask, mask))
local = 1;
+ psize = mm->context.user_psize;
#ifndef CONFIG_PPC_64K_PAGES
__hash_page_4K(ea, access, vsid, ptep, trap, local);
#else
- if (mmu_ci_restrictions) {
- /* If this PTE is non-cacheable, switch to 4k */
- if (mm->context.user_psize == MMU_PAGE_64K &&
- (pte_val(*ptep) & _PAGE_NO_CACHE)) {
- mm->context.user_psize = MMU_PAGE_4K;
- mm->context.sllp = SLB_VSID_USER |
- mmu_psize_defs[MMU_PAGE_4K].sllp;
- get_paca()->context = mm->context;
- slb_flush_and_rebolt();
-#ifdef CONFIG_SPE_BASE
- spu_flush_all_slbs(mm);
-#endif
- }
- }
- if (mm->context.user_psize == MMU_PAGE_64K)
+ if (mmu_ci_restrictions)
+ psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, 1);
+ if (psize == MMU_PAGE_64K)
__hash_page_64K(ea, access, vsid, ptep, trap, local);
else
__hash_page_4K(ea, access, vsid, ptep, trap, local);
Index: linux-cell/arch/powerpc/mm/slb.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/slb.c 2007-02-20 15:21:05.000000000 +1100
+++ linux-cell/arch/powerpc/mm/slb.c 2007-02-20 15:28:08.000000000 +1100
@@ -198,12 +198,6 @@ void slb_initialize(void)
static int slb_encoding_inited;
extern unsigned int *slb_miss_kernel_load_linear;
extern unsigned int *slb_miss_kernel_load_io;
-#ifdef CONFIG_HUGETLB_PAGE
- extern unsigned int *slb_miss_user_load_huge;
- unsigned long huge_llp;
-
- huge_llp = mmu_psize_defs[mmu_huge_psize].sllp;
-#endif
/* Prepare our SLB miss handler based on our page size */
linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
@@ -220,11 +214,6 @@ void slb_initialize(void)
DBG("SLB: linear LLP = %04x\n", linear_llp);
DBG("SLB: io LLP = %04x\n", io_llp);
-#ifdef CONFIG_HUGETLB_PAGE
- patch_slb_encoding(slb_miss_user_load_huge,
- SLB_VSID_USER | huge_llp);
- DBG("SLB: huge LLP = %04x\n", huge_llp);
-#endif
}
get_paca()->stab_rr = SLB_NUM_BOLTED;
Index: linux-cell/arch/powerpc/platforms/cell/spu_base.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spu_base.c 2007-02-20 15:25:13.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spu_base.c 2007-02-20 15:28:08.000000000 +1100
@@ -142,12 +142,11 @@ static int __spu_trap_data_seg(struct sp
switch(REGION_ID(ea)) {
case USER_REGION_ID:
-#ifdef CONFIG_HUGETLB_PAGE
- if (in_hugepage_area(mm->context, ea))
- psize = mmu_huge_psize;
- else
+#ifdef CONFIG_PPC_MM_SLICES
+ psize = get_slice_psize(mm, ea);
+#else
+ psize = mm->context.user_psize;
#endif
- psize = mm->context.user_psize;
vsid = (get_vsid(mm->context.id, ea) << SLB_VSID_SHIFT) |
SLB_VSID_USER;
break;
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 4/5] powerpc: Add ability to 4K kernel to hash in 64K pages
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
` (2 preceding siblings ...)
2007-02-20 7:44 ` [PATCH 3/5] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
@ 2007-02-20 7:44 ` Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 5/5] powerpc: spufs support for 64K LS mappings on 4K kernels Benjamin Herrenschmidt
2007-03-01 7:29 ` [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Michael Ellerman
5 siblings, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
This patch adds the ability for a kernel compiled with a 4K page size
to have special slices containing 64K pages, and to hash in the matching
type of hash PTEs for those slices.
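Condensed from the hash_utils_64.c hunks below (this is not new code, just the
decision spelled out), what this enables on a 4K-base kernel is:

	/* pick the page size per address from the slice map ... */
	psize = get_slice_psize(mm, ea);	/* may say MMU_PAGE_64K even on a 4K kernel */

	/* ... then hash with the matching flavour */
#ifdef CONFIG_PPC_HAS_HASH_64K
	if (psize == MMU_PAGE_64K)
		rc = __hash_page_64K(ea, access, vsid, ptep, trap, local);
	else
#endif
		rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);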
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig | 6 ++++++
arch/powerpc/mm/hash_low_64.S | 5 ++++-
arch/powerpc/mm/hash_utils_64.c | 36 +++++++++++++++++++++++-------------
arch/powerpc/mm/tlb_64.c | 12 +++++++++---
include/asm-powerpc/pgtable-4k.h | 6 +++++-
include/asm-powerpc/pgtable-64k.h | 7 ++++++-
6 files changed, 53 insertions(+), 19 deletions(-)
Index: linux-cell/arch/powerpc/mm/hash_low_64.S
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hash_low_64.S 2007-02-20 18:03:14.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hash_low_64.S 2007-02-20 18:10:15.000000000 +1100
@@ -612,6 +612,9 @@ htab_pte_insert_failure:
li r3,-1
b htab_bail
+#endif /* CONFIG_PPC_64K_PAGES */
+
+#ifdef CONFIG_PPC_HAS_HASH_64K
/*****************************************************************************
* *
@@ -867,7 +870,7 @@ ht64_pte_insert_failure:
b ht64_bail
-#endif /* CONFIG_PPC_64K_PAGES */
+#endif /* CONFIG_PPC_HAS_HASH_64K */
/*****************************************************************************
Index: linux-cell/include/asm-powerpc/pgtable-64k.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/pgtable-64k.h 2007-02-20 18:03:14.000000000 +1100
+++ linux-cell/include/asm-powerpc/pgtable-64k.h 2007-02-20 18:10:15.000000000 +1100
@@ -35,6 +35,11 @@
#define _PAGE_HPTE_SUB 0x0ffff000 /* combo only: sub pages HPTE bits */
#define _PAGE_HPTE_SUB0 0x08000000 /* combo only: first sub page */
#define _PAGE_COMBO 0x10000000 /* this is a combo 4k page */
+
+/* Note the full page bits must be in the same location as for normal
+ * 4k pages as the same assembly will be used to insert 64K pages
+ * whether the kernel has CONFIG_PPC_64K_PAGES or not
+ */
#define _PAGE_F_SECOND 0x00008000 /* full page: hidx bits */
#define _PAGE_F_GIX 0x00007000 /* full page: hidx bits */
@@ -90,7 +95,7 @@
#define pte_iterate_hashed_end() } while(0); } } while(0)
-#define pte_pagesize_index(pte) \
+#define pte_pagesize_index(mm, addr, pte) \
(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
#endif /* __ASSEMBLY__ */
Index: linux-cell/arch/powerpc/Kconfig
===================================================================
--- linux-cell.orig/arch/powerpc/Kconfig 2007-02-20 18:10:14.000000000 +1100
+++ linux-cell/arch/powerpc/Kconfig 2007-02-20 18:10:15.000000000 +1100
@@ -905,9 +905,15 @@ config NODES_SPAN_OTHER_NODES
def_bool y
depends on NEED_MULTIPLE_NODES
+config PPC_HAS_HASH_64K
+ bool
+ depends on PPC64
+ default n
+
config PPC_64K_PAGES
bool "64k page size"
depends on PPC64
+ select PPC_HAS_HASH_64K
help
This option changes the kernel logical page size to 64k. On machines
without processor support for 64k pages, the kernel will simulate
Index: linux-cell/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hash_utils_64.c 2007-02-20 18:10:14.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hash_utils_64.c 2007-02-20 18:24:41.000000000 +1100
@@ -408,7 +408,7 @@ static void __init htab_finish_init(void
extern unsigned int *htab_call_hpte_remove;
extern unsigned int *htab_call_hpte_updatepp;
-#ifdef CONFIG_PPC_64K_PAGES
+#ifdef CONFIG_PPC_HAS_HASH_64K
extern unsigned int *ht64_call_hpte_insert1;
extern unsigned int *ht64_call_hpte_insert2;
extern unsigned int *ht64_call_hpte_remove;
@@ -658,7 +658,11 @@ int hash_page(unsigned long ea, unsigned
return 1;
}
vsid = get_vsid(mm->context.id, ea);
+#ifdef CONFIG_PPC_MM_SLICES
+ psize = get_slice_psize(mm, ea);
+#else
psize = mm->context.user_psize;
+#endif
break;
case VMALLOC_REGION_ID:
mm = &init_mm;
@@ -688,8 +692,7 @@ int hash_page(unsigned long ea, unsigned
#ifdef CONFIG_HUGETLB_PAGE
/* Handle hugepage regions */
- if (HPAGE_SHIFT &&
- unlikely(get_slice_psize(mm, ea) == mmu_huge_psize)) {
+ if (HPAGE_SHIFT && psize == mmu_huge_psize) {
DBG_LOW(" -> huge page !\n");
return hash_huge_page(mm, access, ea, vsid, local, trap);
}
@@ -716,17 +719,22 @@ int hash_page(unsigned long ea, unsigned
return 1;
}
- /* Do actual hashing */
-#ifndef CONFIG_PPC_64K_PAGES
- rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);
-#else
+ /* Handle demotion to 4K pages in situations where 64K pages are
+ * not supported for cache inhibited mappings
+ */
+#ifdef CONFIG_PPC_64K_PAGES
if (mmu_ci_restrictions)
- psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, user_region);
+ psize = hash_handle_ci_restrictions(mm, ea, ptep, psize,
+ user_region);
+#endif /* CONFIG_PPC_64K_PAGES */
+
+ /* Do actual hashing */
+#ifdef CONFIG_PPC_HAS_HASH_64K
if (psize == MMU_PAGE_64K)
rc = __hash_page_64K(ea, access, vsid, ptep, trap, local);
else
+#endif /* CONFIG_PPC_HAS_HASH_64K */
rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);
-#endif /* CONFIG_PPC_64K_PAGES */
#ifndef CONFIG_PPC_64K_PAGES
DBG_LOW(" o-pte: %016lx\n", pte_val(*ptep));
@@ -775,17 +783,19 @@ void hash_preload(struct mm_struct *mm,
mask = cpumask_of_cpu(smp_processor_id());
if (cpus_equal(mm->cpu_vm_mask, mask))
local = 1;
+
psize = mm->context.user_psize;
-#ifndef CONFIG_PPC_64K_PAGES
- __hash_page_4K(ea, access, vsid, ptep, trap, local);
-#else
+#ifdef CONFIG_PPC_64K_PAGES
if (mmu_ci_restrictions)
psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, 1);
+#endif /* CONFIG_PPC_64K_PAGES */
+
+#ifdef CONFIG_PPC_HAS_HASH_64K
if (psize == MMU_PAGE_64K)
__hash_page_64K(ea, access, vsid, ptep, trap, local);
else
- __hash_page_4K(ea, access, vsid, ptep, trap, local);
#endif /* CONFIG_PPC_64K_PAGES */
+ __hash_page_4K(ea, access, vsid, ptep, trap, local);
local_irq_restore(flags);
}
Index: linux-cell/arch/powerpc/mm/tlb_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/tlb_64.c 2007-02-20 18:03:14.000000000 +1100
+++ linux-cell/arch/powerpc/mm/tlb_64.c 2007-02-20 18:10:15.000000000 +1100
@@ -140,16 +140,22 @@ void hpte_update(struct mm_struct *mm, u
*/
addr &= PAGE_MASK;
- /* Get page size (maybe move back to caller) */
+ /* Get page size (maybe move back to caller).
+ *
+ * NOTE: when using special 64K mappings in 4K environment like
+ * for SPEs, we obtain the page size from the slice, which thus
+ * must still exist (and thus the VMA not reused) at the time
+ * of this call
+ */
if (huge) {
#ifdef CONFIG_HUGETLB_PAGE
psize = mmu_huge_psize;
#else
BUG();
- psize = pte_pagesize_index(pte); /* shutup gcc */
+ psize = pte_pagesize_index(mm, addr, pte); /* shutup gcc */
#endif
} else
- psize = pte_pagesize_index(pte);
+ psize = pte_pagesize_index(mm, addr, pte);
/*
* This can happen when we are in the middle of a TLB batch and
Index: linux-cell/include/asm-powerpc/pgtable-4k.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/pgtable-4k.h 2007-02-20 18:03:14.000000000 +1100
+++ linux-cell/include/asm-powerpc/pgtable-4k.h 2007-02-20 18:10:15.000000000 +1100
@@ -78,7 +78,11 @@
#define pte_iterate_hashed_end() } while(0)
-#define pte_pagesize_index(pte) MMU_PAGE_4K
+#ifdef CONFIG_PPC_HAS_HASH_64K
+#define pte_pagesize_index(mm, addr, pte) get_slice_psize(mm, addr)
+#else
+#define pte_pagesize_index(mm, addr, pte) MMU_PAGE_4K
+#endif
/*
* 4-level page tables related bits
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 5/5] powerpc: spufs support for 64K LS mappings on 4K kernels
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
` (3 preceding siblings ...)
2007-02-20 7:44 ` [PATCH 4/5] powerpc: Add ability to 4K kernel to hash in 64K pages Benjamin Herrenschmidt
@ 2007-02-20 7:44 ` Benjamin Herrenschmidt
2007-03-01 7:29 ` [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Michael Ellerman
5 siblings, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 7:44 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
This patch adds an option to spufs, when the kernel is configured for
4K pages, to give it the ability to use 64K pages for SPE local store
mappings.
Currently, we are optimistic and try order 4 allocations when creating
contexts. If that fails, the code will automatically fall back to 4K pages.
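Spelling out the "order 4" arithmetic (these mirror the definitions added in
lscsa_alloc.c below, assuming the usual 4K base page, i.e. PAGE_SHIFT == 12):

	#define SPU_64K_PAGE_SHIFT	16
	#define SPU_64K_PAGE_ORDER	(SPU_64K_PAGE_SHIFT - PAGE_SHIFT)  /* 16 - 12 = 4 */
	#define SPU_64K_PAGE_COUNT	(1ul << SPU_64K_PAGE_ORDER)        /* 16 x 4K = 64K */
	/* struct spu_lscsa is the 256K local store image plus the register
	 * save area, so SPU_LSCSA_NUM_BIG_PAGES currently works out to 5
	 * such 64K allocations per context */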
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/platforms/cell/Kconfig | 15 +
arch/powerpc/platforms/cell/spufs/Makefile | 2
arch/powerpc/platforms/cell/spufs/context.c | 4
arch/powerpc/platforms/cell/spufs/file.c | 77 ++++++++--
arch/powerpc/platforms/cell/spufs/lscsa_alloc.c | 181 ++++++++++++++++++++++++
arch/powerpc/platforms/cell/spufs/switch.c | 28 +--
include/asm-powerpc/spu_csa.h | 10 +
7 files changed, 282 insertions(+), 35 deletions(-)
Index: linux-cell/arch/powerpc/platforms/cell/Kconfig
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/Kconfig 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/Kconfig 2007-02-20 18:24:58.000000000 +1100
@@ -12,6 +12,21 @@ config SPU_FS
Units on machines implementing the Broadband Processor
Architecture.
+config SPU_FS_64K_LS
+ bool "Use 64K pages to map SPE local store"
+ # we depend on PPC_MM_SLICES for now rather than selecting
+ # it because we depend on hugetlbfs hooks being present. We
+ # will fix that when the generic code has been improved to
+ # not require hijacking hugetlbfs hooks.
+ depends on SPU_FS && PPC_MM_SLICES && !PPC_64K_PAGES
+ default y
+ select PPC_HAS_HASH_64K
+ help
+ This option causes SPE local stores to be mapped in process
+ address spaces using 64K pages while the rest of the kernel
+ uses 4K pages. This can improve the performance of applications
+ using multiple SPEs by lowering the TLB pressure on them.
+
config SPU_BASE
bool
default n
Index: linux-cell/arch/powerpc/platforms/cell/spufs/context.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/context.c 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/context.c 2007-02-20 18:24:58.000000000 +1100
@@ -36,10 +36,8 @@ struct spu_context *alloc_spu_context(st
/* Binding to physical processor deferred
* until spu_activate().
*/
- spu_init_csa(&ctx->csa);
- if (!ctx->csa.lscsa) {
+ if (spu_init_csa(&ctx->csa))
goto out_free;
- }
spin_lock_init(&ctx->mmio_lock);
kref_init(&ctx->kref);
mutex_init(&ctx->state_mutex);
Index: linux-cell/arch/powerpc/platforms/cell/spufs/switch.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/switch.c 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/switch.c 2007-02-20 18:24:58.000000000 +1100
@@ -2185,40 +2185,30 @@ static void init_priv2(struct spu_state
* as it is by far the largest of the context save regions,
* and may need to be pinned or otherwise specially aligned.
*/
-void spu_init_csa(struct spu_state *csa)
+int spu_init_csa(struct spu_state *csa)
{
- struct spu_lscsa *lscsa;
- unsigned char *p;
+ int rc;
if (!csa)
- return;
+ return -EINVAL;
memset(csa, 0, sizeof(struct spu_state));
- lscsa = vmalloc(sizeof(struct spu_lscsa));
- if (!lscsa)
- return;
+ rc = spu_alloc_lscsa(csa);
+ if (rc)
+ return rc;
- memset(lscsa, 0, sizeof(struct spu_lscsa));
- csa->lscsa = lscsa;
spin_lock_init(&csa->register_lock);
- /* Set LS pages reserved to allow for user-space mapping. */
- for (p = lscsa->ls; p < lscsa->ls + LS_SIZE; p += PAGE_SIZE)
- SetPageReserved(vmalloc_to_page(p));
-
init_prob(csa);
init_priv1(csa);
init_priv2(csa);
+
+ return 0;
}
EXPORT_SYMBOL_GPL(spu_init_csa);
void spu_fini_csa(struct spu_state *csa)
{
- /* Clear reserved bit before vfree. */
- unsigned char *p;
- for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
- ClearPageReserved(vmalloc_to_page(p));
-
- vfree(csa->lscsa);
+ spu_free_lscsa(csa);
}
EXPORT_SYMBOL_GPL(spu_fini_csa);
Index: linux-cell/include/asm-powerpc/spu_csa.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/spu_csa.h 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/include/asm-powerpc/spu_csa.h 2007-02-20 18:24:58.000000000 +1100
@@ -235,6 +235,12 @@ struct spu_priv2_collapsed {
*/
struct spu_state {
struct spu_lscsa *lscsa;
+#ifdef CONFIG_SPU_FS_64K_LS
+ int use_big_pages;
+ /* One struct page per 64k page */
+#define SPU_LSCSA_NUM_BIG_PAGES (sizeof(struct spu_lscsa) / 0x10000)
+ struct page *lscsa_pages[SPU_LSCSA_NUM_BIG_PAGES];
+#endif
struct spu_problem_collapsed prob;
struct spu_priv1_collapsed priv1;
struct spu_priv2_collapsed priv2;
@@ -246,12 +252,14 @@ struct spu_state {
spinlock_t register_lock;
};
-extern void spu_init_csa(struct spu_state *csa);
+extern int spu_init_csa(struct spu_state *csa);
extern void spu_fini_csa(struct spu_state *csa);
extern int spu_save(struct spu_state *prev, struct spu *spu);
extern int spu_restore(struct spu_state *new, struct spu *spu);
extern int spu_switch(struct spu_state *prev, struct spu_state *new,
struct spu *spu);
+extern int spu_alloc_lscsa(struct spu_state *csa);
+extern void spu_free_lscsa(struct spu_state *csa);
#endif /* !__SPU__ */
#endif /* __KERNEL__ */
Index: linux-cell/arch/powerpc/platforms/cell/spufs/Makefile
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/Makefile 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/Makefile 2007-02-20 18:24:58.000000000 +1100
@@ -1,4 +1,4 @@
-obj-y += switch.o
+obj-y += switch.o lscsa_alloc.o
obj-$(CONFIG_SPU_FS) += spufs.o
spufs-y += inode.o file.o context.o syscalls.o coredump.o
Index: linux-cell/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-cell/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c 2007-02-20 18:29:54.000000000 +1100
@@ -0,0 +1,181 @@
+/*
+ * SPU local store allocation routines
+ *
+ * Copyright 2007 Benjamin Herrenschmidt, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#undef DEBUG
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+
+#include <asm/spu.h>
+#include <asm/spu_csa.h>
+#include <asm/mmu.h>
+
+static int spu_alloc_lscsa_std(struct spu_state *csa)
+{
+ struct spu_lscsa *lscsa;
+ unsigned char *p;
+
+ lscsa = vmalloc(sizeof(struct spu_lscsa));
+ if (!lscsa)
+ return -ENOMEM;
+ memset(lscsa, 0, sizeof(struct spu_lscsa));
+ csa->lscsa = lscsa;
+
+ /* Set LS pages reserved to allow for user-space mapping. */
+ for (p = lscsa->ls; p < lscsa->ls + LS_SIZE; p += PAGE_SIZE)
+ SetPageReserved(vmalloc_to_page(p));
+
+ return 0;
+}
+
+static void spu_free_lscsa_std(struct spu_state *csa)
+{
+ /* Clear reserved bit before vfree. */
+ unsigned char *p;
+
+ if (csa->lscsa == NULL)
+ return;
+
+ for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
+ ClearPageReserved(vmalloc_to_page(p));
+
+ vfree(csa->lscsa);
+}
+
+#ifdef CONFIG_SPU_FS_64K_LS
+
+#define SPU_64K_PAGE_SHIFT 16
+#define SPU_64K_PAGE_ORDER (SPU_64K_PAGE_SHIFT - PAGE_SHIFT)
+#define SPU_64K_PAGE_COUNT (1ul << SPU_64K_PAGE_ORDER)
+
+int spu_alloc_lscsa(struct spu_state *csa)
+{
+ struct page **pgarray;
+ unsigned char *p;
+ int i, j, n_4k;
+
+ /* Check availability of 64K pages */
+ if (mmu_psize_defs[MMU_PAGE_64K].shift == 0)
+ goto fail;
+
+ csa->use_big_pages = 1;
+
+ pr_debug("spu_alloc_lscsa(csa=0x%p), trying to allocate 64K pages\n",
+ csa);
+
+ /* First try to allocate our 64K pages. We need 5 of them
+ * with the current implementation. In the future, we should try
+ * to separate the lscsa with the actual local store image, thus
+ * allowing us to require only 4 64K pages per context
+ */
+ for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++) {
+ /* XXX This is likely to fail, we should use a special pool
+ * similar to what hugetlbfs does.
+ */
+ csa->lscsa_pages[i] = alloc_pages(GFP_KERNEL,
+ SPU_64K_PAGE_ORDER);
+ if (csa->lscsa_pages[i] == NULL)
+ goto fail;
+ }
+
+ pr_debug(" success ! creating vmap...\n");
+
+ /* Now we need to create a vmalloc mapping of these for the kernel
+ * and SPU context switch code to use. Currently, we stick to a
+ * normal kernel vmalloc mapping, which in our case will be 4K
+ */
+ n_4k = SPU_64K_PAGE_COUNT * SPU_LSCSA_NUM_BIG_PAGES;
+ pgarray = kmalloc(sizeof(struct page *) * n_4k, GFP_KERNEL);
+ if (pgarray == NULL)
+ goto fail;
+ for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++)
+ for (j = 0; j < SPU_64K_PAGE_COUNT; j++)
+ /* We assume all the struct page's are contiguous
+ * which should be hopefully the case for an order 4
+ * allocation..
+ */
+ pgarray[i * SPU_64K_PAGE_COUNT + j] =
+ csa->lscsa_pages[i] + j;
+ csa->lscsa = vmap(pgarray, n_4k, VM_USERMAP, PAGE_KERNEL);
+ kfree(pgarray);
+ if (csa->lscsa == NULL)
+ goto fail;
+
+ memset(csa->lscsa, 0, sizeof(struct spu_lscsa));
+
+ /* Set LS pages reserved to allow for user-space mapping.
+ *
+ * XXX isn't that a bit obsolete ? I think we should just
+ * make sure the page count is high enough. Anyway, won't harm
+ * for now
+ */
+ for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
+ SetPageReserved(vmalloc_to_page(p));
+
+ pr_debug(" all good !\n");
+
+ return 0;
+fail:
+ pr_debug("spufs: failed to allocate lscsa 64K pages, falling back\n");
+ spu_free_lscsa(csa);
+ return spu_alloc_lscsa_std(csa);
+}
+
+void spu_free_lscsa(struct spu_state *csa)
+{
+ unsigned char *p;
+ int i;
+
+ if (!csa->use_big_pages) {
+ spu_free_lscsa_std(csa);
+ return;
+ }
+ csa->use_big_pages = 0;
+
+ if (csa->lscsa == NULL)
+ goto free_pages;
+
+ for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
+ ClearPageReserved(vmalloc_to_page(p));
+
+ vunmap(csa->lscsa);
+ csa->lscsa = NULL;
+
+ free_pages:
+
+ for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++)
+ if (csa->lscsa_pages[i])
+ __free_pages(csa->lscsa_pages[i], SPU_64K_PAGE_ORDER);
+}
+
+#else /* CONFIG_SPU_FS_64K_LS */
+
+int spu_alloc_lscsa(struct spu_state *csa)
+{
+ return spu_alloc_lscsa_std(csa);
+}
+
+void spu_free_lscsa(struct spu_state *csa)
+{
+ spu_free_lscsa_std(csa);
+}
+
+#endif /* !defined(CONFIG_SPU_FS_64K_LS) */
Index: linux-cell/arch/powerpc/platforms/cell/spufs/file.c
===================================================================
--- linux-cell.orig/arch/powerpc/platforms/cell/spufs/file.c 2007-02-20 18:23:44.000000000 +1100
+++ linux-cell/arch/powerpc/platforms/cell/spufs/file.c 2007-02-20 18:29:11.000000000 +1100
@@ -98,14 +98,32 @@ spufs_mem_write(struct file *file, const
static unsigned long spufs_mem_mmap_nopfn(struct vm_area_struct *vma,
unsigned long address)
{
- struct spu_context *ctx = vma->vm_file->private_data;
- unsigned long pfn, offset = address - vma->vm_start;
-
- offset += vma->vm_pgoff << PAGE_SHIFT;
+ struct spu_context *ctx = vma->vm_file->private_data;
+ unsigned long pfn, offset, addr0 = address;
+#ifdef CONFIG_SPU_FS_64K_LS
+ struct spu_state *csa = &ctx->csa;
+ int psize;
+
+ /* Check what page size we are using */
+ psize = get_slice_psize(vma->vm_mm, address);
+
+ /* Some sanity checking */
+ BUG_ON(csa->use_big_pages != (psize == MMU_PAGE_64K));
+
+ /* Wow, 64K, cool, we need to align the address though */
+ if (csa->use_big_pages) {
+ BUG_ON(vma->vm_start & 0xffff);
+ address &= ~0xfffful;
+ }
+#endif /* CONFIG_SPU_FS_64K_LS */
+ offset = (address - vma->vm_start) + (vma->vm_pgoff << PAGE_SHIFT);
if (offset >= LS_SIZE)
return NOPFN_SIGBUS;
+ pr_debug("spufs_mem_mmap_nopfn address=0x%lx -> 0x%lx, offset=0x%lx\n",
+ addr0, address, offset);
+
spu_acquire(ctx);
if (ctx->state == SPU_STATE_SAVED) {
@@ -129,9 +147,24 @@ static struct vm_operations_struct spufs
.nopfn = spufs_mem_mmap_nopfn,
};
-static int
-spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
{
+#ifdef CONFIG_SPU_FS_64K_LS
+ struct spu_context *ctx = file->private_data;
+ struct spu_state *csa = &ctx->csa;
+
+ /* Sanity check VMA alignment */
+ if (csa->use_big_pages) {
+ pr_debug("spufs_mem_mmap 64K, start=0x%lx, end=0x%lx,"
+ " pgoff=0x%lx\n", vma->vm_start, vma->vm_end,
+ vma->vm_pgoff);
+ if (vma->vm_start & 0xffff)
+ return -EINVAL;
+ if (vma->vm_pgoff & 0xf)
+ return -EINVAL;
+ }
+#endif /* CONFIG_SPU_FS_64K_LS */
+
if (!(vma->vm_flags & VM_SHARED))
return -EINVAL;
@@ -143,12 +176,34 @@ spufs_mem_mmap(struct file *file, struct
return 0;
}
+#ifdef CONFIG_SPU_FS_64K_LS
+unsigned long spufs_get_unmapped_area(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ struct spu_context *ctx = file->private_data;
+ struct spu_state *csa = &ctx->csa;
+
+ /* If not using big pages, fallback to normal MM g_u_a */
+ if (!csa->use_big_pages)
+ return current->mm->get_unmapped_area(file, addr, len,
+ pgoff, flags);
+
+ /* Else, try to obtain a 64K pages slice */
+ return slice_get_unmapped_area(addr, len, flags,
+ MMU_PAGE_64K, 1, 0);
+}
+#endif /* CONFIG_SPU_FS_64K_LS */
+
static const struct file_operations spufs_mem_fops = {
- .open = spufs_mem_open,
- .read = spufs_mem_read,
- .write = spufs_mem_write,
- .llseek = generic_file_llseek,
- .mmap = spufs_mem_mmap,
+ .open = spufs_mem_open,
+ .read = spufs_mem_read,
+ .write = spufs_mem_write,
+ .llseek = generic_file_llseek,
+ .mmap = spufs_mem_mmap,
+#ifdef CONFIG_SPU_FS_64K_LS
+ .get_unmapped_area = spufs_get_unmapped_area,
+#endif
};
static unsigned long spufs_ps_nopfn(struct vm_area_struct *vma,
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH] Allow spufs to build as a module with slices enabled
2007-02-20 7:44 ` [PATCH 3/5] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
@ 2007-03-01 6:11 ` Michael Ellerman
2007-03-02 2:15 ` [Cbe-oss-dev] " Christoph Hellwig
0 siblings, 1 reply; 13+ messages in thread
From: Michael Ellerman @ 2007-03-01 6:11 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linuxppc-dev, Arnd Bergmann, cbe-oss-dev
The slice code is missing some exports to allow spufs to build as a
module. Add them.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
---
MODPOST 209 modules
WARNING: ".get_slice_psize" [arch/powerpc/platforms/cell/spufs/spufs.ko] undefined!
WARNING: ".slice_get_unmapped_area" [arch/powerpc/platforms/cell/spufs/spufs.ko] undefined!
---
arch/powerpc/mm/slice.c | 3 +++
1 file changed, 3 insertions(+)
Index: ecell/arch/powerpc/mm/slice.c
===================================================================
--- ecell.orig/arch/powerpc/mm/slice.c
+++ ecell/arch/powerpc/mm/slice.c
@@ -29,6 +29,7 @@
#include <linux/pagemap.h>
#include <linux/err.h>
#include <linux/spinlock.h>
+#include <linux/module.h>
#include <asm/mman.h>
#include <asm/mmu.h>
#include <asm/spu.h>
@@ -499,6 +500,7 @@ unsigned long slice_get_unmapped_area(un
return addr;
}
+EXPORT_SYMBOL(slice_get_unmapped_area);
unsigned long arch_get_unmapped_area(struct file *filp,
unsigned long addr,
@@ -537,6 +539,7 @@ unsigned int get_slice_psize(struct mm_s
return (psizes >> (index * 4)) & 0xf;
}
+EXPORT_SYMBOL(get_slice_psize);
/*
* This is called by hash_page when it needs to do a lazy conversion of
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
` (4 preceding siblings ...)
2007-02-20 7:44 ` [PATCH 5/5] powerpc: spufs support for 64K LS mappings on 4K kernels Benjamin Herrenschmidt
@ 2007-03-01 7:29 ` Michael Ellerman
5 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2007-03-01 7:29 UTC (permalink / raw)
To: Benjamin Herrenschmidt; +Cc: linuxppc-dev, cbe-oss-dev, Arnd Bergmann
On Tue, 2007-02-20 at 18:44 +1100, Benjamin Herrenschmidt wrote:
> This serie of patches supports userland mappings of SPE local stores
> using 64K hardware pages rather than 4K on a kernel using 4K pages to
> improve performances.
>
> The current version of this serie relies on a hack to the generic code
> which is probably not acceptable upsteam. I have plans to do a proper
> fix but haven't had time to do it yet.
>
> The first patch of the serie is fairly independant of the rest and
> should be applied to 2.6.21 as I beleive it fixes a bug with handling
> of huge pages from SPEs.
Unfortunately this seems to break a test case I have which DMAs between
two SPEs.
The process just seems to hang; the machine is still pingable. gdb hangs
attaching to the spu process, but killing the spu process from another
console seems to return the system to normal.
cheers
--
Michael Ellerman
OzLabs, IBM Australia Development Lab
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Cbe-oss-dev] [PATCH] Allow spufs to build as a module with slices enabled
2007-03-01 6:11 ` [PATCH] Allow spufs to build as a module with slices enabled Michael Ellerman
@ 2007-03-02 2:15 ` Christoph Hellwig
2007-03-02 4:10 ` Michael Ellerman
0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2007-03-02 2:15 UTC (permalink / raw)
To: Michael Ellerman; +Cc: Arnd Bergmann, cbe-oss-dev, linuxppc-dev
On Thu, Mar 01, 2007 at 03:11:31PM +0900, Michael Ellerman wrote:
> The slice code is missing some exports to allow spufs to build as a
> module. Add them.
should be _GPL exports I think.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [Cbe-oss-dev] [PATCH] Allow spufs to build as a module with slices enabled
2007-03-02 2:15 ` [Cbe-oss-dev] " Christoph Hellwig
@ 2007-03-02 4:10 ` Michael Ellerman
0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2007-03-02 4:10 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Arnd Bergmann, cbe-oss-dev, linuxppc-dev
On Fri, 2007-03-02 at 03:15 +0100, Christoph Hellwig wrote:
> On Thu, Mar 01, 2007 at 03:11:31PM +0900, Michael Ellerman wrote:
> > The slice code is missing some exports to allow spufs to build as a
> > module. Add them.
>
> should be _GPL exports I think.
Yeah of course, I just hacked it up to get it building and didn't think
about it much. Benh said he'd add _GPL and respin the patch.
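For reference, the respin would presumably just switch the two new exports in
slice.c to the GPL-only variant:

	EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
	EXPORT_SYMBOL_GPL(get_slice_psize);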
cheers
--
Michael Ellerman
OzLabs, IBM Australia Development Lab
wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)
We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel
@ 2007-03-02 11:32 Benjamin Herrenschmidt
2007-04-21 20:33 ` Arnd Bergmann
0 siblings, 1 reply; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-03-02 11:32 UTC (permalink / raw)
To: linuxppc-dev, cbe-oss-dev; +Cc: Arnd Bergmann
This series of patches supports userland mappings of SPE local stores
using 64K hardware pages rather than 4K on a kernel using 4K pages, to
improve performance.
The current version of this series relies on a hack to the generic code
which is probably not acceptable upstream. I have plans to do a proper
fix but haven't had time to do it yet.
The first patch of the series is fairly independent of the rest and
should be applied to 2.6.21, as I believe it fixes a bug with the handling
of huge pages from SPEs.
This drop fixes a bug in the previous version where the wrong PTE could
be used by the hash code, causing a failure to hash when accessing a
64K mapping. I also added the missing symbol exports for modules, as
pointed out by Michael.
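As a side note, none of this changes the userland view of the local store: an
application (or libspe on its behalf) still just mmap()s the context's "mem"
file and the kernel picks the backing page size. A minimal sketch, with a
hypothetical spufs mount point and context name:

	#include <fcntl.h>
	#include <stddef.h>
	#include <sys/mman.h>

	#define LS_SIZE	(256 * 1024)	/* SPE local store size */

	/* map the local store of an existing spufs context; "mem_path" would
	 * be something like "/spu/myctx/mem" (hypothetical), and the fd is
	 * deliberately kept open for the lifetime of the mapping */
	static void *map_local_store(const char *mem_path)
	{
		int fd = open(mem_path, O_RDWR);
		if (fd < 0)
			return MAP_FAILED;
		/* spufs only accepts shared mappings; with CONFIG_SPU_FS_64K_LS
		 * its get_unmapped_area hook places this in a 64K slice when
		 * 64K backing was allocated, otherwise it quietly stays on 4K */
		return mmap(NULL, LS_SIZE, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	}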
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel
2007-03-02 11:32 Benjamin Herrenschmidt
@ 2007-04-21 20:33 ` Arnd Bergmann
2007-04-21 22:44 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 13+ messages in thread
From: Arnd Bergmann @ 2007-04-21 20:33 UTC (permalink / raw)
To: linuxppc-dev; +Cc: cbe-oss-dev
On Friday 02 March 2007, Benjamin Herrenschmidt wrote:
> This serie of patches supports userland mappings of SPE local stores
> using 64K hardware pages rather than 4K on a kernel using 4K pages to
> improve performances.
Ben, I have problems porting these patches to powerpc.git#for-2.6.22.
Could you port them yourself and submit them to Paul? What is the
status of the second patch? Do you have a version suitable for upstream?
Arnd <><
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel
2007-04-21 20:33 ` Arnd Bergmann
@ 2007-04-21 22:44 ` Benjamin Herrenschmidt
0 siblings, 0 replies; 13+ messages in thread
From: Benjamin Herrenschmidt @ 2007-04-21 22:44 UTC (permalink / raw)
To: Arnd Bergmann; +Cc: linuxppc-dev, cbe-oss-dev
On Sat, 2007-04-21 at 22:33 +0200, Arnd Bergmann wrote:
> On Friday 02 March 2007, Benjamin Herrenschmidt wrote:
> > This serie of patches supports userland mappings of SPE local stores
> > using 64K hardware pages rather than 4K on a kernel using 4K pages to
> > improve performances.
>
> Ben, I have problems porting these patches to powerpc.git#for-2.6.22.
> Could you port them yourself and submit them to Paul? What is the
> status of the second patch, do you have a version suitable for upstream?
I have more up-to-date versions of these; I'll do the port. The generic
change needs to be done differently for upstream. I've been regularly
posting a series of patches that change get_unmapped_area() for that,
which I hope will go upstream, but I have yet to get any feedback from
akpm...
Ben.
^ permalink raw reply [flat|nested] 13+ messages in thread
Thread overview: 13+ messages
2007-02-20 7:44 [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 2/5] This is a hack to get_unmapped_area to make the SPE 64K code work Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 1/5] powerpc: Fix spu SLB invalidations Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 3/5] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
2007-03-01 6:11 ` [PATCH] Allow spufs to build as a module with slices enabled Michael Ellerman
2007-03-02 2:15 ` [Cbe-oss-dev] " Christoph Hellwig
2007-03-02 4:10 ` Michael Ellerman
2007-02-20 7:44 ` [PATCH 4/5] powerpc: Add ability to 4K kernel to hash in 64K pages Benjamin Herrenschmidt
2007-02-20 7:44 ` [PATCH 5/5] powerpc: spufs support for 64K LS mappings on 4K kernels Benjamin Herrenschmidt
2007-03-01 7:29 ` [PATCH 0/5] Support 64K pages mapping of SPE local stores on 4K kernel Michael Ellerman
-- strict thread matches above, loose matches on Subject: below --
2007-03-02 11:32 Benjamin Herrenschmidt
2007-04-21 20:33 ` Arnd Bergmann
2007-04-21 22:44 ` Benjamin Herrenschmidt