xen-devel.lists.xenproject.org archive mirror
* [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit
@ 2013-06-18 13:26 Ian Campbell
  2013-06-18 13:26 ` [PATCH 01/10] xen: arm: Use _AC macros in config.h Ian Campbell
                   ` (9 more replies)
  0 siblings, 10 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: Tim Deegan, Stefano Stabellini

The following series is targeted at 4.4 and implements a large direct
mapping of RAM for the 64-bit ARM port. I've not got any real hardware
and haven't tried to measure on the emulators, but making map_domain_page
turn into simple arithmetic should be an obvious win.

Ian.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH 01/10] xen: arm: Use _AC macros in config.h
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:29   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 02/10] xen: arm: Add zeroeth level page table macros and defines Ian Campbell
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/config.h |   18 ++++++++++--------
 1 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index e3cfaf1..7e5daa0 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -72,6 +72,8 @@
   END(name)
 #endif
 
+#include <xen/const.h>
+
 /*
  * Memory layout:
  *  0  -   2M   Unmapped
@@ -91,14 +93,14 @@
  *   - in setup_pagetables() when relocating Xen.
  */
 
-#define XEN_VIRT_START         mk_unsigned_long(0x00200000)
-#define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
-#define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
-#define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
-#define VMAP_VIRT_START        mk_unsigned_long(0x10000000)
-#define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
-#define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
-#define DOMHEAP_VIRT_END       mk_unsigned_long(0xffffffff)
+#define XEN_VIRT_START         _AC(0x00200000,UL)
+#define FIXMAP_ADDR(n)        (_AC(0x00400000,UL) + (n) * PAGE_SIZE)
+#define BOOT_MISC_VIRT_START   _AC(0x00600000,UL)
+#define FRAMETABLE_VIRT_START  _AC(0x02000000,UL)
+#define VMAP_VIRT_START        _AC(0x10000000,UL)
+#define XENHEAP_VIRT_START     _AC(0x40000000,UL)
+#define DOMHEAP_VIRT_START     _AC(0x80000000,UL)
+#define DOMHEAP_VIRT_END       _AC(0xffffffff,UL)
 
 #define VMAP_VIRT_END          XENHEAP_VIRT_START
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
-- 
1.7.2.5


* [PATCH 02/10] xen: arm: Add zeroeth level page table macros and defines
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
  2013-06-18 13:26 ` [PATCH 01/10] xen: arm: Use _AC macros in config.h Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-18 13:26 ` [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig" Ian Campbell
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/page.h |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 41e9eff..6317090 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -337,7 +337,15 @@ static inline int gva_to_ipa(vaddr_t va, paddr_t *paddr)
 #define second_table_offset(va) TABLE_OFFSET(second_linear_offset(va))
 #define third_table_offset(va)  TABLE_OFFSET(third_linear_offset(va))
 
-#define clear_page(page)memset((void *)(page), 0, PAGE_SIZE)
+#ifdef CONFIG_ARM_64
+#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
+#define ZEROETH_SIZE   (1u << ZEROETH_SHIFT)
+#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
+#define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
+#define zeroeth_table_offset(va)  TABLE_OFFSET(zeroeth_linear_offset(va))
+#endif
+
+#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
 
 #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)
 
-- 
1.7.2.5


* [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig"
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
  2013-06-18 13:26 ` [PATCH 01/10] xen: arm: Use _AC macros in config.h Ian Campbell
  2013-06-18 13:26 ` [PATCH 02/10] xen: arm: Add zeroeth level page table macros and defines Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:29   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 04/10] xen: arm: refactor create_mappings Ian Campbell
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c          |    2 +-
 xen/include/asm-arm/page.h |    4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d1290cd..d57e44e 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -526,7 +526,7 @@ static void __init create_mappings(unsigned long virt,
     count = nr_mfns / LPAE_ENTRIES;
     p = xen_second + second_linear_offset(virt);
     pte = mfn_to_xen_entry(base_mfn);
-    pte.pt.hint = 1;  /* These maps are in 16-entry contiguous chunks. */
+    pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
     for ( i = 0; i < count; i++ )
     {
         write_pte(p + i, pte);
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 6317090..e34a8d6 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -115,7 +115,7 @@ typedef struct {
 
     /* These seven bits are only used in Block entries and are ignored
      * in Table entries. */
-    unsigned long hint:1;       /* In a block of 16 contiguous entries */
+    unsigned long contig:1;     /* In a block of 16 contiguous entries */
     unsigned long pxn:1;        /* Privileged-XN */
     unsigned long xn:1;         /* eXecute-Never */
     unsigned long avail:4;      /* Ignored by hardware */
@@ -150,7 +150,7 @@ typedef struct {
 
     /* These seven bits are only used in Block entries and are ignored
      * in Table entries. */
-    unsigned long hint:1;       /* In a block of 16 contiguous entries */
+    unsigned long contig:1;     /* In a block of 16 contiguous entries */
     unsigned long sbz2:1;
     unsigned long xn:1;         /* eXecute-Never */
     unsigned long avail:4;      /* Ignored by hardware */
-- 
1.7.2.5


* [PATCH 04/10] xen: arm: refactor create_mappings
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (2 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig" Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:40   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root Ian Campbell
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

Allow it to work on contiguous second level tables mapping an arbitrary region
of memory. Rename it to create_32mb_mappings.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c |   19 +++++++++++--------
 1 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d57e44e..0698c75 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -511,20 +511,23 @@ void __cpuinit mmu_init_secondary_cpu(void)
 }
 
 /* Create Xen's mappings of memory.
- * Base and virt must be 32MB aligned and size a multiple of 32MB. */
-static void __init create_mappings(unsigned long virt,
-                                   unsigned long base_mfn,
-                                   unsigned long nr_mfns)
+ * Base and virt must be 32MB aligned and size a multiple of 32MB.
+ * second must be a contiguous set of second level page tables
+ * covering the region starting at virt_offset. */
+static void __init create_32mb_mappings(lpae_t *second,
+                                        unsigned long virt_offset,
+                                        unsigned long base_mfn,
+                                        unsigned long nr_mfns)
 {
     unsigned long i, count;
     lpae_t pte, *p;
 
-    ASSERT(!((virt >> PAGE_SHIFT) % (16 * LPAE_ENTRIES)));
+    ASSERT(!((virt_offset >> PAGE_SHIFT) % (16 * LPAE_ENTRIES)));
     ASSERT(!(base_mfn % (16 * LPAE_ENTRIES)));
     ASSERT(!(nr_mfns % (16 * LPAE_ENTRIES)));
 
     count = nr_mfns / LPAE_ENTRIES;
-    p = xen_second + second_linear_offset(virt);
+    p = second + second_linear_offset(virt_offset);
     pte = mfn_to_xen_entry(base_mfn);
     pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
     for ( i = 0; i < count; i++ )
@@ -539,7 +542,7 @@ static void __init create_mappings(unsigned long virt,
 void __init setup_xenheap_mappings(unsigned long base_mfn,
                                    unsigned long nr_mfns)
 {
-    create_mappings(XENHEAP_VIRT_START, base_mfn, nr_mfns);
+    create_32mb_mappings(xen_second, XENHEAP_VIRT_START, base_mfn, nr_mfns);
 
     /* Record where the xenheap is, for translation routines. */
     xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
@@ -559,7 +562,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     /* Round up to 32M boundary */
     frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
     base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
-    create_mappings(FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
+    create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
 
     memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
     memset(&frame_table[nr_pages], -1,
-- 
1.7.2.5


* [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (3 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 04/10] xen: arm: refactor create_mappings Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:30   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper Ian Campbell
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0698c75..773fbf6 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -60,6 +60,7 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
 /* xen_dommap == pages used by map_domain_page, these pages contain
  * the second level pagetables which mapp the domheap region
  * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
@@ -147,7 +148,7 @@ done:
 void dump_hyp_walk(vaddr_t addr)
 {
     uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
-    lpae_t *pgtable = this_cpu(xen_pgtable);
+    lpae_t *pgtable = THIS_CPU_PGTABLE;
 
     printk("Walking Hypervisor VA 0x%"PRIvaddr" "
            "on CPU%d via TTBR 0x%016"PRIx64"\n",
@@ -502,7 +503,7 @@ void __cpuinit mmu_init_secondary_cpu(void)
     uint64_t ttbr;
 
     /* Change to this CPU's pagetables */
-    ttbr = (uintptr_t)virt_to_maddr(this_cpu(xen_pgtable));
+    ttbr = (uintptr_t)virt_to_maddr(THIS_CPU_PGTABLE);
     WRITE_TTBR(ttbr);
 
     /* From now on, no mapping may be both writable and executable. */
-- 
1.7.2.5


* [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (4 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:32   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn Ian Campbell
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

The open-coded version is pretty ugly, not helped by rigid enforcement of
the 80-character line limit.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c |   13 +++++++++----
 1 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 773fbf6..5ad687f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -331,6 +331,13 @@ void __cpuinit setup_virt_paging(void)
      * write to TTBR0 has completed. */                                 \
     flush_xen_text_tlb()
 
+static inline lpae_t pte_of_xenaddr(vaddr_t va)
+{
+    paddr_t ma = va + phys_offset;
+    unsigned long mfn = ma >> PAGE_SHIFT;
+    return mfn_to_xen_entry(mfn);
+}
+
 /* Boot-time pagetable setup.
  * Changes here may need matching changes in head.S */
 void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
@@ -387,8 +394,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     flush_xen_text_tlb();
 
     /* Link in the fixmap pagetable */
-    pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
-                           >> PAGE_SHIFT);
+    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
     pte.pt.table = 1;
     write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
     /*
@@ -415,8 +421,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
         write_pte(xen_xenmap + i, pte);
         /* No flush required here as page table is not hooked in yet. */
     }
-    pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
-                           >> PAGE_SHIFT);
+    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
     pte.pt.table = 1;
     write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
     /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
-- 
1.7.2.5


* [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (5 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:33   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer Ian Campbell
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

The current implementation is nonsense since the xenheap and the xen
text/data/etc mappings are nowhere near each other.

This is only actually used by the page offlining code, which isn't active on
ARM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/mm.h |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 5e7c5a3..baaf9c3 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -123,7 +123,9 @@ extern unsigned long xenheap_virt_end;
     unsigned long _mfn = (mfn);                                 \
     (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
 })
-#define is_xen_fixed_mfn(mfn) is_xen_heap_mfn(mfn)
+#define is_xen_fixed_mfn(mfn)                                   \
+    ((((mfn) << PAGE_SHIFT) >= virt_to_maddr(&_start)) &&       \
+     (((mfn) << PAGE_SHIFT) <= virt_to_maddr(&_end)))
 
 #define page_get_owner(_p)    (_p)->v.inuse.domain
 #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
-- 
1.7.2.5


* [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (6 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-20 16:34   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86 Ian Campbell
  2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
  9 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

This seems to be expected by common code, which passes both pointers and
unsigned long as virtual addresses. The latter case in particular is in
init_node_heap() under a DIRECTMAP_VIRT_END #ifdef, which is why it hasn't
affected us yet (but will in a subsequent patch).

The new prototypes match the x86 versions apart from using vaddr_t instead of
unsigned long.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/include/asm-arm/mm.h |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index baaf9c3..7aca836 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -208,11 +208,12 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
 
 
-static inline paddr_t virt_to_maddr(const void *va)
+static inline paddr_t __virt_to_maddr(vaddr_t va)
 {
-    uint64_t par = va_to_par((vaddr_t)va);
-    return (par & PADDR_MASK & PAGE_MASK) | ((vaddr_t) va & ~PAGE_MASK);
+    uint64_t par = va_to_par(va);
+    return (par & PADDR_MASK & PAGE_MASK) | (va & ~PAGE_MASK);
 }
+#define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
 
 static inline void *maddr_to_virt(paddr_t ma)
 {
-- 
1.7.2.5


* [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (7 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-18 16:01   ` Jan Beulich
  2013-06-20 16:35   ` Stefano Stabellini
  2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
  9 siblings, 2 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel
  Cc: keir, Ian Campbell, stefano.stabellini, tim, julien.grall,
	jbeulich

I'm going to want to disable this for 64 bit ARM.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: keir@xen.org
Cc: jbeulich@suse.com
---
 xen/common/page_alloc.c      |    2 +-
 xen/include/asm-arm/config.h |    1 +
 2 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 2162ef1..24751ff 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1305,7 +1305,7 @@ void __init scrub_heap_pages(void)
  * XEN-HEAP SUB-ALLOCATOR
  */
 
-#if !defined(CONFIG_X86)
+#if defined(CONFIG_SEPARATE_XENHEAP)
 
 void init_xenheap_pages(paddr_t ps, paddr_t pe)
 {
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 7e5daa0..fb9e93c 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -36,6 +36,7 @@
 #define CONFIG_SMP 1
 
 #define CONFIG_DOMAIN_PAGE 1
+#define CONFIG_SEPARATE_XENHEAP 1
 
 #define CONFIG_VIDEO 1
 
-- 
1.7.2.5


* [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
                   ` (8 preceding siblings ...)
  2013-06-18 13:26 ` [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86 Ian Campbell
@ 2013-06-18 13:26 ` Ian Campbell
  2013-06-18 14:05   ` Julien Grall
                     ` (2 more replies)
  9 siblings, 3 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 13:26 UTC (permalink / raw)
  To: xen-devel; +Cc: julien.grall, tim, Ian Campbell, stefano.stabellini

We have plenty of virtual address space so we can avoid needing to map and
unmap pages all the time.

A totally arbitrary choice of a 32GB frame table leads to support for
around 5TB of RAM. I haven't tested with anything near that amount of RAM,
though. There is plenty of room to expand further when that becomes
necessary.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/mm.c            |  155 ++++++++++++++++++++++++++++++++++--------
 xen/arch/arm/setup.c         |   92 +++++++++++++++++++++++++
 xen/include/asm-arm/config.h |   96 +++++++++++++++++++-------
 xen/include/asm-arm/mm.h     |   16 ++++
 4 files changed, 307 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 5ad687f..35ef56d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -37,6 +37,7 @@
 #include <public/memory.h>
 #include <xen/sched.h>
 #include <xen/vmap.h>
+#include <asm/early_printk.h>
 #include <xsm/xsm.h>
 #include <xen/pfn.h>
 
@@ -48,6 +49,14 @@ struct domain *dom_xen, *dom_io, *dom_cow;
 lpae_t boot_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
 #ifdef CONFIG_ARM_64
 lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
+/* The first page of the first level mapping of the xenheap. The
+ * subsequent xenheap first level pages are dynamically allocated, but
+ * we need this one to bootstrap ourselves. */
+lpae_t xenheap_first_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
+/* The zeroeth level slot which uses xenheap_first_first. Used because
+ * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
+ * valid for a non-xenheap mapping. */
+static __initdata int xenheap_first_first_slot = -1;
 #endif
 
 /*
@@ -57,6 +66,9 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
  * xen_second, xen_fixmap and xen_xenmap are shared between all PCPUs.
  */
 
+#ifdef CONFIG_ARM_64
+#define THIS_CPU_PGTABLE boot_pgtable
+#else
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
@@ -65,6 +77,7 @@ static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
  * the second level pagetables which mapp the domheap region
  * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
 static DEFINE_PER_CPU(lpae_t *, xen_dommap);
+#endif
 
 /* Common pagetable leaves */
 /* Second level page tables.
@@ -73,10 +86,16 @@ static DEFINE_PER_CPU(lpae_t *, xen_dommap);
  * addresses from 0 to 0x7fffffff. Offsets into it are calculated
  * with second_linear_offset(), not second_table_offset().
  *
- * Addresses 0x80000000 to 0xffffffff are covered by the per-cpu
- * xen_domheap mappings described above. However we allocate 4 pages
+ * On 32bit addresses 0x80000000 to 0xffffffff are covered by the
+ * per-cpu xen_domheap mappings described above. We allocate 4 pages
  * here for use in the boot page tables and the second two pages
  * become the boot CPUs xen_dommap pages.
+ *
+ * On 64bit addresses 0x80000000 to 0xffffffff are unused. However we
+ * allocate 4 pages here for use while relocating Xen, which currently
+ * expects a second level page to exist for all addresses in the first
+ * 4GB. We need to keep these extra mappings in place for seconary CPU
+ * bring up too. For now we just leave them forever.
  */
 lpae_t xen_second[LPAE_ENTRIES*4] __attribute__((__aligned__(4096*4)));
 /* First level page table used for fixmap */
@@ -92,7 +111,7 @@ uint64_t boot_ttbr;
 static paddr_t phys_offset;
 
 /* Limits of the Xen heap */
-unsigned long xenheap_mfn_start __read_mostly;
+unsigned long xenheap_mfn_start __read_mostly = ~0UL;
 unsigned long xenheap_mfn_end __read_mostly;
 unsigned long xenheap_virt_end __read_mostly;
 
@@ -112,7 +131,9 @@ static inline void check_memory_layout_alignment_constraints(void) {
     BUILD_BUG_ON(BOOT_MISC_VIRT_START & ~SECOND_MASK);
     /* 1GB aligned regions */
     BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
+#ifdef CONFIG_DOMAIN_PAGE
     BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
+#endif
 }
 
 void dump_pt_walk(lpae_t *first, paddr_t addr)
@@ -180,6 +201,7 @@ void clear_fixmap(unsigned map)
     flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
 }
 
+#ifdef CONFIG_DOMAIN_PAGE
 void *map_domain_page_global(unsigned long mfn)
 {
     return vmap(&mfn, 1);
@@ -284,6 +306,7 @@ unsigned long domain_page_map_to_mfn(const void *va)
 
     return map[slot].pt.base + offset;
 }
+#endif
 
 void __init arch_init_memory(void)
 {
@@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Flush everything after setting WXN bit. */
     flush_xen_text_tlb();
 
+#ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = boot_pgtable;
     per_cpu(xen_dommap, 0) = xen_second +
         second_linear_offset(DOMHEAP_VIRT_START);
@@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
     flush_xen_dcache_va_range(this_cpu(xen_dommap),
                               DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+#endif
 }
 
 int init_secondary_pagetables(int cpu)
 {
-    lpae_t *root, *first, *domheap, pte;
-    int i;
-
-    root = alloc_xenheap_page();
 #ifdef CONFIG_ARM_64
-    first = alloc_xenheap_page();
+    /* All CPUs share a single page table on 64 bit */
+    return 0;
 #else
-    first = root; /* root == first level on 32-bit 3-level trie */
-#endif
+    lpae_t *first, *domheap, pte;
+    int i;
+
+    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
     domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
 
-    if ( root == NULL || domheap == NULL || first == NULL )
+    if ( domheap == NULL || first == NULL )
     {
         printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
         free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
-#ifdef CONFIG_ARM_64
         free_xenheap_page(first);
-#endif
-        free_xenheap_page(root);
         return -ENOMEM;
     }
 
     /* Initialise root pagetable from root of boot tables */
-    memcpy(root, boot_pgtable, PAGE_SIZE);
-
-#ifdef CONFIG_ARM_64
-    /* Initialise first pagetable from first level of boot tables, and
-     * hook into the new root. */
-    memcpy(first, boot_first, PAGE_SIZE);
-    pte = mfn_to_xen_entry(virt_to_mfn(first));
-    pte.pt.table = 1;
-    write_pte(root, pte);
-#endif
+    memcpy(first, boot_pgtable, PAGE_SIZE);
 
     /* Ensure the domheap has no stray mappings */
     memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
@@ -490,16 +502,14 @@ int init_secondary_pagetables(int cpu)
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
 
-    flush_xen_dcache_va_range(root, PAGE_SIZE);
-#ifdef CONFIG_ARM_64
     flush_xen_dcache_va_range(first, PAGE_SIZE);
-#endif
     flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 
-    per_cpu(xen_pgtable, cpu) = root;
+    per_cpu(xen_pgtable, cpu) = first;
     per_cpu(xen_dommap, cpu) = domheap;
 
     return 0;
+#endif
 }
 
 /* MMU setup for secondary CPUS (which already have paging enabled) */
@@ -544,6 +554,7 @@ static void __init create_32mb_mappings(lpae_t *second,
     flush_xen_data_tlb();
 }
 
+#ifdef CONFIG_ARM_32
 /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. */
 void __init setup_xenheap_mappings(unsigned long base_mfn,
                                    unsigned long nr_mfns)
@@ -556,19 +567,107 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
     xenheap_mfn_end = base_mfn + nr_mfns;
 }
 
+#else /* CONFIG_ARM_64 */
+
+void __init setup_xenheap_mappings(unsigned long base_mfn,
+                                   unsigned long nr_mfns)
+{
+    lpae_t *first, pte;
+    unsigned long offset, end_mfn;
+    vaddr_t vaddr;
+
+    /* First call sets the xenheap physical offset. */
+    if ( xenheap_mfn_start == ~0UL )
+        xenheap_mfn_start = base_mfn;
+
+    if ( base_mfn < xenheap_mfn_start )
+        early_panic("cannot add xenheap mapping at %lx below heap start %lx\n",
+                    base_mfn, xenheap_mfn_start);
+
+    end_mfn = base_mfn + nr_mfns;
+
+    /* Align to previous 1GB boundary */
+    base_mfn &= ~FIRST_MASK;
+
+    offset = base_mfn - xenheap_mfn_start;
+    vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;
+
+    while ( base_mfn < end_mfn )
+    {
+        int slot = zeroeth_table_offset(vaddr);
+        lpae_t *p = &boot_pgtable[slot];
+
+        if ( p->pt.valid )
+        {
+            /* mfn_to_virt is not valid on the 1st 1st mfn, since it
+             * is not within the xenheap. */
+            first = slot == xenheap_first_first_slot ?
+                xenheap_first_first : mfn_to_virt(p->pt.base);
+        }
+        else if ( xenheap_first_first_slot == -1)
+        {
+            /* Use xenheap_first_first to bootstrap the mappings */
+            first = xenheap_first_first;
+
+            pte = pte_of_xenaddr((vaddr_t)xenheap_first_first);
+            pte.pt.table = 1;
+            write_pte(p, pte);
+
+            xenheap_first_first_slot = slot;
+        }
+        else
+        {
+            unsigned long first_mfn = alloc_boot_pages(1, 1);
+            pte = mfn_to_xen_entry(first_mfn);
+            pte.pt.table = 1;
+            write_pte(p, pte);
+            first = mfn_to_virt(first_mfn);
+        }
+
+        pte = mfn_to_xen_entry(base_mfn);
+        //pte.pt.contig = /* XXX need to think about the logic */
+        write_pte(&first[first_table_offset(vaddr)], pte);
+
+        base_mfn += FIRST_SIZE>>PAGE_SHIFT;
+        vaddr += FIRST_SIZE;
+    }
+
+    flush_xen_data_tlb();
+}
+#endif
+
 /* Map a frame table to cover physical addresses ps through pe */
 void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
 {
     unsigned long nr_pages = (pe - ps) >> PAGE_SHIFT;
     unsigned long frametable_size = nr_pages * sizeof(struct page_info);
     unsigned long base_mfn;
+#ifdef CONFIG_ARM_64
+    lpae_t *second, pte;
+    unsigned long nr_second, second_base;
+    int i;
+#endif
 
     frametable_base_mfn = ps >> PAGE_SHIFT;
 
     /* Round up to 32M boundary */
     frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
     base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
+
+#ifdef CONFIG_ARM_64
+    nr_second = frametable_size >> SECOND_SHIFT;
+    second_base = alloc_boot_pages(nr_second, 1);
+    second = mfn_to_virt(second_base);
+    for ( i = 0; i < nr_second; i++ )
+    {
+        pte = mfn_to_xen_entry(second_base + i);
+        pte.pt.table = 1;
+        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
+    }
+    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
+#else
     create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
+#endif
 
     memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
     memset(&frame_table[nr_pages], -1,
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index da2a734..3efa063 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -279,6 +279,7 @@ static paddr_t __init get_xen_paddr(void)
     return paddr;
 }
 
+#ifdef CONFIG_ARM_32
 static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 {
     paddr_t ram_start;
@@ -392,6 +393,97 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     end_boot_allocator();
 }
+#else /* CONFIG_ARM_64 */
+static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
+{
+    paddr_t ram_start = ~0;
+    paddr_t ram_end = 0;
+    int bank;
+    unsigned long  xenheap_pages = 0;
+    unsigned long dtb_pages;
+
+    /*
+     * TODO: only using the first RAM bank for now.  The heaps and the
+     * frame table assume RAM is physically contiguous.
+     */
+    total_pages = 0;
+    for ( bank = 0 ; bank < early_info.mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = early_info.mem.bank[bank].start;
+        paddr_t bank_size  = early_info.mem.bank[bank].size;
+        paddr_t  bank_end = bank_start + bank_size;
+        unsigned long bank_pages = bank_size >> PAGE_SHIFT;
+        paddr_t s, e;
+
+        total_pages += bank_pages;
+
+        if ( bank_start < ram_start )
+            ram_start = bank_start;
+        if ( bank_end > ram_end )
+            ram_end = bank_end;
+
+        /*
+         * Locate the xenheap using these constraints:
+         *
+         *  - must be 32 MiB aligned
+         *  - must not include Xen itself or the boot modules
+         *  - must be at most 1 GiB
+         *  - must be at least 128M
+         *
+         * We try to allocate the largest xenheap possible within these
+         * constraints.
+         */
+        xenheap_pages += (bank_size >> PAGE_SHIFT);
+
+        /* XXX we assume that the ram regions are ordered */
+        s = bank_start;
+        while ( s < bank_end )
+        {
+            paddr_t n = bank_end;
+
+            e = next_module(s, &n);
+
+            if ( e == ~(paddr_t)0 )
+            {
+                e = n = bank_end;
+            }
+
+            setup_xenheap_mappings(s>>PAGE_SHIFT, (e-s)>>PAGE_SHIFT);
+
+            xenheap_mfn_end = e;
+
+            init_boot_pages(s, e);
+            s = n;
+        }
+    }
+
+    xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
+    xenheap_mfn_start = ram_start >> PAGE_SHIFT;
+    xenheap_mfn_end = ram_end >> PAGE_SHIFT;
+    xenheap_max_mfn(xenheap_mfn_end);
+
+    /*
+     * Need enough mapped pages for copying the DTB.
+     *
+     * TODO: The DTB (and other payloads) are assumed to be towards
+     * the start of RAM.
+     */
+    dtb_pages = (dtb_size + PAGE_SIZE-1) >> PAGE_SHIFT;
+
+    /*
+     * Copy the DTB.
+     *
+     * TODO: handle other payloads too.
+     */
+    device_tree_flattened = mfn_to_virt(alloc_boot_pages(dtb_pages, 1));
+    copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
+
+    setup_frametable_mappings(ram_start, ram_end);
+    max_page = PFN_DOWN(ram_end);
+
+    end_boot_allocator();
+}
+#endif
 
 size_t __read_mostly cacheline_bytes;
 
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index fb9e93c..259d4c6 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -35,9 +35,6 @@
 
 #define CONFIG_SMP 1
 
-#define CONFIG_DOMAIN_PAGE 1
-#define CONFIG_SEPARATE_XENHEAP 1
-
 #define CONFIG_VIDEO 1
 
 #define OPT_CONSOLE_STR "dtuart"
@@ -76,38 +73,89 @@
 #include <xen/const.h>
 
 /*
- * Memory layout:
- *  0  -   2M   Unmapped
- *  2M -   4M   Xen text, data, bss
- *  4M -   6M   Fixmap: special-purpose 4K mapping slots
- *  6M  -  8M   Early boot misc (see below)
- *
- * 32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
- * 256M -  1G   VMAP: ioremap and early_ioremap use this virtual address
- *                    space
- *
- *  1G -   2G   Xenheap: always-mapped memory
- *  2G -   4G   Domheap: on-demand-mapped
+ * Common ARM32 and ARM64 layout:
+ *   0  -   2M   Unmapped
+ *   2M -   4M   Xen text, data, bss
+ *   4M -   6M   Fixmap: special-purpose 4K mapping slots
+ *   6M -   8M   Early boot misc (see below)
  *
  * The early boot misc area is used:
  *   - in head.S for the DTB for device_tree_early_init().
  *   - in setup_pagetables() when relocating Xen.
+ *
+ * ARM32 layout:
+ *   0  -   8M   <COMMON>
+ *
+ *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
+ * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
+ *                    space
+ *
+ *   1G -   2G   Xenheap: always-mapped memory
+ *   2G -   4G   Domheap: on-demand-mapped
+ *
+ * ARM64 layout:
+ * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
+ *   0  -   8M   <COMMON>
+ *
+ *   1G -   2G   VMAP: ioremap and early_ioremap
+ *
+ *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
+ *
+ * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
+ *  Unused
+ *
+ * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
+ *  1:1 mapping of RAM
+ *
+ * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
+ *  Unused
  */
 
-#define XEN_VIRT_START         _AC(0x00200000,UL)
-#define FIXMAP_ADDR(n)        (_AC(0x00400000,UL) + (n) * PAGE_SIZE)
-#define BOOT_MISC_VIRT_START   _AC(0x00600000,UL)
-#define FRAMETABLE_VIRT_START  _AC(0x02000000,UL)
-#define VMAP_VIRT_START        _AC(0x10000000,UL)
-#define XENHEAP_VIRT_START     _AC(0x40000000,UL)
-#define DOMHEAP_VIRT_START     _AC(0x80000000,UL)
-#define DOMHEAP_VIRT_END       _AC(0xffffffff,UL)
+#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
+#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
+#define BOOT_MISC_VIRT_START   _AT(vaddr_t,0x00600000)
 
-#define VMAP_VIRT_END          XENHEAP_VIRT_START
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
 
+#ifdef CONFIG_ARM_32
+
+#define CONFIG_DOMAIN_PAGE 1
+#define CONFIG_SEPARATE_XENHEAP 1
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
+#define VMAP_VIRT_START  _AT(vaddr_t,0x10000000)
+#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
+#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
+#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
+
+#define VMAP_VIRT_END    XENHEAP_VIRT_START
+
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
 
+#else /* ARM_64 */
+
+#define SLOT0_ENTRY_BITS  39
+#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
+#define SLOT0_ENTRY_SIZE  SLOT0(1)
+#define GB(_gb)     (_AC(_gb, UL) << 30)
+
+#define VMAP_VIRT_START  GB(1)
+#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1) - 1)
+
+#define FRAMETABLE_VIRT_START  GB(32)
+#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + GB(32) - 1)
+
+#define DIRECTMAP_VIRT_START   SLOT0(256)
+#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266-256))
+#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
+
+#define XENHEAP_VIRT_START     DIRECTMAP_VIRT_START
+
+#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
+
+#endif
+
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
 #define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7aca836..27284d0 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -118,11 +118,18 @@ struct page_info
 extern unsigned long xenheap_mfn_start, xenheap_mfn_end;
 extern unsigned long xenheap_virt_end;
 
+#ifdef CONFIG_ARM_32
 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
 #define is_xen_heap_mfn(mfn) ({                                 \
     unsigned long _mfn = (mfn);                                 \
     (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
 })
+#else
+#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
+#define is_xen_heap_mfn(mfn) \
+    (mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
+#endif
+
 #define is_xen_fixed_mfn(mfn)                                   \
     ((((mfn) << PAGE_SHIFT) >= virt_to_maddr(&_start)) &&       \
      (((mfn) << PAGE_SHIFT) <= virt_to_maddr(&_end)))
@@ -215,12 +222,21 @@ static inline paddr_t __virt_to_maddr(vaddr_t va)
 }
 #define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
 
+#ifdef CONFIG_ARM_32
 static inline void *maddr_to_virt(paddr_t ma)
 {
     ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
     ma -= pfn_to_paddr(xenheap_mfn_start);
     return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
 }
+#else
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
+    ma -= pfn_to_paddr(xenheap_mfn_start);
+    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
+}
+#endif
 
 static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa)
 {
-- 
1.7.2.5

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
@ 2013-06-18 14:05   ` Julien Grall
  2013-06-18 14:13     ` Ian Campbell
  2013-06-21 14:10   ` Stefano Stabellini
  2013-06-25 15:39   ` Julien Grall
  2 siblings, 1 reply; 33+ messages in thread
From: Julien Grall @ 2013-06-18 14:05 UTC (permalink / raw)
  To: Ian Campbell; +Cc: stefano.stabellini, tim, xen-devel

On 06/18/2013 02:26 PM, Ian Campbell wrote:

> We have plenty of virtual address space so we can avoid needing to map and
> unmap pages all the time.
> 
> A totally arbitrarily chosen 32GB frame table leads to support for 5TB of RAM.
> I haven't tested with anything near that amount of RAM though. There is plenty
> of room to expand further when that becomes necessary.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/mm.c            |  155 ++++++++++++++++++++++++++++++++++--------
>  xen/arch/arm/setup.c         |   92 +++++++++++++++++++++++++
>  xen/include/asm-arm/config.h |   96 +++++++++++++++++++-------
>  xen/include/asm-arm/mm.h     |   16 ++++
>  4 files changed, 307 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 5ad687f..35ef56d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -37,6 +37,7 @@
>  #include <public/memory.h>
>  #include <xen/sched.h>
>  #include <xen/vmap.h>
> +#include <asm/early_printk.h>
>  #include <xsm/xsm.h>
>  #include <xen/pfn.h>
>  
> @@ -48,6 +49,14 @@ struct domain *dom_xen, *dom_io, *dom_cow;
>  lpae_t boot_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>  #ifdef CONFIG_ARM_64
>  lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The first page of the first level mapping of the xenheap. The
> + * subsequent xenheap first level pages are dynamically allocated, but
> + * we need this one to bootstrap ourselves. */
> +lpae_t xenheap_first_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The zeroeth level slot which uses xenheap_first_first. Used because
> + * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
> + * valid for a non-xenheap mapping. */
> +static __initdata int xenheap_first_first_slot = -1;
>  #endif
>  
>  /*
> @@ -57,6 +66,9 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>   * xen_second, xen_fixmap and xen_xenmap are shared between all PCPUs.
>   */
>  
> +#ifdef CONFIG_ARM_64
> +#define THIS_CPU_PGTABLE boot_pgtable
> +#else
>  /* Per-CPU pagetable pages */
>  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> @@ -65,6 +77,7 @@ static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>   * the second level pagetables which map the domheap region
>   * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>  static DEFINE_PER_CPU(lpae_t *, xen_dommap);
> +#endif
>  
>  /* Common pagetable leaves */
>  /* Second level page tables.
> @@ -73,10 +86,16 @@ static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>   * addresses from 0 to 0x7fffffff. Offsets into it are calculated
>   * with second_linear_offset(), not second_table_offset().
>   *
> - * Addresses 0x80000000 to 0xffffffff are covered by the per-cpu
> - * xen_domheap mappings described above. However we allocate 4 pages
> + * On 32bit addresses 0x80000000 to 0xffffffff are covered by the
> + * per-cpu xen_domheap mappings described above. We allocate 4 pages
>   * here for use in the boot page tables and the second two pages
>   * become the boot CPUs xen_dommap pages.
> + *
> + * On 64bit addresses 0x80000000 to 0xffffffff are unused. However we
> + * allocate 4 pages here for use while relocating Xen, which currently
> + * expects a second level page to exist for all addresses in the first
> + * 4GB. We need to keep these extra mappings in place for secondary CPU
> + * bring up too. For now we just leave them forever.
>   */
>  lpae_t xen_second[LPAE_ENTRIES*4] __attribute__((__aligned__(4096*4)));
>  /* First level page table used for fixmap */
> @@ -92,7 +111,7 @@ uint64_t boot_ttbr;
>  static paddr_t phys_offset;
>  
>  /* Limits of the Xen heap */
> -unsigned long xenheap_mfn_start __read_mostly;
> +unsigned long xenheap_mfn_start __read_mostly = ~0UL;
>  unsigned long xenheap_mfn_end __read_mostly;
>  unsigned long xenheap_virt_end __read_mostly;
>  
> @@ -112,7 +131,9 @@ static inline void check_memory_layout_alignment_constraints(void) {
>      BUILD_BUG_ON(BOOT_MISC_VIRT_START & ~SECOND_MASK);
>      /* 1GB aligned regions */
>      BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
> +#ifdef CONFIG_DOMAIN_PAGE
>      BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
> +#endif
>  }
>  
>  void dump_pt_walk(lpae_t *first, paddr_t addr)
> @@ -180,6 +201,7 @@ void clear_fixmap(unsigned map)
>      flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
> +#ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(unsigned long mfn)
>  {
>      return vmap(&mfn, 1);
> @@ -284,6 +306,7 @@ unsigned long domain_page_map_to_mfn(const void *va)
>  
>      return map[slot].pt.base + offset;
>  }
> +#endif
>  
>  void __init arch_init_memory(void)
>  {
> @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      /* Flush everything after setting WXN bit. */
>      flush_xen_text_tlb();
>  
> +#ifdef CONFIG_ARM_32
>      per_cpu(xen_pgtable, 0) = boot_pgtable;
>      per_cpu(xen_dommap, 0) = xen_second +
>          second_linear_offset(DOMHEAP_VIRT_START);
> @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>      flush_xen_dcache_va_range(this_cpu(xen_dommap),
>                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> +#endif
>  }
>  
>  int init_secondary_pagetables(int cpu)
>  {
> -    lpae_t *root, *first, *domheap, pte;
> -    int i;
> -
> -    root = alloc_xenheap_page();
>  #ifdef CONFIG_ARM_64
> -    first = alloc_xenheap_page();
> +    /* All CPUs share a single page table on 64 bit */
> +    return 0;
>  #else
> -    first = root; /* root == first level on 32-bit 3-level trie */
> -#endif
> +    lpae_t *first, *domheap, pte;
> +    int i;
> +
> +    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
>      domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
>  
> -    if ( root == NULL || domheap == NULL || first == NULL )
> +    if ( domheap == NULL || first == NULL )
>      {
>          printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
>          free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
> -#ifdef CONFIG_ARM_64
>          free_xenheap_page(first);
> -#endif
> -        free_xenheap_page(root);
>          return -ENOMEM;
>      }
>  
>      /* Initialise root pagetable from root of boot tables */
> -    memcpy(root, boot_pgtable, PAGE_SIZE);
> -
> -#ifdef CONFIG_ARM_64
> -    /* Initialise first pagetable from first level of boot tables, and
> -     * hook into the new root. */
> -    memcpy(first, boot_first, PAGE_SIZE);
> -    pte = mfn_to_xen_entry(virt_to_mfn(first));
> -    pte.pt.table = 1;
> -    write_pte(root, pte);
> -#endif
> +    memcpy(first, boot_pgtable, PAGE_SIZE);
>  
>      /* Ensure the domheap has no stray mappings */
>      memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> @@ -490,16 +502,14 @@ int init_secondary_pagetables(int cpu)
>          write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
>      }
>  
> -    flush_xen_dcache_va_range(root, PAGE_SIZE);
> -#ifdef CONFIG_ARM_64
>      flush_xen_dcache_va_range(first, PAGE_SIZE);
> -#endif
>      flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>  
> -    per_cpu(xen_pgtable, cpu) = root;
> +    per_cpu(xen_pgtable, cpu) = first;
>      per_cpu(xen_dommap, cpu) = domheap;
>  
>      return 0;
> +#endif
>  }
>  
>  /* MMU setup for secondary CPUS (which already have paging enabled) */
> @@ -544,6 +554,7 @@ static void __init create_32mb_mappings(lpae_t *second,
>      flush_xen_data_tlb();
>  }
>  
> +#ifdef CONFIG_ARM_32
>  /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. */
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
> @@ -556,19 +567,107 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>      xenheap_mfn_end = base_mfn + nr_mfns;
>  }
>  
> +#else /* CONFIG_ARM_64 */
> +
> +void __init setup_xenheap_mappings(unsigned long base_mfn,
> +                                   unsigned long nr_mfns)
> +{
> +    lpae_t *first, pte;
> +    unsigned long offset, end_mfn;
> +    vaddr_t vaddr;
> +
> [...]
> +#ifdef CONFIG_ARM_32
>  static inline void *maddr_to_virt(paddr_t ma)
>  {
>      ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
>      ma -= pfn_to_paddr(xenheap_mfn_start);
>      return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
>  }
> +#else
> +static inline void *maddr_to_virt(paddr_t ma)
> +{
> +    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
> +    ma -= pfn_to_paddr(xenheap_mfn_start);
> +    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
> +}
> +#endif


I'm curious, this is not specific to this patch, why don't you split the
file in 3 parts (common, arm32, arm64)?

From what I've seen, we use both ways in Xen. I think it's clearer to
divide the code into different header/source files. Sometimes defines
are hard to follow.

-- 
Julien

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 14:05   ` Julien Grall
@ 2013-06-18 14:13     ` Ian Campbell
  2013-06-20 16:43       ` Stefano Stabellini
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-18 14:13 UTC (permalink / raw)
  To: Julien Grall; +Cc: stefano.stabellini, tim, xen-devel

On Tue, 2013-06-18 at 15:05 +0100, Julien Grall wrote:

Please could you trim your quotes.

> On 06/18/2013 02:26 PM, Ian Campbell wrote:
> [...]
> > +#ifdef CONFIG_ARM_32
> >  static inline void *maddr_to_virt(paddr_t ma)
> >  {
> >      ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
> >      ma -= pfn_to_paddr(xenheap_mfn_start);
> >      return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
> >  }
> > +#else
> > +static inline void *maddr_to_virt(paddr_t ma)
> > +{
> > +    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
> > +    ma -= pfn_to_paddr(xenheap_mfn_start);
> > +    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
> > +}
> > +#endif
> 
> 
> I'm curious, and this is not specific to this patch: why don't you split the
> file into 3 parts (common, arm32, arm64)?

In this specific instance I figured it was better to have this one
function which differed alongside all the other foo_to_bar
(virt_to_page, gvirt_to_maddr etc) which don't.

> From what I've seen, we use both ways in Xen. I think it's clearer to
> divide the code into different header/source files. Sometimes defines
> are hard to follow.

I'm not sure that putting them in different files makes them easier to
follow.

I'm not really sure what is best. Obviously if something is totally
different on the two different architectures (e.g. sysregs, tlbs etc)
then splitting them up makes sense. When it's just one function among
many which differs then I'm not so sure splitting is useful.

I don't have a strong opinion though.

Ian.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86
  2013-06-18 13:26 ` [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86 Ian Campbell
@ 2013-06-18 16:01   ` Jan Beulich
  2013-06-20 16:35   ` Stefano Stabellini
  1 sibling, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2013-06-18 16:01 UTC (permalink / raw)
  To: ian.campbell, xen-devel; +Cc: julien.grall, tim, keir, stefano.stabellini

>>> Ian Campbell <ian.campbell@citrix.com> 06/18/13 3:30 PM >>>
>I'm going to want to disable this for 64 bit ARM.
>
>Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 01/10] xen: arm: Use _AC macros in config.h
  2013-06-18 13:26 ` [PATCH 01/10] xen: arm: Use _AC macros in config.h Ian Campbell
@ 2013-06-20 16:29   ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:29 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/include/asm-arm/config.h |   18 ++++++++++--------
>  1 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index e3cfaf1..7e5daa0 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -72,6 +72,8 @@
>    END(name)
>  #endif
>  
> +#include <xen/const.h>
> +
>  /*
>   * Memory layout:
>   *  0  -   2M   Unmapped
> @@ -91,14 +93,14 @@
>   *   - in setup_pagetables() when relocating Xen.
>   */
>  
> -#define XEN_VIRT_START         mk_unsigned_long(0x00200000)
> -#define FIXMAP_ADDR(n)        (mk_unsigned_long(0x00400000) + (n) * PAGE_SIZE)
> -#define BOOT_MISC_VIRT_START   mk_unsigned_long(0x00600000)
> -#define FRAMETABLE_VIRT_START  mk_unsigned_long(0x02000000)
> -#define VMAP_VIRT_START        mk_unsigned_long(0x10000000)
> -#define XENHEAP_VIRT_START     mk_unsigned_long(0x40000000)
> -#define DOMHEAP_VIRT_START     mk_unsigned_long(0x80000000)
> -#define DOMHEAP_VIRT_END       mk_unsigned_long(0xffffffff)
> +#define XEN_VIRT_START         _AC(0x00200000,UL)
> +#define FIXMAP_ADDR(n)        (_AC(0x00400000,UL) + (n) * PAGE_SIZE)
> +#define BOOT_MISC_VIRT_START   _AC(0x00600000,UL)
> +#define FRAMETABLE_VIRT_START  _AC(0x02000000,UL)
> +#define VMAP_VIRT_START        _AC(0x10000000,UL)
> +#define XENHEAP_VIRT_START     _AC(0x40000000,UL)
> +#define DOMHEAP_VIRT_START     _AC(0x80000000,UL)
> +#define DOMHEAP_VIRT_END       _AC(0xffffffff,UL)
>  
>  #define VMAP_VIRT_END          XENHEAP_VIRT_START
>  #define HYPERVISOR_VIRT_START  XEN_VIRT_START
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig"
  2013-06-18 13:26 ` [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig" Ian Campbell
@ 2013-06-20 16:29   ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:29 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/mm.c          |    2 +-
>  xen/include/asm-arm/page.h |    4 ++--
>  2 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index d1290cd..d57e44e 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -526,7 +526,7 @@ static void __init create_mappings(unsigned long virt,
>      count = nr_mfns / LPAE_ENTRIES;
>      p = xen_second + second_linear_offset(virt);
>      pte = mfn_to_xen_entry(base_mfn);
> -    pte.pt.hint = 1;  /* These maps are in 16-entry contiguous chunks. */
> +    pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
>      for ( i = 0; i < count; i++ )
>      {
>          write_pte(p + i, pte);
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 6317090..e34a8d6 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -115,7 +115,7 @@ typedef struct {
>  
>      /* These seven bits are only used in Block entries and are ignored
>       * in Table entries. */
> -    unsigned long hint:1;       /* In a block of 16 contiguous entries */
> +    unsigned long contig:1;     /* In a block of 16 contiguous entries */
>      unsigned long pxn:1;        /* Privileged-XN */
>      unsigned long xn:1;         /* eXecute-Never */
>      unsigned long avail:4;      /* Ignored by hardware */
> @@ -150,7 +150,7 @@ typedef struct {
>  
>      /* These seven bits are only used in Block entries and are ignored
>       * in Table entries. */
> -    unsigned long hint:1;       /* In a block of 16 contiguous entries */
> +    unsigned long contig:1;     /* In a block of 16 contiguous entries */
>      unsigned long sbz2:1;
>      unsigned long xn:1;         /* eXecute-Never */
>      unsigned long avail:4;      /* Ignored by hardware */
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root
  2013-06-18 13:26 ` [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root Ian Campbell
@ 2013-06-20 16:30   ` Stefano Stabellini
  2013-06-25 10:49     ` Ian Campbell
  0 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:30 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Honestly I don't see this as a great improvement


>  xen/arch/arm/mm.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 0698c75..773fbf6 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -60,6 +60,7 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>  /* Per-CPU pagetable pages */
>  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> +#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
>  /* xen_dommap == pages used by map_domain_page, these pages contain
> + * the second level pagetables which map the domheap region
>   * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
> @@ -147,7 +148,7 @@ done:
>  void dump_hyp_walk(vaddr_t addr)
>  {
>      uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
> -    lpae_t *pgtable = this_cpu(xen_pgtable);
> +    lpae_t *pgtable = THIS_CPU_PGTABLE;
>  
>      printk("Walking Hypervisor VA 0x%"PRIvaddr" "
>             "on CPU%d via TTBR 0x%016"PRIx64"\n",
> @@ -502,7 +503,7 @@ void __cpuinit mmu_init_secondary_cpu(void)
>      uint64_t ttbr;
>  
>      /* Change to this CPU's pagetables */
> -    ttbr = (uintptr_t)virt_to_maddr(this_cpu(xen_pgtable));
> +    ttbr = (uintptr_t)virt_to_maddr(THIS_CPU_PGTABLE);
>      WRITE_TTBR(ttbr);
>  
>      /* From now on, no mapping may be both writable and executable. */
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper
  2013-06-18 13:26 ` [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper Ian Campbell
@ 2013-06-20 16:32   ` Stefano Stabellini
  2013-06-25 10:50     ` Ian Campbell
  0 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:32 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> The open coded version is pretty ugly, not helped by rigid enforcement of an
> 80 character line length.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Makes sense. Too bad we can't have pte_of_xenaddr in mm.h because
phys_offset is static in mm.c.

In any case

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/mm.c |   13 +++++++++----
>  1 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 773fbf6..5ad687f 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -331,6 +331,13 @@ void __cpuinit setup_virt_paging(void)
>       * write to TTBR0 has completed. */                                 \
>      flush_xen_text_tlb()
>  
> +static inline lpae_t pte_of_xenaddr(vaddr_t va)
> +{
> +    paddr_t ma = va + phys_offset;
> +    unsigned long mfn = ma >> PAGE_SHIFT;
> +    return mfn_to_xen_entry(mfn);
> +}
> +
>  /* Boot-time pagetable setup.
>   * Changes here may need matching changes in head.S */
>  void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> @@ -387,8 +394,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      flush_xen_text_tlb();
>  
>      /* Link in the fixmap pagetable */
> -    pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
> -                           >> PAGE_SHIFT);
> +    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
>      pte.pt.table = 1;
>      write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
>      /*
> @@ -415,8 +421,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>          write_pte(xen_xenmap + i, pte);
>          /* No flush required here as page table is not hooked in yet. */
>      }
> -    pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
> -                           >> PAGE_SHIFT);
> +    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
>      pte.pt.table = 1;
>      write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
>      /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn
  2013-06-18 13:26 ` [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn Ian Campbell
@ 2013-06-20 16:33   ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:33 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> The current implementation is nonsense since the xenheap and the xen
> text/data/etc mappings are nowhere near each other.
> 
> This is only actually used by the page offlining code, which isn't active on
> ARM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/include/asm-arm/mm.h |    4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 5e7c5a3..baaf9c3 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -123,7 +123,9 @@ extern unsigned long xenheap_virt_end;
>      unsigned long _mfn = (mfn);                                 \
>      (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
>  })
> -#define is_xen_fixed_mfn(mfn) is_xen_heap_mfn(mfn)
> +#define is_xen_fixed_mfn(mfn)                                   \
> +    ((((mfn) << PAGE_SHIFT) >= virt_to_maddr(&_start)) &&       \
> +     (((mfn) << PAGE_SHIFT) <= virt_to_maddr(&_end)))
>  
>  #define page_get_owner(_p)    (_p)->v.inuse.domain
>  #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer
  2013-06-18 13:26 ` [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer Ian Campbell
@ 2013-06-20 16:34   ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:34 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> This seems to be expected by common code which passes both pointers and
> unsigned long as virtual addresses. The latter case in particular is in
> init_node_heap() under a DIRECTMAP_VIRT_END #ifdef, which is why it hasn't
> affected us yet (but weill in a subsequent patch).
                         ^

> The new prototypes match the x86 versions apart from using vaddr_t instead of
> unsigned long.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/include/asm-arm/mm.h |    7 ++++---
>  1 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index baaf9c3..7aca836 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -208,11 +208,12 @@ static inline void __iomem *ioremap_wc(paddr_t start, size_t len)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
>  
>  
> -static inline paddr_t virt_to_maddr(const void *va)
> +static inline paddr_t __virt_to_maddr(vaddr_t va)
>  {
> -    uint64_t par = va_to_par((vaddr_t)va);
> -    return (par & PADDR_MASK & PAGE_MASK) | ((vaddr_t) va & ~PAGE_MASK);
> +    uint64_t par = va_to_par(va);
> +    return (par & PADDR_MASK & PAGE_MASK) | (va & ~PAGE_MASK);
>  }
> +#define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
>  
>  static inline void *maddr_to_virt(paddr_t ma)
>  {
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86
  2013-06-18 13:26 ` [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86 Ian Campbell
  2013-06-18 16:01   ` Jan Beulich
@ 2013-06-20 16:35   ` Stefano Stabellini
  1 sibling, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:35 UTC (permalink / raw)
  To: Ian Campbell
  Cc: keir, stefano.stabellini, tim, xen-devel, julien.grall, jbeulich

On Tue, 18 Jun 2013, Ian Campbell wrote:
> I'm going to want to disable this for 64 bit ARM.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: keir@xen.org
> Cc: jbeulich@suse.com

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/common/page_alloc.c      |    2 +-
>  xen/include/asm-arm/config.h |    1 +
>  2 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 2162ef1..24751ff 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1305,7 +1305,7 @@ void __init scrub_heap_pages(void)
>   * XEN-HEAP SUB-ALLOCATOR
>   */
>  
> -#if !defined(CONFIG_X86)
> +#if defined(CONFIG_SEPARATE_XENHEAP)
>  
>  void init_xenheap_pages(paddr_t ps, paddr_t pe)
>  {
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 7e5daa0..fb9e93c 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -36,6 +36,7 @@
>  #define CONFIG_SMP 1
>  
>  #define CONFIG_DOMAIN_PAGE 1
> +#define CONFIG_SEPARATE_XENHEAP 1
>  
>  #define CONFIG_VIDEO 1
>  
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 04/10] xen: arm: refactor create_mappings
  2013-06-18 13:26 ` [PATCH 04/10] xen: arm: refactor create_mappings Ian Campbell
@ 2013-06-20 16:40   ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:40 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> Allow it to work on contiguous second level tables mapping an arbitrary region
> of memory. Rename it to create_32mb_mappings.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/mm.c |   19 +++++++++++--------
>  1 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index d57e44e..0698c75 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -511,20 +511,23 @@ void __cpuinit mmu_init_secondary_cpu(void)
>  }
>  
>  /* Create Xen's mappings of memory.
> - * Base and virt must be 32MB aligned and size a multiple of 32MB. */
> -static void __init create_mappings(unsigned long virt,
> -                                   unsigned long base_mfn,
> -                                   unsigned long nr_mfns)
> + * Base and virt must be 32MB aligned and size a multiple of 32MB.
> + * second must be a contiguous set of second level page tables
> + * covering the region starting at virt_offset. */
> +static void __init create_32mb_mappings(lpae_t *second,
> +                                        unsigned long virt_offset,
> +                                        unsigned long base_mfn,
> +                                        unsigned long nr_mfns)
>  {
>      unsigned long i, count;
>      lpae_t pte, *p;
>  
> -    ASSERT(!((virt >> PAGE_SHIFT) % (16 * LPAE_ENTRIES)));
> +    ASSERT(!((virt_offset >> PAGE_SHIFT) % (16 * LPAE_ENTRIES)));
>      ASSERT(!(base_mfn % (16 * LPAE_ENTRIES)));
>      ASSERT(!(nr_mfns % (16 * LPAE_ENTRIES)));
>  
>      count = nr_mfns / LPAE_ENTRIES;
> -    p = xen_second + second_linear_offset(virt);
> +    p = second + second_linear_offset(virt_offset);
>      pte = mfn_to_xen_entry(base_mfn);
>      pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
>      for ( i = 0; i < count; i++ )
> @@ -539,7 +542,7 @@ static void __init create_mappings(unsigned long virt,
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
>  {
> -    create_mappings(XENHEAP_VIRT_START, base_mfn, nr_mfns);
> +    create_32mb_mappings(xen_second, XENHEAP_VIRT_START, base_mfn, nr_mfns);
>  
>      /* Record where the xenheap is, for translation routines. */
>      xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
> @@ -559,7 +562,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      /* Round up to 32M boundary */
>      frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
> -    create_mappings(FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> +    create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
>  
>      memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
>      memset(&frame_table[nr_pages], -1,
> -- 
> 1.7.2.5
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 14:13     ` Ian Campbell
@ 2013-06-20 16:43       ` Stefano Stabellini
  2013-07-22 22:23         ` Ian Campbell
  0 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-20 16:43 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Julien Grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> On Tue, 2013-06-18 at 15:05 +0100, Julien Grall wrote:
> 
> Please could you trim your quotes.
> 
> > On 06/18/2013 02:26 PM, Ian Campbell wrote:
> > [...]
> > > +#ifdef CONFIG_ARM_32
> > >  static inline void *maddr_to_virt(paddr_t ma)
> > >  {
> > >      ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
> > >      ma -= pfn_to_paddr(xenheap_mfn_start);
> > >      return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
> > >  }
> > > +#else
> > > +static inline void *maddr_to_virt(paddr_t ma)
> > > +{
> > > +    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
> > > +    ma -= pfn_to_paddr(xenheap_mfn_start);
> > > +    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
> > > +}
> > > +#endif
> > 
> > 
> > I'm curious, this is not specific to this patch, why don't you split the
> > file in 3 parts (common, arm32, arm64)?
> 
> In this specific instance I figured it was better to have this one
> function which differed alongside all the other foo_to_bar
> (virt_to_page, gvirt_to_maddr etc) which don't.
> 
> > From what I've seen, we use the both way on Xen. I think it's clearer to
> > divide the code in different headers/sources files. Sometimes defines
> > are hard to follow.
> 
> I'm not sure that putting them in different files makes them easier to
> follow.
> 
> I'm not really sure what is best. Obviously if something is totally
> different on the two different architectures (e.g. sysregs, tlbs etc)
> then splitting them up makes sense. When it's just one function among
> many which differs then I'm not so sure splitting is useful.
> 
> I don't have a strong opinion though.

From a quick look it seems to me that most functions have #ifdefs now.
The code would probably improve by having separate arm32 and
arm64 functions.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
  2013-06-18 14:05   ` Julien Grall
@ 2013-06-21 14:10   ` Stefano Stabellini
  2013-06-25 11:04     ` Ian Campbell
  2013-06-25 15:39   ` Julien Grall
  2 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-21 14:10 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, stefano.stabellini, tim, xen-devel

On Tue, 18 Jun 2013, Ian Campbell wrote:
> We have plenty of virtual address space so we can avoid needing to map and
> unmap pages all the time.
> 
> A totally arbitrarily chosen 32GB frame table leads to support for 5TB of RAM.
> I haven't tested with anything near that amount of RAM though. There is plenty
> of room to expand further when that becomes necessary.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/mm.c            |  155 ++++++++++++++++++++++++++++++++++--------
>  xen/arch/arm/setup.c         |   92 +++++++++++++++++++++++++
>  xen/include/asm-arm/config.h |   96 +++++++++++++++++++-------
>  xen/include/asm-arm/mm.h     |   16 ++++
>  4 files changed, 307 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 5ad687f..35ef56d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -37,6 +37,7 @@
>  #include <public/memory.h>
>  #include <xen/sched.h>
>  #include <xen/vmap.h>
> +#include <asm/early_printk.h>
>  #include <xsm/xsm.h>
>  #include <xen/pfn.h>
>  
> @@ -48,6 +49,14 @@ struct domain *dom_xen, *dom_io, *dom_cow;
>  lpae_t boot_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>  #ifdef CONFIG_ARM_64
>  lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The first page of the first level mapping of the xenheap. The
> + * subsequent xenheap first level pages are dynamically allocated, but
> + * we need this one to bootstrap ourselves. */

Why? Can't we dynamically allocate the first page of the first level
mapping too?


> +lpae_t xenheap_first_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The zeroeth level slot which uses xenheap_first_first. Used because
> + * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
> + * valid for a non-xenheap mapping. */
> +static __initdata int xenheap_first_first_slot = -1;
>  #endif
>  
>  /*
> @@ -57,6 +66,9 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>   * xen_second, xen_fixmap and xen_xenmap are shared between all PCPUs.
>   */
>  
> +#ifdef CONFIG_ARM_64
> +#define THIS_CPU_PGTABLE boot_pgtable
> +#else
>  /* Per-CPU pagetable pages */
>  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> @@ -65,6 +77,7 @@ static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
>   * the second level pagetables which map the domheap region
>   * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>  static DEFINE_PER_CPU(lpae_t *, xen_dommap);
> +#endif
>  
>  /* Common pagetable leaves */
>  /* Second level page tables.
> @@ -73,10 +86,16 @@ static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>   * addresses from 0 to 0x7fffffff. Offsets into it are calculated
>   * with second_linear_offset(), not second_table_offset().
>   *
> - * Addresses 0x80000000 to 0xffffffff are covered by the per-cpu
> - * xen_domheap mappings described above. However we allocate 4 pages
> + * On 32bit addresses 0x80000000 to 0xffffffff are covered by the
> + * per-cpu xen_domheap mappings described above. We allocate 4 pages
>   * here for use in the boot page tables and the second two pages
>   * become the boot CPUs xen_dommap pages.
> + *
> + * On 64bit addresses 0x80000000 to 0xffffffff are unused. However we
> + * allocate 4 pages here for use while relocating Xen, which currently
> + * expects a second level page to exist for all addresses in the first
> + * 4GB. We need to keep these extra mappings in place for secondary CPU
> + * bring up too. For now we just leave them forever.
>   */
>  lpae_t xen_second[LPAE_ENTRIES*4] __attribute__((__aligned__(4096*4)));
>  /* First level page table used for fixmap */
> @@ -92,7 +111,7 @@ uint64_t boot_ttbr;
>  static paddr_t phys_offset;
>  
>  /* Limits of the Xen heap */
> -unsigned long xenheap_mfn_start __read_mostly;
> +unsigned long xenheap_mfn_start __read_mostly = ~0UL;
>  unsigned long xenheap_mfn_end __read_mostly;
>  unsigned long xenheap_virt_end __read_mostly;
>  
> @@ -112,7 +131,9 @@ static inline void check_memory_layout_alignment_constraints(void) {
>      BUILD_BUG_ON(BOOT_MISC_VIRT_START & ~SECOND_MASK);
>      /* 1GB aligned regions */
>      BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
> +#ifdef CONFIG_DOMAIN_PAGE
>      BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
> +#endif
>  }
>  
>  void dump_pt_walk(lpae_t *first, paddr_t addr)
> @@ -180,6 +201,7 @@ void clear_fixmap(unsigned map)
>      flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
> +#ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(unsigned long mfn)
>  {
>      return vmap(&mfn, 1);
>
> @@ -284,6 +306,7 @@ unsigned long domain_page_map_to_mfn(const void *va)
>  
>      return map[slot].pt.base + offset;
>  }
> +#endif
  
these are exported functions, shouldn't we implement another version of
them for the non-CONFIG_DOMAIN_PAGE case?


>  void __init arch_init_memory(void)
>  {
> @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      /* Flush everything after setting WXN bit. */
>      flush_xen_text_tlb();
>  
> +#ifdef CONFIG_ARM_32
>      per_cpu(xen_pgtable, 0) = boot_pgtable;
>      per_cpu(xen_dommap, 0) = xen_second +
>          second_linear_offset(DOMHEAP_VIRT_START);
> @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>      flush_xen_dcache_va_range(this_cpu(xen_dommap),
>                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> +#endif
>  }
>  
>  int init_secondary_pagetables(int cpu)
>  {
> -    lpae_t *root, *first, *domheap, pte;
> -    int i;
> -
> -    root = alloc_xenheap_page();
>  #ifdef CONFIG_ARM_64
> -    first = alloc_xenheap_page();
> +    /* All CPUs share a single page table on 64 bit */
> +    return 0;

I would just ifdef the entire function


>  #else
> -    first = root; /* root == first level on 32-bit 3-level trie */
> -#endif
> +    lpae_t *first, *domheap, pte;
> +    int i;
> +
> +    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
>      domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
>  
> -    if ( root == NULL || domheap == NULL || first == NULL )
> +    if ( domheap == NULL || first == NULL )
>      {
>          printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
>          free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
> -#ifdef CONFIG_ARM_64
>          free_xenheap_page(first);
> -#endif
> -        free_xenheap_page(root);
>          return -ENOMEM;
>      }
>  
>      /* Initialise root pagetable from root of boot tables */
> -    memcpy(root, boot_pgtable, PAGE_SIZE);
> -
> -#ifdef CONFIG_ARM_64
> -    /* Initialise first pagetable from first level of boot tables, and
> -     * hook into the new root. */
> -    memcpy(first, boot_first, PAGE_SIZE);
> -    pte = mfn_to_xen_entry(virt_to_mfn(first));
> -    pte.pt.table = 1;
> -    write_pte(root, pte);
> -#endif
> +    memcpy(first, boot_pgtable, PAGE_SIZE);
>  
>      /* Ensure the domheap has no stray mappings */
>      memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> @@ -490,16 +502,14 @@ int init_secondary_pagetables(int cpu)
>          write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
>      }
>  
> -    flush_xen_dcache_va_range(root, PAGE_SIZE);
> -#ifdef CONFIG_ARM_64
>      flush_xen_dcache_va_range(first, PAGE_SIZE);
> -#endif
>      flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>  
> -    per_cpu(xen_pgtable, cpu) = root;
> +    per_cpu(xen_pgtable, cpu) = first;
>      per_cpu(xen_dommap, cpu) = domheap;
>  
>      return 0;
> +#endif
>  }
>  
>  /* MMU setup for secondary CPUS (which already have paging enabled) */
> @@ -544,6 +554,7 @@ static void __init create_32mb_mappings(lpae_t *second,
>      flush_xen_data_tlb();
>  }
>  
> +#ifdef CONFIG_ARM_32
>  /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. */
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
> @@ -556,19 +567,107 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>      xenheap_mfn_end = base_mfn + nr_mfns;
>  }
>  
> +#else /* CONFIG_ARM_64 */
> +
> +void __init setup_xenheap_mappings(unsigned long base_mfn,
> +                                   unsigned long nr_mfns)
> +{
> +    lpae_t *first, pte;
> +    unsigned long offset, end_mfn;
> +    vaddr_t vaddr;
> +
> +    /* First call sets the xenheap physical offset. */
> +    if ( xenheap_mfn_start == ~0UL )
> +        xenheap_mfn_start = base_mfn;
> +
> +    if ( base_mfn < xenheap_mfn_start )
> +        early_panic("cannot add xenheap mapping at %lx below heap start %lx\n",
> +                    base_mfn, xenheap_mfn_start);
> +
> +    end_mfn = base_mfn + nr_mfns;
> +
> +    /* Align to previous 1GB boundary */
> +    base_mfn &= ~FIRST_MASK;
> +
> +    offset = base_mfn - xenheap_mfn_start;
> +    vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;
> +
> +    while ( base_mfn < end_mfn )
> +    {
> +        int slot = zeroeth_table_offset(vaddr);
> +        lpae_t *p = &boot_pgtable[slot];
> +
> +        if ( p->pt.valid )
> +        {
> +            /* mfn_to_virt is not valid on the first first-level
> +             * page's mfn, since it is not within the xenheap. */
> +            first = slot == xenheap_first_first_slot ?
> +                xenheap_first_first : mfn_to_virt(p->pt.base);
> +        }
> +        else if ( xenheap_first_first_slot == -1)
> +        {
> +            /* Use xenheap_first_first to bootstrap the mappings */
> +            first = xenheap_first_first;
> +
> +            pte = pte_of_xenaddr((vaddr_t)xenheap_first_first);
> +            pte.pt.table = 1;
> +            write_pte(p, pte);
> +
> +            xenheap_first_first_slot = slot;

I take it that at this point alloc_boot_pages wouldn't work?


> +        }
> +        else
> +        {
> +            unsigned long first_mfn = alloc_boot_pages(1, 1);
> +            pte = mfn_to_xen_entry(first_mfn);
> +            pte.pt.table = 1;
> +            write_pte(p, pte);
> +            first = mfn_to_virt(first_mfn);
> +        }
> +
> +        pte = mfn_to_xen_entry(base_mfn);
> +        //pte.pt.contig = /* XXX need to think about the logic */

Two different comment styles in the same line.
Setting the contig bit here means that we assume that it's likely that
the first 16 adjacent entries in this table point to a contiguous output
address range. I think that should be the case, unless the memory banks
are noncontiguous.


> +        write_pte(&first[first_table_offset(vaddr)], pte);
> +
> +        base_mfn += FIRST_SIZE>>PAGE_SHIFT;
> +        vaddr += FIRST_SIZE;
> +    }
> +
> +    flush_xen_data_tlb();
> +}
> +#endif
> +
>  /* Map a frame table to cover physical addresses ps through pe */
>  void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>  {
>      unsigned long nr_pages = (pe - ps) >> PAGE_SHIFT;
>      unsigned long frametable_size = nr_pages * sizeof(struct page_info);
>      unsigned long base_mfn;
> +#ifdef CONFIG_ARM_64
> +    lpae_t *second, pte;
> +    unsigned long nr_second, second_base;
> +    int i;
> +#endif
>  
>      frametable_base_mfn = ps >> PAGE_SHIFT;
>  
>      /* Round up to 32M boundary */
>      frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
> +
> +#ifdef CONFIG_ARM_64
> +    nr_second = frametable_size >> SECOND_SHIFT;
> +    second_base = alloc_boot_pages(nr_second, 1);
> +    second = mfn_to_virt(second_base);
> +    for ( i = 0; i < nr_second; i++ )
> +    {
> +        pte = mfn_to_xen_entry(second_base + i);
> +        pte.pt.table = 1;
> +        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> +    }
> +    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
> +#else
>      create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> +#endif

Is it possible to unify the arm32 and arm64 code paths here?
Being able to handle a frametable that spans multiple second level
pagetable pages shouldn't be a problem for arm32.


>      memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
>      memset(&frame_table[nr_pages], -1,
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index da2a734..3efa063 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> +#else /* CONFIG_ARM_64 */
> +static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
> +{
> +    paddr_t ram_start = ~0;
> +    paddr_t ram_end = 0;
> +    int bank;
> +    unsigned long  xenheap_pages = 0;
> +    unsigned long dtb_pages;
> +
> +    /*
> +     * TODO: only using the first RAM bank for now.  The heaps and the
> +     * frame table assume RAM is physically contiguous.
> +     */
> +    total_pages = 0;
> +    for ( bank = 0 ; bank < early_info.mem.nr_banks; bank++ )
> +    {
> +        paddr_t bank_start = early_info.mem.bank[bank].start;
> +        paddr_t bank_size  = early_info.mem.bank[bank].size;
> +        paddr_t  bank_end = bank_start + bank_size;
> +        unsigned long bank_pages = bank_size >> PAGE_SHIFT;
> +        paddr_t s, e;
> +
> +        total_pages += bank_pages;
> +
> +        if ( bank_start < ram_start )
> +            ram_start = bank_start;
> +        if ( bank_end > ram_end )
> +            ram_end = bank_end;
> +
> +        /*
> +         * Locate the xenheap using these constraints:
> +         *
> +         *  - must be 32 MiB aligned
> +         *  - must not include Xen itself or the boot modules
> +         *  - must be at most 1 GiB
> +         *  - must be at least 128M
> +         *
> +         * We try to allocate the largest xenheap possible within these
> +         * constraints.
> +         */

this comment is not correct on ARM64, is it?


> +        xenheap_pages += (bank_size >> PAGE_SHIFT);
> +
> +        /* XXX we assume that the ram regions are ordered */
> +        s = bank_start;
> +        while ( s < bank_end )
> +        {
> +            paddr_t n = bank_end;
> +
> +            e = next_module(s, &n);
> +
> +            if ( e == ~(paddr_t)0 )
> +            {
> +                e = n = bank_end;
> +            }
> +
> +            setup_xenheap_mappings(s>>PAGE_SHIFT, (e-s)>>PAGE_SHIFT);
> +
> +            xenheap_mfn_end = e;
> +
> +            init_boot_pages(s, e);
> +            s = n;
> +        }
> +    }
> +
> +    xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
> +    xenheap_mfn_start = ram_start >> PAGE_SHIFT;
> +    xenheap_mfn_end = ram_end >> PAGE_SHIFT;
> +    xenheap_max_mfn(xenheap_mfn_end);
> +
> +    /*
> +     * Need enough mapped pages for copying the DTB.
> +     *
> +     * TODO: The DTB (and other payloads) are assumed to be towards
> +     * the start of RAM.
> +     */
> +    dtb_pages = (dtb_size + PAGE_SIZE-1) >> PAGE_SHIFT;
> +
> +    /*
> +     * Copy the DTB.
> +     *
> +     * TODO: handle other payloads too.
> +     */
> +    device_tree_flattened = mfn_to_virt(alloc_boot_pages(dtb_pages, 1));
> +    copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
> +
> +    setup_frametable_mappings(ram_start, ram_end);
> +    max_page = PFN_DOWN(ram_end);
> +
> +    end_boot_allocator();
> +}
> +#endif
>  
>  size_t __read_mostly cacheline_bytes;
>  
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index fb9e93c..259d4c6 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -35,9 +35,6 @@
>  
>  #define CONFIG_SMP 1
>  
> -#define CONFIG_DOMAIN_PAGE 1
> -#define CONFIG_SEPARATE_XENHEAP 1
> -
>  #define CONFIG_VIDEO 1
>  
>  #define OPT_CONSOLE_STR "dtuart"
> @@ -76,38 +73,89 @@
>  #include <xen/const.h>
>  
>  /*
> - * Memory layout:
> - *  0  -   2M   Unmapped
> - *  2M -   4M   Xen text, data, bss
> - *  4M -   6M   Fixmap: special-purpose 4K mapping slots
> - *  6M  -  8M   Early boot misc (see below)
> - *
> - * 32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
> - * 256M -  1G   VMAP: ioremap and early_ioremap use this virtual address
> - *                    space
> - *
> - *  1G -   2G   Xenheap: always-mapped memory
> - *  2G -   4G   Domheap: on-demand-mapped
> + * Common ARM32 and ARM64 layout:
> + *   0  -   2M   Unmapped
> + *   2M -   4M   Xen text, data, bss
> + *   4M -   6M   Fixmap: special-purpose 4K mapping slots
> + *   6M -   8M   Early boot misc (see below)
>   *
>   * The early boot misc area is used:
>   *   - in head.S for the DTB for device_tree_early_init().
>   *   - in setup_pagetables() when relocating Xen.
> + *
> + * ARM32 layout:
> + *   0  -   8M   <COMMON>
> + *
> + *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
> + * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
> + *                    space
> + *
> + *   1G -   2G   Xenheap: always-mapped memory
> + *   2G -   4G   Domheap: on-demand-mapped
> + *
> + * ARM64 layout:
> + * 0x0000000000000000 - 0x0000007fffffffff (512GB, L0 slot [0])
> + *   0  -   8M   <COMMON>
> + *
> + *   1G -   2G   VMAP: ioremap and early_ioremap
> + *
> + *  32G -  64G   Frametable: 24 bytes per page for 5.3TB of RAM
> + *
> + * 0x0000008000000000 - 0x00007fffffffffff (127.5TB, L0 slots [1..255])
> + *  Unused
> + *
> + * 0x0000800000000000 - 0x000084ffffffffff (5TB, L0 slots [256..265])
> + *  1:1 mapping of RAM
> + *
> + * 0x0000850000000000 - 0x0000ffffffffffff (123TB, L0 slots [266..511])
> + *  Unused
>   */
>  
> -#define XEN_VIRT_START         _AC(0x00200000,UL)
> -#define FIXMAP_ADDR(n)        (_AC(0x00400000,UL) + (n) * PAGE_SIZE)
> -#define BOOT_MISC_VIRT_START   _AC(0x00600000,UL)
> -#define FRAMETABLE_VIRT_START  _AC(0x02000000,UL)
> -#define VMAP_VIRT_START        _AC(0x10000000,UL)
> -#define XENHEAP_VIRT_START     _AC(0x40000000,UL)
> -#define DOMHEAP_VIRT_START     _AC(0x80000000,UL)
> -#define DOMHEAP_VIRT_END       _AC(0xffffffff,UL)
> +#define XEN_VIRT_START         _AT(vaddr_t,0x00200000)
> +#define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
> +#define BOOT_MISC_VIRT_START   _AT(vaddr_t,0x00600000)
>  
> -#define VMAP_VIRT_END          XENHEAP_VIRT_START
>  #define HYPERVISOR_VIRT_START  XEN_VIRT_START
>  
> +#ifdef CONFIG_ARM_32
> +
> +#define CONFIG_DOMAIN_PAGE 1
> +#define CONFIG_SEPARATE_XENHEAP 1
> +
> +#define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
> +#define VMAP_VIRT_START  _AT(vaddr_t,0x10000000)
> +#define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
> +#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
> +#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
> +#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
> +
> +#define VMAP_VIRT_END    XENHEAP_VIRT_START
> +
>  #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
>  
> +#else /* ARM_64 */
> +
> +#define SLOT0_ENTRY_BITS  39
> +#define SLOT0(slot) (_AT(vaddr_t,slot) << SLOT0_ENTRY_BITS)
> +#define SLOT0_ENTRY_SIZE  SLOT0(1)
> +#define GB(_gb)     (_AC(_gb, UL) << 30)
> +
> +#define VMAP_VIRT_START  GB(1)
> +#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1) - 1)
> +
> +#define FRAMETABLE_VIRT_START  GB(32)
> +#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + GB(32) - 1)
> +
> +#define DIRECTMAP_VIRT_START   SLOT0(256)
> +#define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (266-256))
> +#define DIRECTMAP_VIRT_END     (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE - 1)
> +
> +#define XENHEAP_VIRT_START     DIRECTMAP_VIRT_START
> +
> +#define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
> +
> +#endif
> +
>  /* Number of domheap pagetable pages required at the second level (2MB mappings) */
>  #define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
>  
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 7aca836..27284d0 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -118,11 +118,18 @@ struct page_info
>  extern unsigned long xenheap_mfn_start, xenheap_mfn_end;
>  extern unsigned long xenheap_virt_end;
>  
> +#ifdef CONFIG_ARM_32
>  #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
>  #define is_xen_heap_mfn(mfn) ({                                 \
>      unsigned long _mfn = (mfn);                                 \
>      (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
>  })

I take it that this check wouldn't work for arm64 because it would
always return true?


> +#else
> +#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> +#define is_xen_heap_mfn(mfn) \
> +    (mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> +#endif

Looks like the ARM64 version of these two functions should work for
arm32 too.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root
  2013-06-20 16:30   ` Stefano Stabellini
@ 2013-06-25 10:49     ` Ian Campbell
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-25 10:49 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, tim, xen-devel

On Thu, 2013-06-20 at 17:30 +0100, Stefano Stabellini wrote:
> On Tue, 18 Jun 2013, Ian Campbell wrote:
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Honestly I don't see this as a great improvement

In a later patch the 32 and 64 bit versions will start to differ,
because 64-bit page tables are no longer per-PCPU.

I suppose I could retain the per-cpu var and just initialise it the same
on all processors, but that seems a bit unnecessary.

> 
> 
> >  xen/arch/arm/mm.c |    5 +++--
> >  1 files changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index 0698c75..773fbf6 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -60,6 +60,7 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> >  /* Per-CPU pagetable pages */
> >  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
> >  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> > +#define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
> >  /* xen_dommap == pages used by map_domain_page, these pages contain
> >   * the second level pagetables which map the domheap region
> >   * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
> > @@ -147,7 +148,7 @@ done:
> >  void dump_hyp_walk(vaddr_t addr)
> >  {
> >      uint64_t ttbr = READ_SYSREG64(TTBR0_EL2);
> > -    lpae_t *pgtable = this_cpu(xen_pgtable);
> > +    lpae_t *pgtable = THIS_CPU_PGTABLE;
> >  
> >      printk("Walking Hypervisor VA 0x%"PRIvaddr" "
> >             "on CPU%d via TTBR 0x%016"PRIx64"\n",
> > @@ -502,7 +503,7 @@ void __cpuinit mmu_init_secondary_cpu(void)
> >      uint64_t ttbr;
> >  
> >      /* Change to this CPU's pagetables */
> > -    ttbr = (uintptr_t)virt_to_maddr(this_cpu(xen_pgtable));
> > +    ttbr = (uintptr_t)virt_to_maddr(THIS_CPU_PGTABLE);
> >      WRITE_TTBR(ttbr);
> >  
> >      /* From now on, no mapping may be both writable and executable. */
> > -- 
> > 1.7.2.5
> > 


* Re: [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper
  2013-06-20 16:32   ` Stefano Stabellini
@ 2013-06-25 10:50     ` Ian Campbell
  2013-06-26 18:21       ` Stefano Stabellini
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-25 10:50 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, tim, xen-devel

On Thu, 2013-06-20 at 17:32 +0100, Stefano Stabellini wrote:
> On Tue, 18 Jun 2013, Ian Campbell wrote:
> > The open coded version is pretty ugly, not helped by rigid enforcement of an
> > 80 character line length.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Makes sense. Too bad we can't have pte_of_xenaddr in mm.h because
> phys_offset is static in mm.c.

Can change that if we think there is a user, or better when the user
arrives. Or do you think there is existing code which could use this?

> In any case
> 
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> 
> >  xen/arch/arm/mm.c |   13 +++++++++----
> >  1 files changed, 9 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index 773fbf6..5ad687f 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -331,6 +331,13 @@ void __cpuinit setup_virt_paging(void)
> >       * write to TTBR0 has completed. */                                 \
> >      flush_xen_text_tlb()
> >  
> > +static inline lpae_t pte_of_xenaddr(vaddr_t va)
> > +{
> > +    paddr_t ma = va + phys_offset;
> > +    unsigned long mfn = ma >> PAGE_SHIFT;
> > +    return mfn_to_xen_entry(mfn);
> > +}
> > +
> >  /* Boot-time pagetable setup.
> >   * Changes here may need matching changes in head.S */
> >  void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > @@ -387,8 +394,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> >      flush_xen_text_tlb();
> >  
> >      /* Link in the fixmap pagetable */
> > -    pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
> > -                           >> PAGE_SHIFT);
> > +    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
> >      pte.pt.table = 1;
> >      write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
> >      /*
> > @@ -415,8 +421,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> >          write_pte(xen_xenmap + i, pte);
> >          /* No flush required here as page table is not hooked in yet. */
> >      }
> > -    pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
> > -                           >> PAGE_SHIFT);
> > +    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
> >      pte.pt.table = 1;
> >      write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
> >      /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
> > -- 
> > 1.7.2.5
> > 


* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-21 14:10   ` Stefano Stabellini
@ 2013-06-25 11:04     ` Ian Campbell
  2013-06-26 18:32       ` Stefano Stabellini
  0 siblings, 1 reply; 33+ messages in thread
From: Ian Campbell @ 2013-06-25 11:04 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, tim, xen-devel

On Fri, 2013-06-21 at 15:10 +0100, Stefano Stabellini wrote:
> On Tue, 18 Jun 2013, Ian Campbell wrote:
> > We have plenty of virtual address space so we can avoid needing to map and
> > unmap pages all the time.
> > 
> > A totally arbitrarily chosen 32GB frame table leads to support for 5TB of RAM.
> > I haven't tested with anything near that amount of RAM though. There is plenty
> > of room to expand further when that becomes necessary.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> >  xen/arch/arm/mm.c            |  155 ++++++++++++++++++++++++++++++++++--------
> >  xen/arch/arm/setup.c         |   92 +++++++++++++++++++++++++
> >  xen/include/asm-arm/config.h |   96 +++++++++++++++++++-------
> >  xen/include/asm-arm/mm.h     |   16 ++++
> >  4 files changed, 307 insertions(+), 52 deletions(-)
> > 
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index 5ad687f..35ef56d 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -37,6 +37,7 @@
> >  #include <public/memory.h>
> >  #include <xen/sched.h>
> >  #include <xen/vmap.h>
> > +#include <asm/early_printk.h>
> >  #include <xsm/xsm.h>
> >  #include <xen/pfn.h>
> >  
> > @@ -48,6 +49,14 @@ struct domain *dom_xen, *dom_io, *dom_cow;
> >  lpae_t boot_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> >  #ifdef CONFIG_ARM_64
> >  lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> > +/* The first page of the first level mapping of the xenheap. The
> > + * subsequent xenheap first level pages are dynamically allocated, but
> > + * we need this one to bootstrap ourselves. */
> 
> Why? Can't we dynamically allocate the first page of the first level
> mapping too?

We need it to create the mappings used by the boot allocator, IOW we
need something to bootstrap ourselves. It's a bit hoop jumpy but I think
it is needed.

> > @@ -180,6 +201,7 @@ void clear_fixmap(unsigned map)
> >      flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
> >  }
> >  
> > +#ifdef CONFIG_DOMAIN_PAGE
> >  void *map_domain_page_global(unsigned long mfn)
> >  {
> >      return vmap(&mfn, 1);
> >
> > @@ -284,6 +306,7 @@ unsigned long domain_page_map_to_mfn(const void *va)
> >  
> >      return map[slot].pt.base + offset;
> >  }
> > +#endif
>   
> these are exported functions, shouldn't we implement another version of
> them for the non-CONFIG_DOMAIN_PAGE case?

In the !CONFIG_DOMAIN_PAGE case these come from the common code in
xen/include/xen/domain_page.h, so we don't need to provide our own.

> 
> 
> >  void __init arch_init_memory(void)
> >  {
> > @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> >      /* Flush everything after setting WXN bit. */
> >      flush_xen_text_tlb();
> >  
> > +#ifdef CONFIG_ARM_32
> >      per_cpu(xen_pgtable, 0) = boot_pgtable;
> >      per_cpu(xen_dommap, 0) = xen_second +
> >          second_linear_offset(DOMHEAP_VIRT_START);
> > @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> >      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> >      flush_xen_dcache_va_range(this_cpu(xen_dommap),
> >                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > +#endif
> >  }
> >  
> >  int init_secondary_pagetables(int cpu)
> >  {
> > -    lpae_t *root, *first, *domheap, pte;
> > -    int i;
> > -
> > -    root = alloc_xenheap_page();
> >  #ifdef CONFIG_ARM_64
> > -    first = alloc_xenheap_page();
> > +    /* All CPUs share a single page table on 64 bit */
> > +    return 0;
> 
> I would just ifdef the entire function

I expect you don't mean for me to also ifdef the call? I preferred to
write the prototype only once so the two didn't get out of sync in the
future, which can be an issue with the pattern of stubbing out functions
with a separate static inline. I can change that if you have a strong
preference though.

> > +void __init setup_xenheap_mappings(unsigned long base_mfn,
> > +                                   unsigned long nr_mfns)
> > +{
> > +    lpae_t *first, pte;
> > +    unsigned long offset, end_mfn;
> > +    vaddr_t vaddr;
> > +
> > +    /* First call sets the xenheap physical offset. */
> > +    if ( xenheap_mfn_start == ~0UL )
> > +        xenheap_mfn_start = base_mfn;
> > +
> > +    if ( base_mfn < xenheap_mfn_start )
> > +        early_panic("cannot add xenheap mapping at %lx below heap start %lx\n",
> > +                    base_mfn, xenheap_mfn_start);
> > +
> > +    end_mfn = base_mfn + nr_mfns;
> > +
> > +    /* Align to previous 1GB boundary */
> > +    base_mfn &= ~FIRST_MASK;
> > +
> > +    offset = base_mfn - xenheap_mfn_start;
> > +    vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;
> > +
> > +    while ( base_mfn < end_mfn )
> > +    {
> > +        int slot = zeroeth_table_offset(vaddr);
> > +        lpae_t *p = &boot_pgtable[slot];
> > +
> > +        if ( p->pt.valid )
> > +        {
> > +            /* mfn_to_virt is not valid on the first first-level
> > +             * page's mfn, since it is not within the xenheap. */
> > +            first = slot == xenheap_first_first_slot ?
> > +                xenheap_first_first : mfn_to_virt(p->pt.base);
> > +        }
> > +        else if ( xenheap_first_first_slot == -1)
> > +        {
> > +            /* Use xenheap_first_first to bootstrap the mappings */
> > +            first = xenheap_first_first;
> > +
> > +            pte = pte_of_xenaddr((vaddr_t)xenheap_first_first);
> > +            pte.pt.table = 1;
> > +            write_pte(p, pte);
> > +
> > +            xenheap_first_first_slot = slot;
> 
> I take it that at this point alloc_boot_pages wouldn't work?

Correct, we haven't mapped any memory for it to use yet.

> > +        }
> > +        else
> > +        {
> > +            unsigned long first_mfn = alloc_boot_pages(1, 1);
> > +            pte = mfn_to_xen_entry(first_mfn);
> > +            pte.pt.table = 1;
> > +            write_pte(p, pte);
> > +            first = mfn_to_virt(first_mfn);
> > +        }
> > +
> > +        pte = mfn_to_xen_entry(base_mfn);
> > +        //pte.pt.contig = /* XXX need to think about the logic */
> 
> Two different comment styles in the same line.

Yes, I use // for "temporary" comments; I should have just removed that
line, I think.

> Setting the contig bit here means that we assume that it's likely that
> the first 16 adjacent entries in this table point to a contiguous output
> address range. I think that should be the case, unless the memory banks
> are noncontiguous.

Since 16 contiguous entries at this level mean multiples of 16GB of
RAM, I think we do need to handle the non-contig case properly. I'm not
sure what happens if e.g. you have 8GB of RAM and set the contig bit on
those 8 entries but don't fill in the other 8 at all; I suspect you get
either undefined or implementation-defined behaviour. IOW more thought
is needed.

> > +#ifdef CONFIG_ARM_64
> > +    nr_second = frametable_size >> SECOND_SHIFT;
> > +    second_base = alloc_boot_pages(nr_second, 1);
> > +    second = mfn_to_virt(second_base);
> > +    for ( i = 0; i < nr_second; i++ )
> > +    {
> > +        pte = mfn_to_xen_entry(second_base + i);
> > +        pte.pt.table = 1;
> > +        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> > +    }
> > +    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
> > +#else
> >      create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> > +#endif
> 
> Is it possible to unify the arm32 and arm64 code paths here?

I tried to unify them as much as I could.

> Being able to handle a frametable that spans multiple second level
> pagetable pages shouldn't be a problem for arm32.

That's not the issue. The issue is that the frame table lives at
different addresses in the two cases, and in particular 64-bit has its
own first level pages with dedicated second level pages for this region,
while 32-bit has a small number of second level entries under the same
first level entry as various other bits of Xen (including .text, the
vmap and the ioremap region, etc.).

> > +        /*
> > +         * Locate the xenheap using these constraints:
> > +         *
> > +         *  - must be 32 MiB aligned
> > +         *  - must not include Xen itself or the boot modules
> > +         *  - must be at most 1 GiB
> > +         *  - must be at least 128M
> > +         *
> > +         * We try to allocate the largest xenheap possible within these
> > +         * constraints.
> > +         */
> 
> this comment is not correct on ARM64, is it?

No, I'll fix it.

> > +#ifdef CONFIG_ARM_32
> >  #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
> >  #define is_xen_heap_mfn(mfn) ({                                 \
> >      unsigned long _mfn = (mfn);                                 \
> >      (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
> >  })
> 
> I take it that this check wouldn't work for arm64 because it would
> always return true?

Correct.

> > +#else
> > +#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> > +#define is_xen_heap_mfn(mfn) \
> > +    (mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> > +#endif
> 
> Looks like the ARM64 version of these two functions should work for
> arm32 too.

I don't think arm32 sets PGC_xen_heap. It could, I suppose.

Ian.


* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
  2013-06-18 14:05   ` Julien Grall
  2013-06-21 14:10   ` Stefano Stabellini
@ 2013-06-25 15:39   ` Julien Grall
  2013-06-26 11:22     ` Ian Campbell
  2 siblings, 1 reply; 33+ messages in thread
From: Julien Grall @ 2013-06-25 15:39 UTC (permalink / raw)
  To: Ian Campbell; +Cc: stefano.stabellini, tim, xen-devel

On 06/18/2013 02:26 PM, Ian Campbell wrote:

> We have plenty of virtual address space so we can avoid needing to map and
> unmap pages all the time.
> 
> A totally arbitrarily chosen 32GB frame table leads to support for 5TB of RAM.
> I haven't tested with anything near that amount of RAM though. There is plenty
> of room to expand further when that becomes necessary.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
>  xen/arch/arm/mm.c            |  155 ++++++++++++++++++++++++++++++++++--------
>  xen/arch/arm/setup.c         |   92 +++++++++++++++++++++++++
>  xen/include/asm-arm/config.h |   96 +++++++++++++++++++-------
>  xen/include/asm-arm/mm.h     |   16 ++++
>  4 files changed, 307 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 5ad687f..35ef56d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -37,6 +37,7 @@
>  #include <public/memory.h>
>  #include <xen/sched.h>
>  #include <xen/vmap.h>
> +#include <asm/early_printk.h>
>  #include <xsm/xsm.h>
>  #include <xen/pfn.h>
>  
> @@ -48,6 +49,14 @@ struct domain *dom_xen, *dom_io, *dom_cow;
>  lpae_t boot_pgtable[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>  #ifdef CONFIG_ARM_64
>  lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The first page of the first level mapping of the xenheap. The
> + * subsequent xenheap first level pages are dynamically allocated, but
> + * we need this one to bootstrap ourselves. */
> +lpae_t xenheap_first_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
> +/* The zeroeth level slot which uses xenheap_first_first. Used because
> + * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
> + * valid for a non-xenheap mapping. */
> +static __initdata int xenheap_first_first_slot = -1;
>  #endif
>  
>  /*
> @@ -57,6 +66,9 @@ lpae_t boot_first[LPAE_ENTRIES] __attribute__((__aligned__(4096)));
>   * xen_second, xen_fixmap and xen_xenmap are shared between all PCPUs.
>   */
>  
> +#ifdef CONFIG_ARM_64
> +#define THIS_CPU_PGTABLE boot_pgtable
> +#else
>  /* Per-CPU pagetable pages */
>  /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
>  static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> @@ -65,6 +77,7 @@ static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
> >   * the second level pagetables which map the domheap region
>   * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
>  static DEFINE_PER_CPU(lpae_t *, xen_dommap);
> +#endif
>  
>  /* Common pagetable leaves */
>  /* Second level page tables.
> @@ -73,10 +86,16 @@ static DEFINE_PER_CPU(lpae_t *, xen_dommap);
>   * addresses from 0 to 0x7fffffff. Offsets into it are calculated
>   * with second_linear_offset(), not second_table_offset().
>   *
> - * Addresses 0x80000000 to 0xffffffff are covered by the per-cpu
> - * xen_domheap mappings described above. However we allocate 4 pages
> + * On 32bit addresses 0x80000000 to 0xffffffff are covered by the
> + * per-cpu xen_domheap mappings described above. We allocate 4 pages
>   * here for use in the boot page tables and the second two pages
>   * become the boot CPUs xen_dommap pages.
> + *
> + * On 64bit addresses 0x80000000 to 0xffffffff are unused. However we
> + * allocate 4 pages here for use while relocating Xen, which currently
> + * expects a second level page to exist for all addresses in the first
> + * 4GB. We need to keep these extra mappings in place for secondary CPU
> + * bring up too. For now we just leave them forever.
>   */
>  lpae_t xen_second[LPAE_ENTRIES*4] __attribute__((__aligned__(4096*4)));
>  /* First level page table used for fixmap */
> @@ -92,7 +111,7 @@ uint64_t boot_ttbr;
>  static paddr_t phys_offset;
>  
>  /* Limits of the Xen heap */
> -unsigned long xenheap_mfn_start __read_mostly;
> +unsigned long xenheap_mfn_start __read_mostly = ~0UL;
>  unsigned long xenheap_mfn_end __read_mostly;
>  unsigned long xenheap_virt_end __read_mostly;
>  
> @@ -112,7 +131,9 @@ static inline void check_memory_layout_alignment_constraints(void) {
>      BUILD_BUG_ON(BOOT_MISC_VIRT_START & ~SECOND_MASK);
>      /* 1GB aligned regions */
>      BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK);
> +#ifdef CONFIG_DOMAIN_PAGE
>      BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
> +#endif
>  }
>  
>  void dump_pt_walk(lpae_t *first, paddr_t addr)
> @@ -180,6 +201,7 @@ void clear_fixmap(unsigned map)
>      flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
>  }
>  
> +#ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(unsigned long mfn)
>  {
>      return vmap(&mfn, 1);
> @@ -284,6 +306,7 @@ unsigned long domain_page_map_to_mfn(const void *va)
>  
>      return map[slot].pt.base + offset;
>  }
> +#endif
>  
>  void __init arch_init_memory(void)
>  {
> @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      /* Flush everything after setting WXN bit. */
>      flush_xen_text_tlb();
>  
> +#ifdef CONFIG_ARM_32
>      per_cpu(xen_pgtable, 0) = boot_pgtable;
>      per_cpu(xen_dommap, 0) = xen_second +
>          second_linear_offset(DOMHEAP_VIRT_START);
> @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
>      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>      flush_xen_dcache_va_range(this_cpu(xen_dommap),
>                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> +#endif
>  }
>  
>  int init_secondary_pagetables(int cpu)
>  {
> -    lpae_t *root, *first, *domheap, pte;
> -    int i;
> -
> -    root = alloc_xenheap_page();
>  #ifdef CONFIG_ARM_64
> -    first = alloc_xenheap_page();
> +    /* All CPUs share a single page table on 64 bit */
> +    return 0;
>  #else
> -    first = root; /* root == first level on 32-bit 3-level trie */
> -#endif
> +    lpae_t *first, *domheap, pte;
> +    int i;
> +
> +    first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
>      domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
>  
> -    if ( root == NULL || domheap == NULL || first == NULL )
> +    if ( domheap == NULL || first == NULL )
>      {
>          printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
>          free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
> -#ifdef CONFIG_ARM_64
>          free_xenheap_page(first);
> -#endif
> -        free_xenheap_page(root);
>          return -ENOMEM;
>      }
>  
>      /* Initialise root pagetable from root of boot tables */
> -    memcpy(root, boot_pgtable, PAGE_SIZE);
> -
> -#ifdef CONFIG_ARM_64
> -    /* Initialise first pagetable from first level of boot tables, and
> -     * hook into the new root. */
> -    memcpy(first, boot_first, PAGE_SIZE);
> -    pte = mfn_to_xen_entry(virt_to_mfn(first));
> -    pte.pt.table = 1;
> -    write_pte(root, pte);
> -#endif
> +    memcpy(first, boot_pgtable, PAGE_SIZE);
>  
>      /* Ensure the domheap has no stray mappings */
>      memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> @@ -490,16 +502,14 @@ int init_secondary_pagetables(int cpu)
>          write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
>      }
>  
> -    flush_xen_dcache_va_range(root, PAGE_SIZE);
> -#ifdef CONFIG_ARM_64
>      flush_xen_dcache_va_range(first, PAGE_SIZE);
> -#endif
>      flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
>  
> -    per_cpu(xen_pgtable, cpu) = root;
> +    per_cpu(xen_pgtable, cpu) = first;
>      per_cpu(xen_dommap, cpu) = domheap;
>  
>      return 0;
> +#endif
>  }
>  
>  /* MMU setup for secondary CPUS (which already have paging enabled) */
> @@ -544,6 +554,7 @@ static void __init create_32mb_mappings(lpae_t *second,
>      flush_xen_data_tlb();
>  }
>  
> +#ifdef CONFIG_ARM_32
>  /* Set up the xenheap: up to 1GB of contiguous, always-mapped memory. */
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
> @@ -556,19 +567,107 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>      xenheap_mfn_end = base_mfn + nr_mfns;
>  }
>  
> +#else /* CONFIG_ARM_64 */
> +
> +void __init setup_xenheap_mappings(unsigned long base_mfn,
> +                                   unsigned long nr_mfns)
> +{
> +    lpae_t *first, pte;
> +    unsigned long offset, end_mfn;
> +    vaddr_t vaddr;
> +
> +    /* First call sets the xenheap physical offset. */
> +    if ( xenheap_mfn_start == ~0UL )
> +        xenheap_mfn_start = base_mfn;
> +
> +    if ( base_mfn < xenheap_mfn_start )
> +        early_panic("cannot add xenheap mapping at %lx below heap start %lx\n",
> +                    base_mfn, xenheap_mfn_start);
> +
> +    end_mfn = base_mfn + nr_mfns;
> +
> +    /* Align to previous 1GB boundary */
> +    base_mfn &= ~FIRST_MASK;
> +
> +    offset = base_mfn - xenheap_mfn_start;
> +    vaddr = DIRECTMAP_VIRT_START + offset*PAGE_SIZE;
> +
> +    while ( base_mfn < end_mfn )
> +    {
> +        int slot = zeroeth_table_offset(vaddr);
> +        lpae_t *p = &boot_pgtable[slot];
> +
> +        if ( p->pt.valid )
> +        {
> +            /* mfn_to_virt is not valid on the 1st 1st mfn, since it
> +             * is not within the xenheap. */
> +            first = slot == xenheap_first_first_slot ?
> +                xenheap_first_first : mfn_to_virt(p->pt.base);
> +        }
> +        else if ( xenheap_first_first_slot == -1)
> +        {
> +            /* Use xenheap_first_first to bootstrap the mappings */
> +            first = xenheap_first_first;
> +
> +            pte = pte_of_xenaddr((vaddr_t)xenheap_first_first);
> +            pte.pt.table = 1;
> +            write_pte(p, pte);
> +
> +            xenheap_first_first_slot = slot;
> +        }
> +        else
> +        {
> +            unsigned long first_mfn = alloc_boot_pages(1, 1);
> +            pte = mfn_to_xen_entry(first_mfn);
> +            pte.pt.table = 1;
> +            write_pte(p, pte);
> +            first = mfn_to_virt(first_mfn);
> +        }
> +
> +        pte = mfn_to_xen_entry(base_mfn);
> +        //pte.pt.contig = /* XXX need to think about the logic */
> +        write_pte(&first[first_table_offset(vaddr)], pte);
> +
> +        base_mfn += FIRST_SIZE>>PAGE_SHIFT;
> +        vaddr += FIRST_SIZE;
> +    }
> +
> +    flush_xen_data_tlb();
> +}
> +#endif
> +
>  /* Map a frame table to cover physical addresses ps through pe */
>  void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>  {
>      unsigned long nr_pages = (pe - ps) >> PAGE_SHIFT;
>      unsigned long frametable_size = nr_pages * sizeof(struct page_info);
>      unsigned long base_mfn;
> +#ifdef CONFIG_ARM_64
> +    lpae_t *second, pte;
> +    unsigned long nr_second, second_base;
> +    int i;
> +#endif
>  
>      frametable_base_mfn = ps >> PAGE_SHIFT;
>  
>      /* Round up to 32M boundary */
>      frametable_size = (frametable_size + 0x1ffffff) & ~0x1ffffff;
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
> +
> +#ifdef CONFIG_ARM_64
> +    nr_second = frametable_size >> SECOND_SHIFT;
> +    second_base = alloc_boot_pages(nr_second, 1);
> +    second = mfn_to_virt(second_base);
> +    for ( i = 0; i < nr_second; i++ )
> +    {
> +        pte = mfn_to_xen_entry(second_base + i);
> +        pte.pt.table = 1;
> +        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> +    }
> +    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
> +#else
>      create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> +#endif
>  
>      memset(&frame_table[0], 0, nr_pages * sizeof(struct page_info));
>      memset(&frame_table[nr_pages], -1,
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index da2a734..3efa063 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -279,6 +279,7 @@ static paddr_t __init get_xen_paddr(void)
>      return paddr;
>  }
>  
> +#ifdef CONFIG_ARM_32
>  static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
>  {
>      paddr_t ram_start;
> @@ -392,6 +393,97 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
>  
>      end_boot_allocator();
>  }
> +#else /* CONFIG_ARM_64 */
> +static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
> +{
> +    paddr_t ram_start = ~0;
> +    paddr_t ram_end = 0;
> +    int bank;
> +    unsigned long  xenheap_pages = 0;
> +    unsigned long dtb_pages;
> +
> +    /*
> +     * TODO: only using the first RAM bank for now.  The heaps and the
> +     * frame table assume RAM is physically contiguous.
> +     */


I think this comment can be removed.

> +    total_pages = 0;
> +    for ( bank = 0 ; bank < early_info.mem.nr_banks; bank++ )
> +    {
> +        paddr_t bank_start = early_info.mem.bank[bank].start;
> +        paddr_t bank_size  = early_info.mem.bank[bank].size;
> +        paddr_t  bank_end = bank_start + bank_size;
> +        unsigned long bank_pages = bank_size >> PAGE_SHIFT;
> +        paddr_t s, e;
> +
> +        total_pages += bank_pages;
> +
> +        if ( bank_start < ram_start )
> +            ram_start = bank_start;
> +        if ( bank_end > ram_end )
> +            ram_end = bank_end;
> +
> +        /*
> +         * Locate the xenheap using these constraints:
> +         *
> +         *  - must be 32 MiB aligned
> +         *  - must not include Xen itself or the boot modules
> +         *  - must be at most 1 GiB
> +         *  - must be at least 128M
> +         *
> +         * We try to allocate the largest xenheap possible within these
> +         * constraints.
> +         */
> +        xenheap_pages += (bank_size >> PAGE_SHIFT);
> +
> +        /* XXX we assume that the ram regions are ordered */
> +        s = bank_start;
> +        while ( s < bank_end )
> +        {
> +            paddr_t n = bank_end;
> +
> +            e = next_module(s, &n);
> +
> +            if ( e == ~(paddr_t)0 )
> +            {
> +                e = n = bank_end;
> +            }
> +
> +            setup_xenheap_mappings(s>>PAGE_SHIFT, (e-s)>>PAGE_SHIFT);
> +
> +            xenheap_mfn_end = e;
> +
> +            init_boot_pages(s, e);
> +            s = n;
> +        }
> +    }
> +
> +    xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
> +    xenheap_mfn_start = ram_start >> PAGE_SHIFT;
> +    xenheap_mfn_end = ram_end >> PAGE_SHIFT;
> +    xenheap_max_mfn(xenheap_mfn_end);
> +
> +    /*
> +     * Need enough mapped pages for copying the DTB.
> +     *
> +     * TODO: The DTB (and other payloads) are assumed to be towards
> +     * the start of RAM.
> +     */
> +    dtb_pages = (dtb_size + PAGE_SIZE-1) >> PAGE_SHIFT;
> +
> +    /*
> +     * Copy the DTB.
> +     *
> +     * TODO: handle other payloads too.
> +     */
> +    device_tree_flattened = mfn_to_virt(alloc_boot_pages(dtb_pages, 1));
> +    copy_from_paddr(device_tree_flattened, dtb_paddr, dtb_size, BUFFERABLE);
> +
> +   setup_frametable_mappings(ram_start, ram_end);


What happens if the RAM is not contiguous? Do we have holes in the frametable?

BTW, I was looking at the mfn_valid function and I think it can
consider device memory regions as valid MFNs. Is that the right
behaviour?

-- 
Julien

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-25 15:39   ` Julien Grall
@ 2013-06-26 11:22     ` Ian Campbell
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-26 11:22 UTC (permalink / raw)
  To: Julien Grall; +Cc: stefano.stabellini, tim, xen-devel

On Tue, 2013-06-25 at 16:39 +0100, Julien Grall wrote:

> > +#else /* CONFIG_ARM_64 */
> > +static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
> > +{
> > +    paddr_t ram_start = ~0;
> > +    paddr_t ram_end = 0;
> > +    int bank;
> > +    unsigned long  xenheap_pages = 0;
> > +    unsigned long dtb_pages;
> > +
> > +    /*
> > +     * TODO: only using the first RAM bank for now.  The heaps and the
> > +     * frame table assume RAM is physically contiguous.
> > +     */
> 
> 
> I think this comment can be removed.

Yes, thanks.

> > +   setup_frametable_mappings(ram_start, ram_end);
> 
> 
> What happen if the RAM is not contiguous? Do we have hole in the frametable?

Currently, yes. My intention was to live with this for now and to
eventually implement something like x86's pfn compression scheme (the
pdx_t type which you'll see in various places).

> BTW, I was looking at the mfn_valid function and I think it can
> consider device memory regions as valid MFNs. Is that the right
> behaviour?

I think so; an mfn is just a machine frame number, and it is valid
whether the thing at that address is RAM or something else.

I think the ARM behaviour is consistent with the x86 implementation
(called __mfn_valid), which just checks against max_page plus some
changes related to the pdx compression scheme.

Ian.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper
  2013-06-25 10:50     ` Ian Campbell
@ 2013-06-26 18:21       ` Stefano Stabellini
  0 siblings, 0 replies; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-26 18:21 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, tim, Stefano Stabellini

On Tue, 25 Jun 2013, Ian Campbell wrote:
> On Thu, 2013-06-20 at 17:32 +0100, Stefano Stabellini wrote:
> > On Tue, 18 Jun 2013, Ian Campbell wrote:
> > > The open coded version is pretty ugly, not helped by rigid enforcement of an
> > > 80 character line length.
> > > 
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Makes sense. Too bad we can't have pte_of_xenaddr in mm.h because
> > phys_offset is static in mm.c.
> 
> Can change that if we think there is a user, or better when the user
> arrives. Or do you think there is existing code which could use this?

No, it's OK like this, but it seems to me that it should be in page.h or
mm.h like mfn_to_xen_entry.


> > In any case
> > 
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > 
> > >  xen/arch/arm/mm.c |   13 +++++++++----
> > >  1 files changed, 9 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > index 773fbf6..5ad687f 100644
> > > --- a/xen/arch/arm/mm.c
> > > +++ b/xen/arch/arm/mm.c
> > > @@ -331,6 +331,13 @@ void __cpuinit setup_virt_paging(void)
> > >       * write to TTBR0 has completed. */                                 \
> > >      flush_xen_text_tlb()
> > >  
> > > +static inline lpae_t pte_of_xenaddr(vaddr_t va)
> > > +{
> > > +    paddr_t ma = va + phys_offset;
> > > +    unsigned long mfn = ma >> PAGE_SHIFT;
> > > +    return mfn_to_xen_entry(mfn);
> > > +}
> > > +
> > >  /* Boot-time pagetable setup.
> > >   * Changes here may need matching changes in head.S */
> > >  void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > > @@ -387,8 +394,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > >      flush_xen_text_tlb();
> > >  
> > >      /* Link in the fixmap pagetable */
> > > -    pte = mfn_to_xen_entry((((unsigned long) xen_fixmap) + phys_offset)
> > > -                           >> PAGE_SHIFT);
> > > +    pte = pte_of_xenaddr((vaddr_t)xen_fixmap);
> > >      pte.pt.table = 1;
> > >      write_pte(xen_second + second_table_offset(FIXMAP_ADDR(0)), pte);
> > >      /*
> > > @@ -415,8 +421,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > >          write_pte(xen_xenmap + i, pte);
> > >          /* No flush required here as page table is not hooked in yet. */
> > >      }
> > > -    pte = mfn_to_xen_entry((((unsigned long) xen_xenmap) + phys_offset)
> > > -                           >> PAGE_SHIFT);
> > > +    pte = pte_of_xenaddr((vaddr_t)xen_xenmap);
> > >      pte.pt.table = 1;
> > >      write_pte(xen_second + second_linear_offset(XEN_VIRT_START), pte);
> > >      /* TLBFLUSH and ISB would be needed here, but wait until we set WXN */
> > > -- 
> > > 1.7.2.5
> > > 
> 
> 

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-25 11:04     ` Ian Campbell
@ 2013-06-26 18:32       ` Stefano Stabellini
  2013-06-27  8:41         ` Ian Campbell
  0 siblings, 1 reply; 33+ messages in thread
From: Stefano Stabellini @ 2013-06-26 18:32 UTC (permalink / raw)
  To: Ian Campbell; +Cc: julien.grall, xen-devel, tim, Stefano Stabellini

On Tue, 25 Jun 2013, Ian Campbell wrote:
> > >  void __init arch_init_memory(void)
> > >  {
> > > @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > >      /* Flush everything after setting WXN bit. */
> > >      flush_xen_text_tlb();
> > >  
> > > +#ifdef CONFIG_ARM_32
> > >      per_cpu(xen_pgtable, 0) = boot_pgtable;
> > >      per_cpu(xen_dommap, 0) = xen_second +
> > >          second_linear_offset(DOMHEAP_VIRT_START);
> > > @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > >      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > >      flush_xen_dcache_va_range(this_cpu(xen_dommap),
> > >                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > > +#endif
> > >  }
> > >  
> > >  int init_secondary_pagetables(int cpu)
> > >  {
> > > -    lpae_t *root, *first, *domheap, pte;
> > > -    int i;
> > > -
> > > -    root = alloc_xenheap_page();
> > >  #ifdef CONFIG_ARM_64
> > > -    first = alloc_xenheap_page();
> > > +    /* All CPUs share a single page table on 64 bit */
> > > +    return 0;
> > 
> > I would just ifdef the entire function
> 
> I expect you don't mean for me to also ifdef the call?

I meant:

#ifdef CONFIG_ARM_64
int init_secondary_pagetables(int cpu)
{
    bla64
#elif CONFIG_ARM_32
int init_secondary_pagetables(int cpu)
{
    bla32
#endif



> I preferred to
> write the prototype only once so the two didn't get out of sync in the
> future, which can be an issue with the pattern of stubbing out functions
> with a separate static inline. I can change that if you have a strong
> preference though.

This is not a static function, if the prototype of the arm64 or the
prototype of the arm32 function changed we would know.


> > > +#ifdef CONFIG_ARM_64
> > > +    nr_second = frametable_size >> SECOND_SHIFT;
> > > +    second_base = alloc_boot_pages(nr_second, 1);
> > > +    second = mfn_to_virt(second_base);
> > > +    for ( i = 0; i < nr_second; i++ )
> > > +    {
> > > +        pte = mfn_to_xen_entry(second_base + i);
> > > +        pte.pt.table = 1;
> > > +        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> > > +    }
> > > +    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
> > > +#else
> > >      create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> > > +#endif
> > 
> > Is it possible to unify the arm32 and arm64 code paths here?
> 
> I tried to unify them as much as I could.
> 
> > Being able to handle a frametable that spans multiple second level
> > pagetable pages shouldn't be a problem for arm32.
> 
> That's not the issue. The issue is that the frame table lives at
> different addresses in both cases, and in particular 64-bit has its own
> first level pages with dedicated second level pages for this region
> while 32-bit has a small number of second level entries under the same
> first level entry as various other bits of Xen (including .text, vmap,
> ioremap region etc etc).

What I mean is that it seems to me that all the code you are adding
under the #ifdef CONFIG_ARM_64 should actually work on CONFIG_ARM_32 too
if you assume that one second level page has already been allocated on
ARM32.  Is that right?


> > > +        /*
> > > +         * Locate the xenheap using these constraints:
> > > +         *
> > > +         *  - must be 32 MiB aligned
> > > +         *  - must not include Xen itself or the boot modules
> > > +         *  - must be at most 1 GiB
> > > +         *  - must be at least 128M
> > > +         *
> > > +         * We try to allocate the largest xenheap possible within these
> > > +         * constraints.
> > > +         */
> > 
> > this comment is not correct on ARM64, is it?
> 
> No, I'll fix it.
> 
> > > +#ifdef CONFIG_ARM_32
> > >  #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page))
> > >  #define is_xen_heap_mfn(mfn) ({                                 \
> > >      unsigned long _mfn = (mfn);                                 \
> > >      (_mfn >= xenheap_mfn_start && _mfn < xenheap_mfn_end);      \
> > >  })
> > 
> > I take that this check wouldn't work for arm64 because it would always
> > return true?
> 
> Correct.
> 
> > > +#else
> > > +#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> > > +#define is_xen_heap_mfn(mfn) \
> > > +    (mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> > > +#endif
> > 
> > Looks like the ARM64 version of these two functions should work for
> > arm32 too.
> 
> arm32 doesn't set PGC_xen_heap, I don't think. It could I suppose.

I think it does. Actually it's the generic code that does it
(xen/common/page_alloc.c:alloc_xenheap_pages).

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-26 18:32       ` Stefano Stabellini
@ 2013-06-27  8:41         ` Ian Campbell
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Campbell @ 2013-06-27  8:41 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: julien.grall, tim, xen-devel

On Wed, 2013-06-26 at 19:32 +0100, Stefano Stabellini wrote:
> On Tue, 25 Jun 2013, Ian Campbell wrote:
> > > >  void __init arch_init_memory(void)
> > > >  {
> > > > @@ -431,6 +454,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > > >      /* Flush everything after setting WXN bit. */
> > > >      flush_xen_text_tlb();
> > > >  
> > > > +#ifdef CONFIG_ARM_32
> > > >      per_cpu(xen_pgtable, 0) = boot_pgtable;
> > > >      per_cpu(xen_dommap, 0) = xen_second +
> > > >          second_linear_offset(DOMHEAP_VIRT_START);
> > > > @@ -440,43 +464,31 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
> > > >      memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > > >      flush_xen_dcache_va_range(this_cpu(xen_dommap),
> > > >                                DOMHEAP_SECOND_PAGES*PAGE_SIZE);
> > > > +#endif
> > > >  }
> > > >  
> > > >  int init_secondary_pagetables(int cpu)
> > > >  {
> > > > -    lpae_t *root, *first, *domheap, pte;
> > > > -    int i;
> > > > -
> > > > -    root = alloc_xenheap_page();
> > > >  #ifdef CONFIG_ARM_64
> > > > -    first = alloc_xenheap_page();
> > > > +    /* All CPUs share a single page table on 64 bit */
> > > > +    return 0;
> > > 
> > > I would just ifdef the entire function
> > 
> > I expect you don't mean for me to also ifdef the call?
> 
> I meant:
> 
> #ifdef CONFIG_ARM_64
> int init_secondary_pagetables(int cpu)
> {
>     bla64
> #elif CONFIG_ARM_32
> int init_secondary_pagetables(int cpu)
> {
>     bla32
> #endif
> 
> 
> 
> > I preferred to
> > write the prototype only once so the two didn't get out of sync in the
> > future, which can be an issue with the pattern of stubbing out functions
> > with a separate static inline. I can change that if you have a strong
> > preference though.
> 
> This is not a static function, if the prototype of the arm64 or the
> prototype of the arm32 function changed we would know.

My point is that it is easy for someone to change only one version (and
the caller) and unless they remember to build test the other arch the
breakage won't be spotted until later. Likely during my pre-commit build
tests since it is also the sort of thing which slips through review.

> 
> 
> > > > +#ifdef CONFIG_ARM_64
> > > > +    nr_second = frametable_size >> SECOND_SHIFT;
> > > > +    second_base = alloc_boot_pages(nr_second, 1);
> > > > +    second = mfn_to_virt(second_base);
> > > > +    for ( i = 0; i < nr_second; i++ )
> > > > +    {
> > > > +        pte = mfn_to_xen_entry(second_base + i);
> > > > +        pte.pt.table = 1;
> > > > +        write_pte(&boot_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> > > > +    }
> > > > +    create_32mb_mappings(second, 0, base_mfn, frametable_size >> PAGE_SHIFT);
> > > > +#else
> > > >      create_32mb_mappings(xen_second, FRAMETABLE_VIRT_START, base_mfn, frametable_size >> PAGE_SHIFT);
> > > > +#endif
> > > 
> > > Is it possible to unify the arm32 and arm64 code paths here?
> > 
> > I tried to unify them as much as I could.
> > 
> > > Being able to handle a frametable that spans multiple second level
> > > pagetable pages shouldn't be a problem for arm32.
> > 
> > That's not the issue. The issue is that the frame table lives at
> > different addresses in both cases, and in particular 64-bit has its own
> > first level pages with dedicated second level pages for this region
> > while 32-bit has a small number of second level entries under the same
> > first level entry as various other bits of Xen (including .text, vmap,
> > ioremap region etc etc).
> 
> What I mean is that it seems to me that all the code you are adding
> under the #ifdef CONFIG_ARM_64 should actually work on CONFIG_ARM_32 too
> if you assume that one second level page has already been allocated on
> ARM32.  Is that right?

If one second level page has already been allocated then the code inside
the ifdef isn't needed -- since it is exactly allocating the required
second level pages for 64-bit. Hence the code in the ifdef is not
required on 32-bit but is on 64-bit.

I could combine the final create_32mb_mappings call with
        #if CONFIG_ARM_64
            // existing code
            frametable_second_virt_offset = 0;
	#else 
	    second = xen_second;
            frametable_second_virt_offset = FRAMETABLE_VIRT_START;
        #endif
            create_32mb_mappings(second, frametable_second_virt_offset, base_mfn, frametable_size >> PAGE_SHIFT);

But that doesn't seem much cleaner.

> > > > +#else
> > > > +#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
> > > > +#define is_xen_heap_mfn(mfn) \
> > > > +    (mfn_valid(mfn) && is_xen_heap_page(__mfn_to_page(mfn)))
> > > > +#endif
> > > 
> > > Looks like the ARM64 version of these two functions should work for
> > > arm32 too.
> > 
> > arm32 doesn't set PGC_xen_heap, I don't think. It could I suppose.
> 
> I think it does. Actually it's the generic code that does it
> (xen/common/page_alloc.c:alloc_xenheap_pages).

You are looking there at the !CONFIG_SEPARATE_XENHEAP (previously X86)
variant of that function, which is the non-split domheap/xenheap case.

On arm32 we use the other variant (split heaps) which doesn't use
PGC_xen_heap, it's about a page above the function you are looking at...

Ian.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64
  2013-06-20 16:43       ` Stefano Stabellini
@ 2013-07-22 22:23         ` Ian Campbell
  0 siblings, 0 replies; 33+ messages in thread
From: Ian Campbell @ 2013-07-22 22:23 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Julien Grall, tim, xen-devel

On Thu, 2013-06-20 at 17:43 +0100, Stefano Stabellini wrote:
> On Tue, 18 Jun 2013, Ian Campbell wrote:
> > On Tue, 2013-06-18 at 15:05 +0100, Julien Grall wrote:
> > 
> > Please could you trim your quotes.
> > 
> > > On 06/18/2013 02:26 PM, Ian Campbell wrote:
> > > [...]
> > > > +#ifdef CONFIG_ARM_32
> > > >  static inline void *maddr_to_virt(paddr_t ma)
> > > >  {
> > > >      ASSERT(is_xen_heap_mfn(ma >> PAGE_SHIFT));
> > > >      ma -= pfn_to_paddr(xenheap_mfn_start);
> > > >      return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
> > > >  }
> > > > +#else
> > > > +static inline void *maddr_to_virt(paddr_t ma)
> > > > +{
> > > > +    ASSERT((ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
> > > > +    ma -= pfn_to_paddr(xenheap_mfn_start);
> > > > +    return (void *)(unsigned long) ma + DIRECTMAP_VIRT_START;
> > > > +}
> > > > +#endif
> > > 
> > > 
> > > I'm curious, this is not specific to this patch, why don't you split the
> > > file in 3 parts (common, arm32, arm64)?
> > 
> > In this specific instance I figured it was better to have this one
> > function which differed alongside all the other foo_to_bar
> > (virt_to_page, gvirt_to_maddr etc) which don't.
> > 
> > > From what I've seen, we use the both way on Xen. I think it's clearer to
> > > divide the code in different headers/sources files. Sometimes defines
> > > are hard to follow.
> > 
> > I'm not sure that putting them in different files makes them easier to
> > follow.
> > 
> > I'm not really sure what is best. Obviously if something is totally
> > different on the two different architectures (e.g. sysregs, tlbs etc)
> > then splitting them up makes sense. When it's just one function among
> > many which differs then I'm not so sure splitting is useful.
> > 
> > I don't have a strong opinion though.
> 
> From a quick look it seems to me that most functions have #ifdefs now.
> The code would probably improve by having separate arm32 and
> arm64 functions.

Are we still talking about xen/include/asm-arm/mm.h? Because there's
only around 20 lines under two ifdefs in that 350+ line file.

is_xen_heap_page, is_xen_heap_mfn and maddr_to_virt are ifdeffed but all
the other stuff is common. I'd rather keep maddr_to_virt next to all the
other foo_to_bar functions.

Ian.

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2013-07-22 22:23 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-18 13:26 [PATCH 00/10] xen: arm: direct 1:1 map on 64 bit Ian Campbell
2013-06-18 13:26 ` [PATCH 01/10] xen: arm: Use _AC macros in config.h Ian Campbell
2013-06-20 16:29   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 02/10] xen: arm: Add zeroeth level page table macros and defines Ian Campbell
2013-06-18 13:26 ` [PATCH 03/10] xen: arm: Rename page table "hint" field to slightly more descriptive "contig" Ian Campbell
2013-06-20 16:29   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 04/10] xen: arm: refactor create_mappings Ian Campbell
2013-06-20 16:40   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 05/10] xen: arm: define a macro to get this CPUs page table root Ian Campbell
2013-06-20 16:30   ` Stefano Stabellini
2013-06-25 10:49     ` Ian Campbell
2013-06-18 13:26 ` [PATCH 06/10] xen: arm: Add a pte_of_xenaddr helper Ian Campbell
2013-06-20 16:32   ` Stefano Stabellini
2013-06-25 10:50     ` Ian Campbell
2013-06-26 18:21       ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 07/10] xen: arm: fix is_xen_fixed_mfn Ian Campbell
2013-06-20 16:33   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 08/10] xen: arm: allow virt_to_maddr to take either a pointer or an integer Ian Campbell
2013-06-20 16:34   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 09/10] xen: gate split heap code on its own config option rather than !X86 Ian Campbell
2013-06-18 16:01   ` Jan Beulich
2013-06-20 16:35   ` Stefano Stabellini
2013-06-18 13:26 ` [PATCH 10/10] xen: arm: Use a direct mapping of RAM on arm64 Ian Campbell
2013-06-18 14:05   ` Julien Grall
2013-06-18 14:13     ` Ian Campbell
2013-06-20 16:43       ` Stefano Stabellini
2013-07-22 22:23         ` Ian Campbell
2013-06-21 14:10   ` Stefano Stabellini
2013-06-25 11:04     ` Ian Campbell
2013-06-26 18:32       ` Stefano Stabellini
2013-06-27  8:41         ` Ian Campbell
2013-06-25 15:39   ` Julien Grall
2013-06-26 11:22     ` Ian Campbell
