* [PATCH v2 0/5] generic early_ioremap support
@ 2014-01-07 2:35 Mark Salter
2014-01-07 2:35 ` [PATCH v2 1/5] mm: create generic early_ioremap() support Mark Salter
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: Mark Salter @ 2014-01-07 2:35 UTC (permalink / raw)
To: linux-arm-kernel
This patch series takes the common bits from the x86 early ioremap
implementation and creates a generic implementation which may be used
by other architectures. The early ioremap interfaces are intended for
situations where boot code needs to make temporary virtual mappings
before the normal ioremap interfaces are available. Typically, this
means before paging_init() has run.
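As a rough illustration of the intended use (a sketch only; the physical
address, size, and function name below are made up for this cover letter and
do not come from the patches):

	/* illustrative only -- 0x90000000 and probe_boot_device() are placeholders */
	static void __init probe_boot_device(void)
	{
		void __iomem *regs;

		/* temporary mapping; only valid until paging_init() runs */
		regs = early_ioremap(0x90000000, SZ_4K);
		if (!regs)
			return;

		pr_info("boot device id: %08x\n", readl(regs));

		/* every early mapping must be torn down again */
		early_iounmap(regs, SZ_4K);
	}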
These patches are layered on top of generic fixmap patches which
were discussed here (and are in the akpm tree):
http://lkml.org/lkml/2013/11/25/474
This is version 2 of the patch series. These patches (and underlying
fixmap patches) may be found at:
git://github.com/mosalter/linux.git (early-ioremap-v2 branch)
Changes from version 1:
* Moved the generic code into linux/mm instead of linux/lib
* Have early_memremap() return normal pointer instead of __iomem
This is in response to sparse warning cleanups being made in
an unrelated patch series:
https://lkml.org/lkml/2013/12/22/69
* Added arm64 patch to call init_mem_pgprot() earlier so that
the pgprot macros are valid in time for early_ioremap use
* Added validity checking for early_ioremap pgd, pud, and pmd
in arm64
Mark Salter (5):
mm: create generic early_ioremap() support
x86: use generic early_ioremap
arm: add early_ioremap support
arm64: initialize pgprot info earlier in boot
arm64: add early_ioremap support
Documentation/arm64/memory.txt | 4 +-
arch/arm/Kconfig | 11 ++
arch/arm/include/asm/Kbuild | 1 +
arch/arm/include/asm/fixmap.h | 18 +++
arch/arm/include/asm/io.h | 1 +
arch/arm/kernel/setup.c | 3 +
arch/arm/mm/Makefile | 1 +
arch/arm/mm/early_ioremap.c | 93 ++++++++++++++
arch/arm/mm/mmu.c | 2 +
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/fixmap.h | 68 ++++++++++
arch/arm64/include/asm/io.h | 1 +
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/kernel/early_printk.c | 8 +-
arch/arm64/kernel/head.S | 9 +-
arch/arm64/kernel/setup.c | 4 +
arch/arm64/mm/ioremap.c | 85 ++++++++++++
arch/arm64/mm/mmu.c | 44 +------
arch/x86/Kconfig | 1 +
arch/x86/include/asm/Kbuild | 1 +
arch/x86/include/asm/fixmap.h | 6 +
arch/x86/include/asm/io.h | 14 +-
arch/x86/mm/ioremap.c | 224 +-------------------------------
arch/x86/mm/pgtable_32.c | 2 +-
include/asm-generic/early_ioremap.h | 41 ++++++
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/early_ioremap.c | 249 ++++++++++++++++++++++++++++++++++++
30 files changed, 611 insertions(+), 289 deletions(-)
create mode 100644 arch/arm/mm/early_ioremap.c
create mode 100644 arch/arm64/include/asm/fixmap.h
create mode 100644 include/asm-generic/early_ioremap.h
create mode 100644 mm/early_ioremap.c
--
1.8.3.1
* [PATCH v2 1/5] mm: create generic early_ioremap() support
2014-01-07 2:35 [PATCH v2 0/5] generic early_ioremap support Mark Salter
@ 2014-01-07 2:35 ` Mark Salter
2014-01-07 15:25 ` Catalin Marinas
2014-01-07 23:11 ` H. Peter Anvin
2014-01-07 2:35 ` [PATCH v2 3/5] arm: add early_ioremap support Mark Salter
` (3 subsequent siblings)
4 siblings, 2 replies; 11+ messages in thread
From: Mark Salter @ 2014-01-07 2:35 UTC (permalink / raw)
To: linux-arm-kernel
This patch creates a generic implementation of early_ioremap() support
based on the existing x86 implementation. early_ioremap() is useful for
early boot code which needs to temporarily map I/O or memory regions
before normal mapping functions such as ioremap() are available.
There is one difference from the existing x86 implementation which
should be noted. The generic early_memremap() function does not return
an __iomem pointer, and a new early_memunmap() function has been added
to act as a wrapper for early_iounmap() but with a non-__iomem pointer
passed in. This is in line with the first patch of this series:
https://lkml.org/lkml/2013/12/22/69
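To make the difference concrete, a sketch of the two usage styles (the
struct, the helper name, and the physical addresses are hypothetical and
only serve to contrast the interfaces):

	/* hypothetical firmware table, for illustration only */
	struct fw_table { u32 signature; u32 length; };

	static void __init early_map_example(phys_addr_t dev_phys, phys_addr_t tbl_phys)
	{
		void __iomem *regs;
		struct fw_table *tbl;

		/* device registers: __iomem pointer, accessed via readl()/writel() */
		regs = early_ioremap(dev_phys, SZ_4K);
		if (regs) {
			pr_info("device rev: %08x\n", readl(regs));
			early_iounmap(regs, SZ_4K);
		}

		/* RAM-backed data: plain pointer, dereferenced directly, no sparse warning */
		tbl = early_memremap(tbl_phys, sizeof(*tbl));
		if (tbl) {
			pr_info("table length: %u\n", tbl->length);
			early_memunmap(tbl, sizeof(*tbl));
		}
	}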
Signed-off-by: Mark Salter <msalter@redhat.com>
CC: x86@kernel.org
CC: linux-arm-kernel@lists.infradead.org
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Ingo Molnar <mingo@kernel.org>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
---
include/asm-generic/early_ioremap.h | 41 ++++++
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/early_ioremap.c | 249 ++++++++++++++++++++++++++++++++++++
4 files changed, 294 insertions(+)
create mode 100644 include/asm-generic/early_ioremap.h
create mode 100644 mm/early_ioremap.c
diff --git a/include/asm-generic/early_ioremap.h b/include/asm-generic/early_ioremap.h
new file mode 100644
index 0000000..d43e187
--- /dev/null
+++ b/include/asm-generic/early_ioremap.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_EARLY_IOREMAP_H_
+#define _ASM_EARLY_IOREMAP_H_
+
+#include <linux/types.h>
+
+#ifdef CONFIG_GENERIC_EARLY_IOREMAP
+/*
+ * early_ioremap() and early_iounmap() are for temporary early boot-time
+ * mappings, before the real ioremap() is functional.
+ */
+extern void __iomem *early_ioremap(resource_size_t phys_addr,
+ unsigned long size);
+extern void *early_memremap(resource_size_t phys_addr,
+ unsigned long size);
+extern void early_iounmap(void __iomem *addr, unsigned long size);
+extern void early_memunmap(void *addr, unsigned long size);
+
+/* Arch-specific initialization */
+extern void early_ioremap_init(void);
+
+/* Generic initialization called by architecture code */
+extern void early_ioremap_setup(void);
+
+/*
+ * Called as last step in paging_init() so library can act
+ * accordingly for subsequent map/unmap requests.
+ */
+extern void early_ioremap_reset(void);
+
+/*
+ * Weak function called by early_ioremap_reset(). It does nothing, but
+ * architectures may provide their own version to do any needed cleanups.
+ */
+extern void early_ioremap_shutdown(void);
+#else
+static inline void early_ioremap_init(void) { }
+static inline void early_ioremap_setup(void) { }
+static inline void early_ioremap_reset(void) { }
+#endif
+
+#endif /* _ASM_EARLY_IOREMAP_H_ */
diff --git a/mm/Kconfig b/mm/Kconfig
index 723bbe0..0dcebf2a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -552,3 +552,6 @@ config MEM_SOFT_DIRTY
it can be cleared by hands.
See Documentation/vm/soft-dirty.txt for more details.
+
+config GENERIC_EARLY_IOREMAP
+ bool
diff --git a/mm/Makefile b/mm/Makefile
index 305d10a..4e102e9 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -60,3 +60,4 @@ obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
obj-$(CONFIG_CLEANCACHE) += cleancache.o
obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
obj-$(CONFIG_ZBUD) += zbud.o
+obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
diff --git a/mm/early_ioremap.c b/mm/early_ioremap.c
new file mode 100644
index 0000000..8c1ac48
--- /dev/null
+++ b/mm/early_ioremap.c
@@ -0,0 +1,249 @@
+/*
+ * Provide common bits of early_ioremap() support for architectures needing
+ * temporary mappings during boot before ioremap() is available.
+ *
+ * This is mostly a direct copy of the x86 early_ioremap implementation.
+ *
+ * (C) Copyright 1995 1996 Linus Torvalds
+ *
+ */
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <asm/fixmap.h>
+
+static int early_ioremap_debug __initdata;
+
+static int __init early_ioremap_debug_setup(char *str)
+{
+ early_ioremap_debug = 1;
+
+ return 0;
+}
+early_param("early_ioremap_debug", early_ioremap_debug_setup);
+
+static int after_paging_init __initdata;
+
+void __init __attribute__((weak)) early_ioremap_shutdown(void)
+{
+}
+
+void __init early_ioremap_reset(void)
+{
+ early_ioremap_shutdown();
+ after_paging_init = 1;
+}
+
+/*
+ * Generally, ioremap() is available after paging_init() has been called.
+ * Architectures wanting to allow early_ioremap after paging_init() can
+ * define __late_set_fixmap and __late_clear_fixmap to do the right thing.
+ */
+#ifndef __late_set_fixmap
+static inline void __init __late_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t prot)
+{
+ BUG();
+}
+#endif
+
+#ifndef __late_clear_fixmap
+static inline void __init __late_clear_fixmap(enum fixed_addresses idx)
+{
+ BUG();
+}
+#endif
+
+static void __iomem *prev_map[FIX_BTMAPS_SLOTS] __initdata;
+static unsigned long prev_size[FIX_BTMAPS_SLOTS] __initdata;
+static unsigned long slot_virt[FIX_BTMAPS_SLOTS] __initdata;
+
+void __init early_ioremap_setup(void)
+{
+ int i;
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (prev_map[i]) {
+ WARN_ON(1);
+ break;
+ }
+ }
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
+ slot_virt[i] = __fix_to_virt(FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*i);
+}
+
+static int __init check_early_ioremap_leak(void)
+{
+ int count = 0;
+ int i;
+
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
+ if (prev_map[i])
+ count++;
+
+ if (!count)
+ return 0;
+ WARN(1, KERN_WARNING
+ "Debug warning: early ioremap leak of %d areas detected.\n",
+ count);
+ pr_warn("please boot with early_ioremap_debug and report the dmesg.\n");
+
+ return 1;
+}
+late_initcall(check_early_ioremap_leak);
+
+static void __init __iomem *
+__early_ioremap(resource_size_t phys_addr, unsigned long size, pgprot_t prot)
+{
+ unsigned long offset;
+ resource_size_t last_addr;
+ unsigned int nrpages;
+ enum fixed_addresses idx;
+ int i, slot;
+
+ WARN_ON(system_state != SYSTEM_BOOTING);
+
+ slot = -1;
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (!prev_map[i]) {
+ slot = i;
+ break;
+ }
+ }
+
+ if (slot < 0) {
+ pr_info("%s(%08llx, %08lx) not found slot\n",
+ __func__, (u64)phys_addr, size);
+ WARN_ON(1);
+ return NULL;
+ }
+
+ if (early_ioremap_debug) {
+ pr_info("%s(%08llx, %08lx) [%d] => ",
+ __func__, (u64)phys_addr, size, slot);
+ dump_stack();
+ }
+
+ /* Don't allow wraparound or zero size */
+ last_addr = phys_addr + size - 1;
+ if (!size || last_addr < phys_addr) {
+ WARN_ON(1);
+ return NULL;
+ }
+
+ prev_size[slot] = size;
+ /*
+ * Mappings have to be page-aligned
+ */
+ offset = phys_addr & ~PAGE_MASK;
+ phys_addr &= PAGE_MASK;
+ size = PAGE_ALIGN(last_addr + 1) - phys_addr;
+
+ /*
+ * Mappings have to fit in the FIX_BTMAP area.
+ */
+ nrpages = size >> PAGE_SHIFT;
+ if (nrpages > NR_FIX_BTMAPS) {
+ WARN_ON(1);
+ return NULL;
+ }
+
+ /*
+ * Ok, go for it..
+ */
+ idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
+ while (nrpages > 0) {
+ if (after_paging_init)
+ __late_set_fixmap(idx, phys_addr, prot);
+ else
+ __early_set_fixmap(idx, phys_addr, prot);
+ phys_addr += PAGE_SIZE;
+ --idx;
+ --nrpages;
+ }
+ if (early_ioremap_debug)
+ pr_cont("%08lx + %08lx\n", offset, slot_virt[slot]);
+
+ prev_map[slot] = (void __iomem *)(offset + slot_virt[slot]);
+ return prev_map[slot];
+}
+
+/* Remap an IO device */
+void __init __iomem *
+early_ioremap(resource_size_t phys_addr, unsigned long size)
+{
+ return __early_ioremap(phys_addr, size, FIXMAP_PAGE_IO);
+}
+
+/* Remap memory */
+void __init *
+early_memremap(resource_size_t phys_addr, unsigned long size)
+{
+ return (__force void *)__early_ioremap(phys_addr, size,
+ FIXMAP_PAGE_NORMAL);
+}
+
+void __init early_iounmap(void __iomem *addr, unsigned long size)
+{
+ unsigned long virt_addr;
+ unsigned long offset;
+ unsigned int nrpages;
+ enum fixed_addresses idx;
+ int i, slot;
+
+ slot = -1;
+ for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+ if (prev_map[i] == addr) {
+ slot = i;
+ break;
+ }
+ }
+
+ if (slot < 0) {
+ pr_info("early_iounmap(%p, %08lx) not found slot\n",
+ addr, size);
+ WARN_ON(1);
+ return;
+ }
+
+ if (prev_size[slot] != size) {
+ pr_info("early_iounmap(%p, %08lx) [%d] size not consistent %08lx\n",
+ addr, size, slot, prev_size[slot]);
+ WARN_ON(1);
+ return;
+ }
+
+ if (early_ioremap_debug) {
+ pr_info("early_iounmap(%p, %08lx) [%d]\n", addr,
+ size, slot);
+ dump_stack();
+ }
+
+ virt_addr = (unsigned long)addr;
+ if (virt_addr < fix_to_virt(FIX_BTMAP_BEGIN)) {
+ WARN_ON(1);
+ return;
+ }
+ offset = virt_addr & ~PAGE_MASK;
+ nrpages = PAGE_ALIGN(offset + size) >> PAGE_SHIFT;
+
+ idx = FIX_BTMAP_BEGIN - NR_FIX_BTMAPS*slot;
+ while (nrpages > 0) {
+ if (after_paging_init)
+ __late_clear_fixmap(idx);
+ else
+ __early_set_fixmap(idx, 0, FIXMAP_PAGE_CLEAR);
+ --idx;
+ --nrpages;
+ }
+ prev_map[slot] = NULL;
+}
+
+void __init early_memunmap(void *addr, unsigned long size)
+{
+ early_iounmap((__force void __iomem *)addr, size);
+}
--
1.8.3.1
* [PATCH v2 3/5] arm: add early_ioremap support
2014-01-07 2:35 [PATCH v2 0/5] generic early_ioremap support Mark Salter
2014-01-07 2:35 ` [PATCH v2 1/5] mm: create generic early_ioremap() support Mark Salter
@ 2014-01-07 2:35 ` Mark Salter
2014-01-07 17:23 ` Catalin Marinas
2014-01-07 2:35 ` [PATCH v2 4/5] arm64: initialize pgprot info earlier in boot Mark Salter
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: Mark Salter @ 2014-01-07 2:35 UTC (permalink / raw)
To: linux-arm-kernel
This patch uses the generic early_ioremap code to implement
early_ioremap for ARM. The ARM-specific bits come mostly from
an earlier patch from Leif Lindholm <leif.lindholm@linaro.org>
here:
https://lkml.org/lkml/2013/10/3/279
Signed-off-by: Mark Salter <msalter@redhat.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
CC: linux-arm-kernel@lists.infradead.org
CC: Russell King <linux@arm.linux.org.uk>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Arnd Bergmann <arnd@arndb.de>
---
arch/arm/Kconfig | 11 +++++
arch/arm/include/asm/Kbuild | 1 +
arch/arm/include/asm/fixmap.h | 18 +++++++++
arch/arm/include/asm/io.h | 1 +
arch/arm/kernel/setup.c | 3 ++
arch/arm/mm/Makefile | 1 +
arch/arm/mm/early_ioremap.c | 93 +++++++++++++++++++++++++++++++++++++++++++
arch/arm/mm/mmu.c | 2 +
8 files changed, 130 insertions(+)
create mode 100644 arch/arm/mm/early_ioremap.c
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c1f1a7e..78a79a6a 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1842,6 +1842,17 @@ config UACCESS_WITH_MEMCPY
However, if the CPU data cache is using a write-allocate mode,
this option is unlikely to provide any performance gain.
+config EARLY_IOREMAP
+ depends on MMU
+ bool "Provide early_ioremap() support for kernel initialization."
+ select GENERIC_EARLY_IOREMAP
+ help
Provide a mechanism for kernel initialisation code to temporarily
map, in a highmem-agnostic way, memory pages before ioremap()
and friends are available (before paging_init() has run). It uses
the same virtual memory range as kmap, so all early mappings must
be unmapped before paging_init() is called.
+
config SECCOMP
bool
prompt "Enable seccomp to safely compute untrusted bytecode"
diff --git a/arch/arm/include/asm/Kbuild b/arch/arm/include/asm/Kbuild
index c38b58c..49ec506 100644
--- a/arch/arm/include/asm/Kbuild
+++ b/arch/arm/include/asm/Kbuild
@@ -4,6 +4,7 @@ generic-y += auxvec.h
generic-y += bitsperlong.h
generic-y += cputime.h
generic-y += current.h
+generic-y += early_ioremap.h
generic-y += emergency-restart.h
generic-y += errno.h
generic-y += exec.h
diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 68ea615..e92b7a4 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -21,8 +21,26 @@ enum fixed_addresses {
FIX_KMAP_BEGIN,
FIX_KMAP_END = (FIXADDR_TOP - FIXADDR_START) >> PAGE_SHIFT,
__end_of_fixed_addresses
+/*
+ * 224 temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ *
+ * (P)re-using the FIXADDR region, which is used for highmem
+ * later on, and statically aligned to 1MB.
+ */
+#define NR_FIX_BTMAPS 32
+#define FIX_BTMAPS_SLOTS 7
+#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
+#define FIX_BTMAP_END FIX_KMAP_BEGIN
+#define FIX_BTMAP_BEGIN (FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1)
};
+#define FIXMAP_PAGE_NORMAL (L_PTE_MT_WRITEBACK | L_PTE_YOUNG | L_PTE_PRESENT)
+#define FIXMAP_PAGE_IO (L_PTE_MT_DEV_NONSHARED | L_PTE_YOUNG | L_PTE_PRESENT)
+
+extern void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
#include <asm-generic/fixmap.h>
#endif
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 3c597c2..131e0ba 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -28,6 +28,7 @@
#include <asm/byteorder.h>
#include <asm/memory.h>
#include <asm-generic/pci_iomap.h>
+#include <asm/early_ioremap.h>
#include <xen/xen.h>
/*
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 987a7f5..038fb75 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -36,6 +36,7 @@
#include <asm/cpu.h>
#include <asm/cputype.h>
#include <asm/elf.h>
+#include <asm/io.h>
#include <asm/procinfo.h>
#include <asm/psci.h>
#include <asm/sections.h>
@@ -887,6 +888,8 @@ void __init setup_arch(char **cmdline_p)
parse_early_param();
+ early_ioremap_init();
+
sort(&meminfo.bank, meminfo.nr_banks, sizeof(meminfo.bank[0]), meminfo_cmp, NULL);
early_paging_init(mdesc, lookup_processor_type(read_cpuid_id()));
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index ecfe6e5..fea855e 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -15,6 +15,7 @@ endif
obj-$(CONFIG_MODULES) += proc-syms.o
obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o
+obj-$(CONFIG_EARLY_IOREMAP) += early_ioremap.o
obj-$(CONFIG_HIGHMEM) += highmem.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
diff --git a/arch/arm/mm/early_ioremap.c b/arch/arm/mm/early_ioremap.c
new file mode 100644
index 0000000..c3e2bf2
--- /dev/null
+++ b/arch/arm/mm/early_ioremap.c
@@ -0,0 +1,93 @@
+/*
+ * early_ioremap() support for ARM
+ *
+ * Based on existing support in arch/x86/mm/ioremap.c
+ *
+ * Restrictions: currently only functional before paging_init()
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+
+#include <asm/fixmap.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+#include <asm/mach/map.h>
+
+static pte_t bm_pte[PTRS_PER_PTE] __aligned(PTE_HWTABLE_SIZE) __initdata;
+
+static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
+{
+ unsigned int index = pgd_index(addr);
+ pgd_t *pgd = cpu_get_pgd() + index;
+ pud_t *pud = pud_offset(pgd, addr);
+ pmd_t *pmd = pmd_offset(pud, addr);
+
+ return pmd;
+}
+
+static inline pte_t * __init early_ioremap_pte(unsigned long addr)
+{
+ return &bm_pte[pte_index(addr)];
+}
+
+void __init early_ioremap_init(void)
+{
+ pmd_t *pmd;
+
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+
+ pmd_populate_kernel(NULL, pmd, bm_pte);
+
+ /*
+ * Make sure we don't span multiple pmds.
+ */
+ BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+ != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
+
+ if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
+ WARN_ON(1);
+ pr_warn("pmd %p != %p\n",
+ pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
+ pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
+ fix_to_virt(FIX_BTMAP_BEGIN));
+ pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
+ fix_to_virt(FIX_BTMAP_END));
+ pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
+ pr_warn("FIX_BTMAP_BEGIN: %d\n", FIX_BTMAP_BEGIN);
+ }
+
+ early_ioremap_setup();
+}
+
+void __init __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags)
+{
+ unsigned long addr = __fix_to_virt(idx);
+ pte_t *pte;
+ u64 desc;
+
+ if (idx > FIX_KMAP_END) {
+ BUG();
+ return;
+ }
+ pte = early_ioremap_pte(addr);
+
+ if (pgprot_val(flags))
+ set_pte_at(NULL, 0xfff00000, pte,
+ pfn_pte(phys >> PAGE_SHIFT, flags));
+ else
+ pte_clear(NULL, addr, pte);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ desc = *pte;
+}
+
+void __init
+early_ioremap_shutdown(void)
+{
+ pmd_t *pmd;
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+ pmd_clear(pmd);
+}
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 580ef2d..bef59b9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -34,6 +34,7 @@
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/pci.h>
+#include <asm/early_ioremap.h>
#include "mm.h"
#include "tcm.h"
@@ -1405,6 +1406,7 @@ void __init paging_init(const struct machine_desc *mdesc)
{
void *zero_page;
+ early_ioremap_reset();
build_mem_type_table();
prepare_page_table();
map_lowmem();
--
1.8.3.1
* [PATCH v2 4/5] arm64: initialize pgprot info earlier in boot
2014-01-07 2:35 [PATCH v2 0/5] generic early_ioremap support Mark Salter
2014-01-07 2:35 ` [PATCH v2 1/5] mm: create generic early_ioremap() support Mark Salter
2014-01-07 2:35 ` [PATCH v2 3/5] arm: add early_ioremap support Mark Salter
@ 2014-01-07 2:35 ` Mark Salter
2014-01-07 2:35 ` [PATCH v2 5/5] arm64: add early_ioremap support Mark Salter
2014-01-07 17:26 ` [PATCH v2 0/5] generic " Catalin Marinas
4 siblings, 0 replies; 11+ messages in thread
From: Mark Salter @ 2014-01-07 2:35 UTC (permalink / raw)
To: linux-arm-kernel
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot
values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The
new fixmap and early_ioremap support also needs to use these macros
before paging_init() is called. This patch moves the init_mem_pgprot()
call out of paging_init() and into setup_arch() so that pgprot_default
gets initialized in time for fixmap and early_ioremap.
Signed-off-by: Mark Salter <msalter@redhat.com>
CC: linux-arm-kernel@lists.infradead.org
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/kernel/setup.c | 2 ++
arch/arm64/mm/mmu.c | 3 +--
3 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 2494fc0..f600d40 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -27,5 +27,6 @@ typedef struct {
extern void paging_init(void);
extern void setup_mm_for_reboot(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
+extern void init_mem_pgprot(void);
#endif
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index bd9bbd0..029ecfe 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -221,6 +221,8 @@ void __init setup_arch(char **cmdline_p)
*cmdline_p = boot_command_line;
+ init_mem_pgprot();
+
parse_early_param();
arm64_memblock_init();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f557ebb..541c782 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -125,7 +125,7 @@ early_param("cachepolicy", early_cachepolicy);
/*
* Adjust the PMD section entries according to the CPU in use.
*/
-static void __init init_mem_pgprot(void)
+void __init init_mem_pgprot(void)
{
pteval_t default_pgprot;
int i;
@@ -349,7 +349,6 @@ void __init paging_init(void)
{
void *zero_page;
- init_mem_pgprot();
map_mem();
/*
--
1.8.3.1
* [PATCH v2 5/5] arm64: add early_ioremap support
2014-01-07 2:35 [PATCH v2 0/5] generic early_ioremap support Mark Salter
` (2 preceding siblings ...)
2014-01-07 2:35 ` [PATCH v2 4/5] arm64: initialize pgprot info earlier in boot Mark Salter
@ 2014-01-07 2:35 ` Mark Salter
2014-01-07 17:26 ` [PATCH v2 0/5] generic " Catalin Marinas
4 siblings, 0 replies; 11+ messages in thread
From: Mark Salter @ 2014-01-07 2:35 UTC (permalink / raw)
To: linux-arm-kernel
Add support for early IO or memory mappings which are needed
before the normal ioremap() is usable. This also adds fixmap
support for permanent fixed mappings such as that used by the
earlyprintk device register region.
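For the permanent fixed mappings, the pattern is the one used by the
early_printk change further down; sketched here out of context
(FIX_EARLYCON and set_fixmap_io() come from this patch and
asm-generic/fixmap.h; the helper name is made up):

	/* sketch: map the early console registers at a compile-time-constant address */
	static void __iomem * __init map_early_console(phys_addr_t paddr)
	{
		set_fixmap_io(FIX_EARLYCON, paddr);
		return (void __iomem *)fix_to_virt(FIX_EARLYCON);
	}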
Signed-off-by: Mark Salter <msalter@redhat.com>
CC: linux-arm-kernel@lists.infradead.org
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
---
Documentation/arm64/memory.txt | 4 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/fixmap.h | 68 ++++++++++++++++++++++++++++++++
arch/arm64/include/asm/io.h | 1 +
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/kernel/early_printk.c | 8 +++-
arch/arm64/kernel/head.S | 9 ++---
arch/arm64/kernel/setup.c | 2 +
arch/arm64/mm/ioremap.c | 85 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/mm/mmu.c | 41 -------------------
11 files changed, 170 insertions(+), 52 deletions(-)
create mode 100644 arch/arm64/include/asm/fixmap.h
diff --git a/Documentation/arm64/memory.txt b/Documentation/arm64/memory.txt
index 5e054bf..953c81e 100644
--- a/Documentation/arm64/memory.txt
+++ b/Documentation/arm64/memory.txt
@@ -35,7 +35,7 @@ ffffffbc00000000 ffffffbdffffffff 8GB vmemmap
ffffffbe00000000 ffffffbffbbfffff ~8GB [guard, future vmmemap]
-ffffffbffbc00000 ffffffbffbdfffff 2MB earlyprintk device
+ffffffbffbc00000 ffffffbffbdfffff 2MB fixed mappings
ffffffbffbe00000 ffffffbffbe0ffff 64KB PCI I/O space
@@ -60,7 +60,7 @@ fffffdfc00000000 fffffdfdffffffff 8GB vmemmap
fffffdfe00000000 fffffdfffbbfffff ~8GB [guard, future vmmemap]
-fffffdfffbc00000 fffffdfffbdfffff 2MB earlyprintk device
+fffffdfffbc00000 fffffdfffbdfffff 2MB fixed mappings
fffffdfffbe00000 fffffdfffbe0ffff 64KB PCI I/O space
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22..e66a317 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -12,6 +12,7 @@ config ARM64
select CLONE_BACKWARDS
select COMMON_CLK
select GENERIC_CLOCKEVENTS
+ select GENERIC_EARLY_IOREMAP
select GENERIC_IOMAP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 519f89f..b7f99a3 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -10,6 +10,7 @@ generic-y += delay.h
generic-y += div64.h
generic-y += dma.h
generic-y += emergency-restart.h
+generic-y += early_ioremap.h
generic-y += errno.h
generic-y += ftrace.h
generic-y += hw_irq.h
diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
new file mode 100644
index 0000000..7ad4f29
--- /dev/null
+++ b/arch/arm64/include/asm/fixmap.h
@@ -0,0 +1,68 @@
+/*
+ * fixmap.h: compile-time virtual memory allocation
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1998 Ingo Molnar
+ * Copyright (C) 2013 Mark Salter <msalter@redhat.com>
+ *
+ * Adapted from arch/x86_64 version.
+ *
+ */
+
+#ifndef _ASM_ARM64_FIXMAP_H
+#define _ASM_ARM64_FIXMAP_H
+
+#ifndef __ASSEMBLY__
+#include <linux/kernel.h>
+#include <asm/page.h>
+
+/*
+ * Here we define all the compile-time 'special' virtual
+ * addresses. The point is to have a constant address at
+ * compile time, but to set the physical address only
+ * in the boot process.
+ *
+ * These 'compile-time allocated' memory buffers are
+ * page-sized. Use set_fixmap(idx,phys) to associate
+ * physical memory with fixmap indices.
+ *
+ */
+enum fixed_addresses {
+ FIX_EARLYCON,
+ __end_of_permanent_fixed_addresses,
+
+ /*
+ * Temporary boot-time mappings, used by early_ioremap(),
+ * before ioremap() is functional.
+ */
+#ifdef CONFIG_ARM64_64K_PAGES
+#define NR_FIX_BTMAPS 4
+#else
+#define NR_FIX_BTMAPS 64
+#endif
+#define FIX_BTMAPS_SLOTS 7
+#define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
+
+ FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
+ FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
+ __end_of_fixed_addresses
+};
+
+#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+
+#define FIXMAP_PAGE_NORMAL PAGE_KERNEL_EXEC
+#define FIXMAP_PAGE_IO __pgprot(PROT_DEVICE_nGnRE)
+
+extern void __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags);
+
+#define __set_fixmap __early_set_fixmap
+
+#include <asm-generic/fixmap.h>
+
+#endif /* !__ASSEMBLY__ */
+#endif /* _ASM_ARM64_FIXMAP_H */
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 5727697..1e34831 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -27,6 +27,7 @@
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include <asm/pgtable.h>
+#include <asm/early_ioremap.h>
#include <xen/xen.h>
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3776217..50a97df 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,7 +49,7 @@
#define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1))
#define MODULES_END (PAGE_OFFSET)
#define MODULES_VADDR (MODULES_END - SZ_64M)
-#define EARLYCON_IOBASE (MODULES_VADDR - SZ_4M)
+#define FIXADDR_TOP (MODULES_VADDR - SZ_2M - PAGE_SIZE)
#define TASK_SIZE_64 (UL(1) << VA_BITS)
#ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/early_printk.c b/arch/arm64/kernel/early_printk.c
index fbb6e18..850d9a4 100644
--- a/arch/arm64/kernel/early_printk.c
+++ b/arch/arm64/kernel/early_printk.c
@@ -26,6 +26,8 @@
#include <linux/amba/serial.h>
#include <linux/serial_reg.h>
+#include <asm/fixmap.h>
+
static void __iomem *early_base;
static void (*printch)(char ch);
@@ -141,8 +143,10 @@ static int __init setup_early_printk(char *buf)
}
/* no options parsing yet */
- if (paddr)
- early_base = early_io_map(paddr, EARLYCON_IOBASE);
+ if (paddr) {
+ set_fixmap_io(FIX_EARLYCON, paddr);
+ early_base = (void __iomem *)fix_to_virt(FIX_EARLYCON);
+ }
printch = match->printch;
early_console = &early_console_dev;
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c68cca5..4b47dcb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -412,7 +412,7 @@ ENDPROC(__calc_phys_offset)
* - identity mapping to enable the MMU (low address, TTBR0)
* - first few MB of the kernel linear mapping to jump to once the MMU has
* been enabled, including the FDT blob (TTBR1)
- * - UART mapping if CONFIG_EARLY_PRINTK is enabled (TTBR1)
+ * - pgd entry for fixed mappings (TTBR1)
*/
__create_page_tables:
pgtbl x25, x26, x24 // idmap_pg_dir and swapper_pg_dir addresses
@@ -465,15 +465,12 @@ __create_page_tables:
sub x6, x6, #1 // inclusive range
create_block_map x0, x7, x3, x5, x6
1:
-#ifdef CONFIG_EARLY_PRINTK
/*
- * Create the pgd entry for the UART mapping. The full mapping is done
- * later based earlyprintk kernel parameter.
+ * Create the pgd entry for the fixed mappings.
*/
- ldr x5, =EARLYCON_IOBASE // UART virtual address
+ ldr x5, =FIXADDR_TOP // Fixed mapping virtual address
add x0, x26, #2 * PAGE_SIZE // section table address
create_pgd_entry x26, x0, x5, x6, x7
-#endif
ret
ENDPROC(__create_page_tables)
.ltorg
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 029ecfe..5516f54 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -42,6 +42,7 @@
#include <linux/of_fdt.h>
#include <linux/of_platform.h>
+#include <asm/fixmap.h>
#include <asm/cputype.h>
#include <asm/elf.h>
#include <asm/cputable.h>
@@ -222,6 +223,7 @@ void __init setup_arch(char **cmdline_p)
*cmdline_p = boot_command_line;
init_mem_pgprot();
+ early_ioremap_init();
parse_early_param();
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index 2bb1d58..7ec3283 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -25,6 +25,10 @@
#include <linux/vmalloc.h>
#include <linux/io.h>
+#include <asm/fixmap.h>
+#include <asm/tlbflush.h>
+#include <asm/pgalloc.h>
+
static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
pgprot_t prot, void *caller)
{
@@ -98,3 +102,84 @@ void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
__builtin_return_address(0));
}
EXPORT_SYMBOL(ioremap_cache);
+
+#ifndef CONFIG_ARM64_64K_PAGES
+static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
+#endif
+
+static inline pmd_t * __init early_ioremap_pmd(unsigned long addr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+
+ pgd = pgd_offset_k(addr);
+ BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
+
+ pud = pud_offset(pgd, addr);
+ BUG_ON(pud_none(*pud) || pud_bad(*pud));
+
+ return pmd_offset(pud, addr);
+}
+
+static inline pte_t * __init early_ioremap_pte(unsigned long addr)
+{
+ pmd_t *pmd = early_ioremap_pmd(addr);
+
+ BUG_ON(pmd_none(*pmd) || pmd_bad(*pmd));
+
+ return pte_offset_kernel(pmd, addr);
+}
+
+void __init early_ioremap_init(void)
+{
+ pmd_t *pmd;
+
+ pmd = early_ioremap_pmd(fix_to_virt(FIX_BTMAP_BEGIN));
+#ifndef CONFIG_ARM64_64K_PAGES
+ /* need to populate pmd for 4k pagesize only */
+ pmd_populate_kernel(&init_mm, pmd, bm_pte);
+#endif
+ /*
+ * The boot-ioremap range spans multiple pmds, for which
+ * we are not prepared:
+ */
+ BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+ != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
+
+ if (pmd != early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END))) {
+ WARN_ON(1);
+ pr_warn("pmd %p != %p\n",
+ pmd, early_ioremap_pmd(fix_to_virt(FIX_BTMAP_END)));
+ pr_warn("fix_to_virt(FIX_BTMAP_BEGIN): %08lx\n",
+ fix_to_virt(FIX_BTMAP_BEGIN));
+ pr_warn("fix_to_virt(FIX_BTMAP_END): %08lx\n",
+ fix_to_virt(FIX_BTMAP_END));
+
+ pr_warn("FIX_BTMAP_END: %d\n", FIX_BTMAP_END);
+ pr_warn("FIX_BTMAP_BEGIN: %d\n",
+ FIX_BTMAP_BEGIN);
+ }
+
+ early_ioremap_setup();
+}
+
+void __init __early_set_fixmap(enum fixed_addresses idx,
+ phys_addr_t phys, pgprot_t flags)
+{
+ unsigned long addr = __fix_to_virt(idx);
+ pte_t *pte;
+
+ if (idx >= __end_of_fixed_addresses) {
+ BUG();
+ return;
+ }
+
+ pte = early_ioremap_pte(addr);
+
+ if (pgprot_val(flags))
+ set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
+ else {
+ pte_clear(&init_mm, addr, pte);
+ flush_tlb_kernel_range(addr, addr+PAGE_SIZE);
+ }
+}
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 541c782..7b345e3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -252,47 +252,6 @@ static void __init create_mapping(phys_addr_t phys, unsigned long virt,
} while (pgd++, addr = next, addr != end);
}
-#ifdef CONFIG_EARLY_PRINTK
-/*
- * Create an early I/O mapping using the pgd/pmd entries already populated
- * in head.S as this function is called too early to allocated any memory. The
- * mapping size is 2MB with 4KB pages or 64KB or 64KB pages.
- */
-void __iomem * __init early_io_map(phys_addr_t phys, unsigned long virt)
-{
- unsigned long size, mask;
- bool page64k = IS_ENABLED(CONFIG_ARM64_64K_PAGES);
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
-
- /*
- * No early pte entries with !ARM64_64K_PAGES configuration, so using
- * sections (pmd).
- */
- size = page64k ? PAGE_SIZE : SECTION_SIZE;
- mask = ~(size - 1);
-
- pgd = pgd_offset_k(virt);
- pud = pud_offset(pgd, virt);
- if (pud_none(*pud))
- return NULL;
- pmd = pmd_offset(pud, virt);
-
- if (page64k) {
- if (pmd_none(*pmd))
- return NULL;
- pte = pte_offset_kernel(pmd, virt);
- set_pte(pte, __pte((phys & mask) | PROT_DEVICE_nGnRE));
- } else {
- set_pmd(pmd, __pmd((phys & mask) | PROT_SECT_DEVICE_nGnRE));
- }
-
- return (void __iomem *)((virt & mask) + (phys & ~mask));
-}
-#endif
-
static void __init map_mem(void)
{
struct memblock_region *reg;
--
1.8.3.1
* [PATCH v2 1/5] mm: create generic early_ioremap() support
2014-01-07 2:35 ` [PATCH v2 1/5] mm: create generic early_ioremap() support Mark Salter
@ 2014-01-07 15:25 ` Catalin Marinas
2014-01-07 23:11 ` H. Peter Anvin
1 sibling, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2014-01-07 15:25 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jan 07, 2014 at 02:35:16AM +0000, Mark Salter wrote:
> This patch creates a generic implementation of early_ioremap() support
> based on the existing x86 implementation. early_ioremp() is useful for
> early boot code which needs to temporarily map I/O or memory regions
> before normal mapping functions such as ioremap() are available.
>
> There is one difference from the existing x86 implementation which
> should be noted. The generic early_memremap() function does not return
> an __iomem pointer and a new early_memunmap() function has been added
> to act as a wrapper for early_iounmap() but with a non __iomem pointer
> passed in. This is in line with the first patch of this series:
>
> https://lkml.org/lkml/2013/12/22/69
>
> Signed-off-by: Mark Salter <msalter@redhat.com>
> CC: x86 at kernel.org
> CC: linux-arm-kernel at lists.infradead.org
> CC: Andrew Morton <akpm@linux-foundation.org>
> CC: Arnd Bergmann <arnd@arndb.de>
> CC: Ingo Molnar <mingo@kernel.org>
> CC: Thomas Gleixner <tglx@linutronix.de>
> CC: "H. Peter Anvin" <hpa@zytor.com>
> CC: Russell King <linux@arm.linux.org.uk>
> CC: Catalin Marinas <catalin.marinas@arm.com>
> CC: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* [PATCH v2 3/5] arm: add early_ioremap support
2014-01-07 2:35 ` [PATCH v2 3/5] arm: add early_ioremap support Mark Salter
@ 2014-01-07 17:23 ` Catalin Marinas
0 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2014-01-07 17:23 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jan 07, 2014 at 02:35:18AM +0000, Mark Salter wrote:
> This patch uses the generic early_ioremap code to implement
> early_ioremap for ARM. The ARM-specific bits come mostly from
> an earlier patch from Leif Lindholm <leif.lindholm@linaro.org>
> here:
>
> https://lkml.org/lkml/2013/10/3/279
>
> Signed-off-by: Mark Salter <msalter@redhat.com>
> Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
> CC: linux-arm-kernel at lists.infradead.org
> CC: Russell King <linux@arm.linux.org.uk>
> CC: Catalin Marinas <catalin.marinas@arm.com>
> CC: Will Deacon <will.deacon@arm.com>
> CC: Arnd Bergmann <arnd@arndb.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* [PATCH v2 0/5] generic early_ioremap support
2014-01-07 2:35 [PATCH v2 0/5] generic early_ioremap support Mark Salter
` (3 preceding siblings ...)
2014-01-07 2:35 ` [PATCH v2 5/5] arm64: add early_ioremap support Mark Salter
@ 2014-01-07 17:26 ` Catalin Marinas
4 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2014-01-07 17:26 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, Jan 07, 2014 at 02:35:15AM +0000, Mark Salter wrote:
> This patch series takes the common bits from the x86 early ioremap
> implementation and creates a generic implementation which may be used
> by other architectures. The early ioremap interfaces are intended for
> situations where boot code needs to make temporary virtual mappings
> before the normal ioremap interfaces are available. Typically, this
> means before paging_init() has run.
>
> These patches are layered on top of generic fixmap patches which
> were discussed here (and are in the akpm tree):
>
> http://lkml.org/lkml/2013/11/25/474
>
> This is version 2 of the patch series. These patches (and underlying
> fixmap patches) may be found at:
>
> git://github.com/mosalter/linux.git (early-ioremap-v2 branch)
The patches look fine to me. I haven't acked the arm64 patches as I'll
eventually merge/sign them off once the first patch in the series goes
in.
Thanks.
--
Catalin
* [PATCH v2 1/5] mm: create generic early_ioremap() support
2014-01-07 2:35 ` [PATCH v2 1/5] mm: create generic early_ioremap() support Mark Salter
2014-01-07 15:25 ` Catalin Marinas
@ 2014-01-07 23:11 ` H. Peter Anvin
2014-01-08 16:09 ` Mark Salter
1 sibling, 1 reply; 11+ messages in thread
From: H. Peter Anvin @ 2014-01-07 23:11 UTC (permalink / raw)
To: linux-arm-kernel
On 01/06/2014 06:35 PM, Mark Salter wrote:
>
> There is one difference from the existing x86 implementation which
> should be noted. The generic early_memremap() function does not return
> an __iomem pointer and a new early_memunmap() function has been added
> to act as a wrapper for early_iounmap() but with a non __iomem pointer
> passed in. This is in line with the first patch of this series:
>
This makes a lot of sense. However, I would suggest that we preface the
patch series with a single patch changing the signature for the existing
x86 function, that way this change becomes bisectable.
-hpa
* [PATCH v2 1/5] mm: create generic early_ioremap() support
2014-01-07 23:11 ` H. Peter Anvin
@ 2014-01-08 16:09 ` Mark Salter
2014-01-09 10:15 ` Matt Fleming
0 siblings, 1 reply; 11+ messages in thread
From: Mark Salter @ 2014-01-08 16:09 UTC (permalink / raw)
To: linux-arm-kernel
On Tue, 2014-01-07 at 15:11 -0800, H. Peter Anvin wrote:
> On 01/06/2014 06:35 PM, Mark Salter wrote:
> >
> > There is one difference from the existing x86 implementation which
> > should be noted. The generic early_memremap() function does not return
> > an __iomem pointer and a new early_memunmap() function has been added
> > to act as a wrapper for early_iounmap() but with a non __iomem pointer
> > passed in. This is in line with the first patch of this series:
> >
>
> This makes a lot of sense. However, I would suggest that we preface the
> patch series with a single patch changing the signature for the existing
> x86 function, that way this change becomes bisectable.
Ok, that sounds like a good idea. I'm uncertain how best to coordinate
with Dave Young's patch series to avoid conflicts. His first patch does
the signature change (and adds the early_memunmap function but that
isn't used anywhere):
https://lkml.org/lkml/2013/12/20/143
Any thoughts on how best to avoid potential merge conflicts here?
* [PATCH v2 1/5] mm: create generic early_ioremap() support
2014-01-08 16:09 ` Mark Salter
@ 2014-01-09 10:15 ` Matt Fleming
0 siblings, 0 replies; 11+ messages in thread
From: Matt Fleming @ 2014-01-09 10:15 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, 08 Jan, at 11:09:48AM, Mark Salter wrote:
>
> Ok, that sounds like a good idea. I'm uncertain how best to coordinate
> with Dave Young's patch series to avoid conflicts. His first patch does
> the signature change (and adds the early_memunmap function but that
> isn't used anywhere):
>
> https://lkml.org/lkml/2013/12/20/143
>
> Any thoughts on how best to avoid potential merge conflicts here?
Dave's patch hasn't been picked up by anyone so far, so you could
incorporate it into your series.
--
Matt Fleming, Intel Open Source Technology Center