linux-arm-kernel.lists.infradead.org archive mirror
* [RFC 00/18] generic arm needed for msm
@ 2010-01-11 22:47 Daniel Walker
  2010-01-11 22:47 ` [RFC 01/18] arm: msm: allow ARCH_MSM to have v7 cpus Daniel Walker
                   ` (17 more replies)
  0 siblings, 18 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

This is a lot of generic arm code that has been sitting here at Qualcomm
for some time. Most of it seems fairly reasonable, but I figured I would send it
out for some review. I need the bulk of it merged eventually, in some form, for
2.6.34.

Daniel Walker (2):
  arm: msm: allow ARCH_MSM to have v7 cpus
  arm: msm: add oprofile pmu support

Dave Estes (3):
  arm: vfp: Add additional vfp interfaces
  arm: mm: Add SW emulation for ARM domain manager feature
  arm: mm: qsd8x50: Fix incorrect permission faults

Larry Bassel (3):
  arm: msm: implement ioremap_strongly_ordered
  arm: msm: implement proper dmb() for 7x27
  arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.

Praveen Chidambaram (1):
  arm: msm: Enable frequency scaling.

Steve Muckle (6):
  arm: boot: remove old ARM ID for QSD
  arm: mm: retry on QSD icache parity errors
  arm: mm: support error reporting in L1/L2 caches on QSD
  arm: msm: add ARCH_MSM_SCORPION to CPU_V7
  arm: msm: define HAVE_CLK for ARCH_MSM
  arm: msm: add arch_has_speculative_dfetch()

Taniya Das (1):
  arm: msm: add v7 support for compiler version-4.1.1

Willie Ruan (2):
  arm: cache-l2x0: add l2x0 suspend and resume functions
  arm: mm: enable L2X0 to use L2 cache on MSM7X27

 Documentation/arm/msm/emulate_domain_manager.txt |  282 ++++++++++++++++
 arch/arm/Kconfig                                 |   20 +-
 arch/arm/Makefile                                |    8 +-
 arch/arm/boot/compressed/head.S                  |    2 +
 arch/arm/include/asm/dma-mapping.h               |   11 +-
 arch/arm/include/asm/domain.h                    |   13 +
 arch/arm/include/asm/hardware/cache-l2x0.h       |    3 +
 arch/arm/include/asm/io.h                        |    2 +
 arch/arm/include/asm/mach/map.h                  |    1 +
 arch/arm/include/asm/memory.h                    |    7 +
 arch/arm/include/asm/system.h                    |   11 +-
 arch/arm/include/asm/vfp.h                       |    6 +
 arch/arm/kernel/entry-armv.S                     |    8 +
 arch/arm/kernel/head.S                           |    8 +
 arch/arm/mach-msm/Kconfig                        |    2 +
 arch/arm/mach-msm/include/mach/memory.h          |    3 +
 arch/arm/mm/Kconfig                              |   19 +-
 arch/arm/mm/Makefile                             |    1 +
 arch/arm/mm/abort-ev7.S                          |   78 +++++
 arch/arm/mm/cache-l2x0.c                         |   29 ++
 arch/arm/mm/emulate_domain_manager-v7.c          |  386 ++++++++++++++++++++++
 arch/arm/mm/fault.c                              |   50 +++-
 arch/arm/mm/mmu.c                                |    6 +
 arch/arm/mm/proc-v7.S                            |   21 ++
 arch/arm/oprofile/op_model_v6.c                  |    2 +
 arch/arm/vfp/vfpmodule.c                         |   37 ++-
 26 files changed, 995 insertions(+), 21 deletions(-)
 create mode 100644 Documentation/arm/msm/emulate_domain_manager.txt
 create mode 100644 arch/arm/mm/emulate_domain_manager-v7.c

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 01/18] arm: msm: allow ARCH_MSM to have v7 cpus
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 02/18] arm: msm: add oprofile pmu support Daniel Walker
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

ARCH_MSM supports armv7 cpus, so we've pushed the CPU_V6/CPU_V7 selection
down into arch/arm/mach-msm/Kconfig.

Also update the description to be a bit more accurate.

Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/Kconfig          |   10 +++++-----
 arch/arm/mach-msm/Kconfig |    2 ++
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 233a222..c752b7d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -573,14 +573,14 @@ config ARCH_PXA
 
 config ARCH_MSM
 	bool "Qualcomm MSM"
-	select CPU_V6
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	help
-	  Support for Qualcomm MSM7K based systems.  This runs on the ARM11
-	  apps processor of the MSM7K and depends on a shared memory
-	  interface to the ARM9 modem processor which runs the baseband stack
-	  and controls some vital subsystems (clock and power control, etc).
+	  Support for Qualcomm MSM/QSD based systems.  This runs on the
+	  apps processor of the MSM/QSD and depends on a shared memory
+	  interface to the modem processor which runs the baseband
+	  stack and controls some vital subsystems
+	  (clock and power control, etc).
 
 config ARCH_RPC
 	bool "RiscPC"
diff --git a/arch/arm/mach-msm/Kconfig b/arch/arm/mach-msm/Kconfig
index f780086..b9fd5c5 100644
--- a/arch/arm/mach-msm/Kconfig
+++ b/arch/arm/mach-msm/Kconfig
@@ -29,12 +29,14 @@ endchoice
 
 config MACH_HALIBUT
 	depends on ARCH_MSM
+	select CPU_V6
 	default y
 	bool "Halibut Board (QCT SURF7201A)"
 	help
 	  Support for the Qualcomm SURF7201A eval board.
 
 config MACH_TROUT
+	select CPU_V6
 	default y
 	bool "HTC Dream (aka trout)"
 	help
-- 
1.6.3.3


* [RFC 02/18] arm: msm: add oprofile pmu support
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
  2010-01-11 22:47 ` [RFC 01/18] arm: msm: allow ARCH_MSM to have v7 cpus Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 03/18] arm: boot: remove old ARM ID for QSD Daniel Walker
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

Add oprofile PMU support for MSM.

Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/oprofile/op_model_v6.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/arm/oprofile/op_model_v6.c b/arch/arm/oprofile/op_model_v6.c
index f7d2ec5..b240adb 100644
--- a/arch/arm/oprofile/op_model_v6.c
+++ b/arch/arm/oprofile/op_model_v6.c
@@ -32,6 +32,8 @@
 static int irqs[] = {
 #ifdef CONFIG_ARCH_OMAP2
 	3,
+#elif defined(CONFIG_ARCH_MSM_ARM11)
+	INT_ARM11_PMU,
 #endif
 #ifdef CONFIG_ARCH_BCMRING
 	IRQ_PMUIRQ, /* for BCMRING, ARM PMU interrupt is 43 */
-- 
1.6.3.3


* [RFC 03/18] arm: boot: remove old ARM ID for QSD
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
  2010-01-11 22:47 ` [RFC 01/18] arm: msm: allow ARCH_MSM to have v7 cpus Daniel Walker
  2010-01-11 22:47 ` [RFC 02/18] arm: msm: add oprofile pmu support Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-15 21:26   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions Daniel Walker
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

The mask and ID pattern for older ARM IDs in the kernel
decompressor matches the CPU ID for Scorpion, causing the
v7 caching routines not to be run and kernel decompression
to take significantly longer.

QSD may eventually use CPUs other than Scorpion, but they
will adhere to the new ARM CPU ID format, which is
incompatible with the entry for older ARM CPU IDs.

Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/boot/compressed/head.S |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
index d356af7..36c321c 100644
--- a/arch/arm/boot/compressed/head.S
+++ b/arch/arm/boot/compressed/head.S
@@ -620,6 +620,7 @@ proc_types:
 @		b	__arm6_mmu_cache_off
 @		b	__armv3_mmu_cache_flush
 
+#ifndef CONFIG_ARCH_MSM_SCORPION
 		.word	0x00000000		@ old ARM ID
 		.word	0x0000f000
 		mov	pc, lr
@@ -628,6 +629,7 @@ proc_types:
  THUMB(		nop				)
 		mov	pc, lr
  THUMB(		nop				)
+#endif
 
 		.word	0x41007000		@ ARM7/710
 		.word	0xfff8fe00
-- 
1.6.3.3


* [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (2 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 03/18] arm: boot: remove old ARM ID for QSD Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:44   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 05/18] arm: msm: implement ioremap_strongly_ordered Daniel Walker
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Willie Ruan <wruan@quicinc.com>

Suspend function should be called before L2 cache power is turned off
to save power. Resume function should be called when the power is
reapplied.

Signed-off-by: Willie Ruan <wruan@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/include/asm/hardware/cache-l2x0.h |    3 ++
 arch/arm/mm/cache-l2x0.c                   |   29 ++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/hardware/cache-l2x0.h b/arch/arm/include/asm/hardware/cache-l2x0.h
index cdb9022..a86b948 100644
--- a/arch/arm/include/asm/hardware/cache-l2x0.h
+++ b/arch/arm/include/asm/hardware/cache-l2x0.h
@@ -55,4 +55,7 @@
 extern void __init l2x0_init(void __iomem *base, __u32 aux_val, __u32 aux_mask);
 #endif
 
+extern void l2x0_suspend(void);
+extern void l2x0_resume(int collapsed);
+
 #endif
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index cb8fc65..05765f6 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -2,6 +2,7 @@
  * arch/arm/mm/cache-l2x0.c - L210/L220 cache controller support
  *
  * Copyright (C) 2007 ARM Limited
+ * Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -26,6 +27,7 @@
 #define CACHE_LINE_SIZE		32
 
 static void __iomem *l2x0_base;
+static uint32_t aux_ctrl_save;
 static DEFINE_SPINLOCK(l2x0_lock);
 
 static inline void cache_wait(void __iomem *reg, unsigned long mask)
@@ -54,6 +56,13 @@ static inline void l2x0_inv_all(void)
 	spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
+static inline void l2x0_flush_all(void)
+{
+	/* clean and invalidate all ways */
+	sync_writel(0xff, L2X0_CLEAN_INV_WAY, 0xff);
+	cache_sync();
+}
+
 static void l2x0_inv_range(unsigned long start, unsigned long end)
 {
 	void __iomem *base = l2x0_base;
@@ -176,3 +185,23 @@ void __init l2x0_init(void __iomem *base, __u32 aux_val, __u32 aux_mask)
 
 	printk(KERN_INFO "L2X0 cache controller enabled\n");
 }
+
+void l2x0_suspend(void)
+{
+	/* Save aux control register value */
+	aux_ctrl_save = readl(l2x0_base + L2X0_AUX_CTRL);
+	/* Flush all cache */
+	l2x0_flush_all();
+	/* Disable the cache */
+	writel(0, l2x0_base + L2X0_CTRL);
+}
+
+void l2x0_resume(int collapsed)
+{
+	if (collapsed)
+		/* Restore aux control register value */
+		writel(aux_ctrl_save, l2x0_base + L2X0_AUX_CTRL);
+
+	/* Enable the cache */
+	writel(1, l2x0_base + L2X0_CTRL);
+}
-- 
1.6.3.3


* [RFC 05/18] arm: msm: implement ioremap_strongly_ordered
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (3 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:37   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 06/18] arm: msm: implement proper dmb() for 7x27 Daniel Walker
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Larry Bassel <lbassel@quicinc.com>

Both the clean-and-invalidate functionality needed
for the video encoder and the 7x27 barrier code
need a strongly ordered mapping set up so that a
write to strongly ordered memory can be performed.
The generic ARM code does not provide this.

The generic ARM code does provide MT_DEVICE, which starts
as strongly ordered, but the code later turns the buffered flag
on for ARMv6 in order to make the device shared. This is not
suitable for my purpose, so this patch adds code for a
MT_DEVICE_STRONGLY_ORDERED mapping type.

Signed-off-by: Larry Bassel <lbassel@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/include/asm/io.h       |    2 ++
 arch/arm/include/asm/mach/map.h |    1 +
 arch/arm/mm/mmu.c               |    6 ++++++
 3 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index d2a59cf..05b281e 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -222,12 +222,14 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
 #ifndef __arch_ioremap
 #define ioremap(cookie,size)		__arm_ioremap(cookie, size, MT_DEVICE)
 #define ioremap_nocache(cookie,size)	__arm_ioremap(cookie, size, MT_DEVICE)
+#define ioremap_strongly_ordered(cookie,size)  __arm_ioremap(cookie, size, MT_DEVICE_STRONGLY_ORDERED)
 #define ioremap_cached(cookie,size)	__arm_ioremap(cookie, size, MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arm_ioremap(cookie, size, MT_DEVICE_WC)
 #define iounmap(cookie)			__iounmap(cookie)
 #else
 #define ioremap(cookie,size)		__arch_ioremap((cookie), (size), MT_DEVICE)
 #define ioremap_nocache(cookie,size)	__arch_ioremap((cookie), (size), MT_DEVICE)
+#define ioremap_strongly_ordered(cookie,size)  __arch_ioremap((cookie), (size), MT_DEVICE_STRONGLY_ORDERED)
 #define ioremap_cached(cookie,size)	__arch_ioremap((cookie), (size), MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arch_ioremap((cookie), (size), MT_DEVICE_WC)
 #define iounmap(cookie)			__arch_iounmap(cookie)
diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index 742c2aa..ddff29f 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -27,6 +27,7 @@ struct map_desc {
 #define MT_MEMORY		9
 #define MT_ROM			10
 #define MT_MEMORY_NONCACHED	11
+#define MT_DEVICE_STRONGLY_ORDERED 12
 
 #ifdef CONFIG_MMU
 extern void iotable_init(struct map_desc *, int);
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 1708da8..bcbf774 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -201,6 +201,12 @@ static struct mem_type mem_types[] = {
 		.prot_sect	= PROT_SECT_DEVICE | PMD_SECT_S,
 		.domain		= DOMAIN_IO,
 	},
+	[MT_DEVICE_STRONGLY_ORDERED] = {  /* Guaranteed strongly ordered */
+		.prot_pte	= PROT_PTE_DEVICE,
+		.prot_l1	= PMD_TYPE_TABLE,
+		.prot_sect	= PROT_SECT_DEVICE | PMD_SECT_UNCACHED,
+		.domain		= DOMAIN_IO,
+	},
 	[MT_DEVICE_NONSHARED] = { /* ARMv6 non-shared device */
 		.prot_pte	= PROT_PTE_DEVICE | L_PTE_MT_DEV_NONSHARED,
 		.prot_l1	= PMD_TYPE_TABLE,
-- 
1.6.3.3


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (4 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 05/18] arm: msm: implement ioremap_strongly_ordered Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:39   ` Russell King - ARM Linux
  2010-01-19 17:16   ` Jamie Lokier
  2010-01-11 22:47 ` [RFC 07/18] arm: mm: retry on QSD icache parity errors Daniel Walker
                   ` (11 subsequent siblings)
  17 siblings, 2 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Larry Bassel <lbassel@quicinc.com>

For 7x27 it is necessary to write to strongly
ordered memory after executing the coprocessor 15
dmb instruction.

This is only for data barrier dmb().
Note that the test for 7x27 is done on all MSM platforms
(even ones such as 7201a whose kernel is distinct from
that of 7x25/7x27).

Acked-by: Willie Ruan <wruan@quicinc.com>
Signed-off-by: Larry Bassel <lbassel@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/include/asm/system.h |   11 +++++++++--
 1 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 058e7e9..55d942b 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -3,6 +3,8 @@
 
 #ifdef __KERNEL__
 
+#include <asm/memory.h>
+
 #define CPU_ARCH_UNKNOWN	0
 #define CPU_ARCH_ARMv3		1
 #define CPU_ARCH_ARMv4		2
@@ -114,6 +116,10 @@ extern unsigned int user_debug;
 #define vectors_high()	(0)
 #endif
 
+#ifndef arch_barrier_extra
+#define arch_barrier_extra() do {} while (0)
+#endif
+
 #if __LINUX_ARM_ARCH__ >= 7
 #define isb() __asm__ __volatile__ ("isb" : : : "memory")
 #define dsb() __asm__ __volatile__ ("dsb" : : : "memory")
@@ -123,8 +129,9 @@ extern unsigned int user_debug;
 				    : : "r" (0) : "memory")
 #define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
 				    : : "r" (0) : "memory")
-#define dmb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
-				    : : "r" (0) : "memory")
+#define dmb() do { __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
+					 : : "r" (0) : "memory"); \
+		arch_barrier_extra(); } while (0)
 #elif defined(CONFIG_CPU_FA526)
 #define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
 				    : : "r" (0) : "memory")
-- 
1.6.3.3


* [RFC 07/18] arm: mm: retry on QSD icache parity errors
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (5 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 06/18] arm: msm: implement proper dmb() for 7x27 Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-18 18:42   ` Ashwin Chaugule
  2010-01-11 22:47 ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Daniel Walker
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

Parity errors in the icache on QSD can be worked around either by
retrying the access, or invalidating the icache. The whole icache
must be invalidated since the data abort is imprecise (the faulting
address is not known).

Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/fault.c |   36 +++++++++++++++++++++++++++++++++++-
 1 files changed, 35 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index 10e0680..bea3e75 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -2,6 +2,7 @@
  *  linux/arch/arm/mm/fault.c
  *
  *  Copyright (C) 1995  Linus Torvalds
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *  Modifications for ARM processor (c) 1995-2004 Russell King
  *
  * This program is free software; you can redistribute it and/or modify
@@ -442,6 +443,39 @@ do_bad(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	return 1;
 }
 
+static int
+do_imprecise_ext(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+{
+#ifdef CONFIG_ARCH_MSM_SCORPION
+	unsigned int regval;
+	static unsigned char flush_toggle;
+
+	asm("mrc p15, 0, %0, c5, c1, 0\n" /* read adfsr for fault status */
+	    : "=r" (regval));
+	if (regval == 0x2) {
+		/* Fault was caused by icache parity error. Alternate
+		/* Fault was caused by an icache parity error. Alternate
+		 * between retrying the access and flushing the icache. */
+		if (flush_toggle)
+			asm("mcr p15, 0, %0, c7, c5, 0\n"
+			    :
+			    : "r" (regval)); /* input value is ignored */
+		/* Clear fault in EFSR. */
+		asm("mcr p15, 7, %0, c15, c0, 1\n"
+		    :
+		    : "r" (regval));
+		/* Clear fault in ADFSR. */
+		regval = 0;
+		asm("mcr p15, 0, %0, c5, c1, 0\n"
+		    :
+		    : "r" (regval));
+		return 0;
+	}
+#endif
+
+	return 1;
+}
+
 static struct fsr_info {
 	int	(*fn)(unsigned long addr, unsigned int fsr, struct pt_regs *regs);
 	int	sig;
@@ -479,7 +513,7 @@ static struct fsr_info {
 	{ do_bad,		SIGBUS,  0,		"unknown 19"			   },
 	{ do_bad,		SIGBUS,  0,		"lock abort"			   }, /* xscale */
 	{ do_bad,		SIGBUS,  0,		"unknown 21"			   },
-	{ do_bad,		SIGBUS,  BUS_OBJERR,	"imprecise external abort"	   }, /* xscale */
+	{ do_imprecise_ext,	SIGBUS,  BUS_OBJERR,	"imprecise external abort"	   }, /* xscale */
 	{ do_bad,		SIGBUS,  0,		"unknown 23"			   },
 	{ do_bad,		SIGBUS,  0,		"dcache parity error"		   }, /* xscale */
 	{ do_bad,		SIGBUS,  0,		"unknown 25"			   },
-- 
1.6.3.3


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (6 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 07/18] arm: mm: retry on QSD icache parity errors Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:45   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 09/18] arm: mm: support error reporting in L1/L2 caches on QSD Daniel Walker
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Larry Bassel <lbassel@quicinc.com>

This change improves the following LMBench benchmarks
by over 15%:

System Call Latency
Signal Handling Latency
Fault Latency
Inter-process Communication Latency
Inter-process Communication Bandwidth
Random Number Generation Latency

Acked-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Larry Bassel <lbassel@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/proc-v7.S |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 3a28521..b331f23 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -2,6 +2,7 @@
  *  linux/arch/arm/mm/proc-v7.S
  *
  *  Copyright (C) 2001 Deep Blue Solutions Ltd.
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -237,6 +238,10 @@ __v7_setup:
 	mcr	p15, 0, r4, c2, c0, 1		@ load TTB1
 	mov	r10, #0x1f			@ domains 0, 1 = manager
 	mcr	p15, 0, r10, c3, c0, 0		@ load domain access register
+#ifdef CONFIG_ARCH_MSM_SCORPION
+	mov     r0, #0x77
+	mcr     p15, 3, r0, c15, c0, 3          @ set L2CR1
+#endif
 	/*
 	 * Memory region attributes with SCTLR.TRE=1
 	 *
-- 
1.6.3.3


* [RFC 09/18] arm: mm: support error reporting in L1/L2 caches on QSD
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (7 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 10/18] arm: mm: enable L2X0 to use L2 cache on MSM7X27 Daniel Walker
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

The Scorpion processor supports reporting L2 errors, L1 icache parity
errors, and L1 dcache parity errors as imprecise external aborts. If
this option is not enabled these errors will go unreported and data
corruption will occur.

Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/Kconfig   |    8 ++++++++
 arch/arm/mm/proc-v7.S |    8 ++++++++
 2 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index baf6384..8ae3fce 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -687,6 +687,14 @@ config CPU_DCACHE_SIZE
 	  If your SoC is configured to have a different size, define the value
 	  here with proper conditions.
 
+config CPU_CACHE_ERR_REPORT
+	bool "Report errors in the L1 and L2 caches"
+	depends on ARCH_MSM_SCORPION
+	default y
+	help
+	  Say Y here to have errors in the L1 and L2 caches reported as
+	  imprecise data aborts.
+
 config CPU_DCACHE_WRITETHROUGH
 	bool "Force write through D-cache"
 	depends on (CPU_ARM740T || CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM940T || CPU_ARM946E || CPU_ARM1020 || CPU_FA526) && !CPU_DCACHE_DISABLE
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index b331f23..88da392 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -241,6 +241,14 @@ __v7_setup:
 #ifdef CONFIG_ARCH_MSM_SCORPION
 	mov     r0, #0x77
 	mcr     p15, 3, r0, c15, c0, 3          @ set L2CR1
+
+	mrc     p15, 0, r0, c1, c0, 1           @ read ACTLR
+#ifdef CONFIG_CPU_CACHE_ERR_REPORT
+	orr     r0, r0, #0x37                   @ turn on L1/L2 error reporting
+#else
+	bic     r0, r0, #0x37
+#endif
+	mcr     p15, 0, r0, c1, c0, 1           @ write ACTLR
 #endif
 	/*
 	 * Memory region attributes with SCTLR.TRE=1
-- 
1.6.3.3


* [RFC 10/18] arm: mm: enable L2X0 to use L2 cache on MSM7X27
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (8 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 09/18] arm: mm: support error reporting in L1/L2 caches on QSD Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7 Daniel Walker
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Willie Ruan <wruan@quicinc.com>

Acked-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Willie Ruan <wruan@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/Kconfig |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 8ae3fce..155c7a3 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -762,7 +762,8 @@ config CACHE_FEROCEON_L2_WRITETHROUGH
 config CACHE_L2X0
 	bool "Enable the L2x0 outer cache controller"
 	depends on REALVIEW_EB_ARM11MP || MACH_REALVIEW_PB11MP || MACH_REALVIEW_PB1176 || \
-		   REALVIEW_EB_A9MP || ARCH_MX35 || ARCH_MX31 || MACH_REALVIEW_PBX || ARCH_NOMADIK
+		   REALVIEW_EB_A9MP || ARCH_MX35 || ARCH_MX31 || MACH_REALVIEW_PBX || ARCH_NOMADIK || \
+		   MACH_MSM7X27_SURF || MACH_MSM7X27_FFA
 	default y
 	select OUTER_CACHE
 	help
-- 
1.6.3.3


* [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (9 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 10/18] arm: mm: enable L2X0 to use L2 cache on MSM7X27 Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:13   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 12/18] arm: msm: Enable frequency scaling Daniel Walker
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

ARCH_MSM_SCORPION supports Qualcomm SnapDragon chipsets.

Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/Kconfig |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 155c7a3..c721880 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -409,7 +409,8 @@ config CPU_32v6K
 
 # ARMv7
 config CPU_V7
-	bool "Support ARM V7 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX
+	bool "Support ARM V7 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX || ARCH_MSM_SCORPION
+	default y if ARCH_MSM_SCORPION
 	select CPU_32v6K
 	select CPU_32v7
 	select CPU_ABRT_EV7
-- 
1.6.3.3


* [RFC 12/18] arm: msm: Enable frequency scaling.
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (10 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7 Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 13/18] arm: msm: define HAVE_CLK for ARCH_MSM Daniel Walker
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Praveen Chidambaram <pchidamb@quicinc.com>

Acked-by: Saravana Kannan <skannan@quicinc.com>
Signed-off-by: Praveen Chidambaram <pchidamb@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/Kconfig |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c752b7d..2db3739 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -573,6 +573,7 @@ config ARCH_PXA
 
 config ARCH_MSM
 	bool "Qualcomm MSM"
+	select ARCH_HAS_CPUFREQ
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	help
@@ -1437,6 +1438,14 @@ source "drivers/cpuidle/Kconfig"
 
 endmenu
 
+config CPU_FREQ_MSM
+	bool
+	depends on CPU_FREQ && ARCH_MSM
+	default y
+	help
+	  This enables the CPUFreq driver for Qualcomm CPUs.
+	  If in doubt, say Y.
+
 menu "Floating point emulation"
 
 comment "At least one emulation must be selected"
-- 
1.6.3.3


* [RFC 13/18] arm: msm: define HAVE_CLK for ARCH_MSM
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (11 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 12/18] arm: msm: Enable frequency scaling Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1 Daniel Walker
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

MSM supports the <linux/clk.h> interface.

Acked-by: David Brown <davidb@quicinc.com>
Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/Kconfig |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 2db3739..c8b3a29 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -574,6 +574,7 @@ config ARCH_PXA
 config ARCH_MSM
 	bool "Qualcomm MSM"
 	select ARCH_HAS_CPUFREQ
+	select HAVE_CLK
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	help
-- 
1.6.3.3


* [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (12 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 13/18] arm: msm: define HAVE_CLK for ARCH_MSM Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:07   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 15/18] arm: vfp: Add additional vfp interfaces Daniel Walker
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Taniya Das <tdas@qualcomm.com>

Signed-off-by: Taniya Das <tdas@qualcomm.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/Makefile |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index e9da084..aa32b39 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -50,7 +50,13 @@ comma = ,
 # Note that GCC does not numerically define an architecture version
 # macro, but instead defines a whole series of macros which makes
 # testing for a specific architecture or later rather impossible.
-arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7-a)
+CONFIG_SHELL	:=/bin/sh
+GCC_VERSION	:= $(shell $(CONFIG_SHELL) $(PWD)/scripts/gcc-version.sh $(CROSS_COMPILE)gcc)
+ifeq ($(GCC_VERSION),0401)
+arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7a,-march=armv5t -Wa$(comma)-march=armv7a)
+else
+arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7a)
+endif
 arch-$(CONFIG_CPU_32v6)		:=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6,-march=armv5t -Wa$(comma)-march=armv6)
 # Only override the compiler option if ARMv6. The ARMv6K extensions are
 # always available in ARMv7
-- 
1.6.3.3
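For readers without the old toolchain at hand, the gate this Makefile change adds can be sketched stand-alone. The version string and flag spellings below mirror the patch (scripts/gcc-version.sh prints the version as MMmm); the concrete values are illustrative:

```shell
#!/bin/sh
# Illustrative model of the Makefile gate above.  scripts/gcc-version.sh
# prints the toolchain version as MMmm, e.g. "0401" for GCC 4.1.x.
GCC_VERSION="0401"                 # pretend value for an old 4.1 toolchain
if [ "$GCC_VERSION" = "0401" ]; then
	ARCH_FLAG="-march=armv7a"  # the spelling this GCC build understands
else
	ARCH_FLAG="-march=armv7-a" # standard spelling for later GCCs
fi
echo "$ARCH_FLAG"
```

With GCC_VERSION set to anything other than "0401" the standard "-march=armv7-a" spelling is chosen instead.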


* [RFC 15/18] arm: vfp: Add additional vfp interfaces
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (13 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1 Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 22:47 ` [RFC 16/18] arm: msm: add arch_has_speculative_dfetch() Daniel Walker
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Dave Estes <cestes@quicinc.com>

Refactor common code into vfp_flush_context() and vfp_reinit().  Allow
use by other clients besides suspend/resume.  Currently intended for
idle power collapse.

Signed-off-by: Dave Estes <cestes@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/include/asm/vfp.h |    6 ++++++
 arch/arm/vfp/vfpmodule.c   |   37 ++++++++++++++++++++++++++++---------
 2 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/vfp.h b/arch/arm/include/asm/vfp.h
index f4ab34f..ea2e3ac 100644
--- a/arch/arm/include/asm/vfp.h
+++ b/arch/arm/include/asm/vfp.h
@@ -82,3 +82,9 @@
 #define VFPOPDESC_UNUSED_BIT	(24)
 #define VFPOPDESC_UNUSED_MASK	(0xFF << VFPOPDESC_UNUSED_BIT)
 #define VFPOPDESC_OPDESC_MASK	(~(VFPOPDESC_LENGTH_MASK | VFPOPDESC_UNUSED_MASK))
+
+#ifndef __ASSEMBLY__
+int vfp_flush_context(void);
+void vfp_reinit(void);
+#endif
+
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index f60a540..c3a088c 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -370,13 +370,12 @@ static void vfp_enable(void *unused)
 	set_copro_access(access | CPACC_FULL(10) | CPACC_FULL(11));
 }
 
-#ifdef CONFIG_PM
-#include <linux/sysdev.h>
-
-static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
+int vfp_flush_context(void)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 fpexc = fmrx(FPEXC);
+	u32 cpu = ti->cpu;
+	int saved = 0;
 
 	/* if vfp is on, then save state for resumption */
 	if (fpexc & FPEXC_EN) {
@@ -385,7 +384,31 @@ static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
 
 		/* disable, just in case */
 		fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
+
+		last_VFP_context[cpu] = NULL;
+		saved = 1;
 	}
+	return saved;
+}
+
+void vfp_reinit(void)
+{
+	/* ensure we have access to the vfp */
+	vfp_enable(NULL);
+
+	/* and disable it to ensure the next usage restores the state */
+	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
+}
+
+#ifdef CONFIG_PM
+#include <linux/sysdev.h>
+
+static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
+{
+	int saved = vfp_flush_context();
+
+	if (saved)
+		printk(KERN_DEBUG "%s: saved vfp state\n", __func__);
 
 	/* clear any information we had about last context state */
 	memset(last_VFP_context, 0, sizeof(last_VFP_context));
@@ -395,11 +418,7 @@ static int vfp_pm_suspend(struct sys_device *dev, pm_message_t state)
 
 static int vfp_pm_resume(struct sys_device *dev)
 {
-	/* ensure we have access to the vfp */
-	vfp_enable(NULL);
-
-	/* and disable it to ensure the next usage restores the state */
-	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
+	vfp_reinit();
 
 	return 0;
 }
-- 
1.6.3.3


* [RFC 16/18] arm: msm: add arch_has_speculative_dfetch()
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (14 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 15/18] arm: vfp: Add additional vfp interfaces Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:33   ` Russell King - ARM Linux
  2010-01-11 22:47 ` [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature Daniel Walker
  2010-01-11 22:47 ` [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults Daniel Walker
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Steve Muckle <smuckle@quicinc.com>

The Scorpion CPU speculatively reads data into the cache. This
may occur while a region of memory is being written via DMA, so
that region must be invalidated when it is brought under CPU
control after the DMA transaction finishes, assuming the DMA
was either bidirectional or from the device.

Currently both a clean and invalidate are being done for
DMA_BIDIRECTIONAL in dma_unmap_single. Only an invalidate should be
required here.  Some drivers currently rely on the clean, however, so it
will be removed once those drivers are updated.

Signed-off-by: Steve Muckle <smuckle@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/include/asm/dma-mapping.h      |   11 ++++++++++-
 arch/arm/include/asm/memory.h           |    7 +++++++
 arch/arm/mach-msm/include/mach/memory.h |    3 +++
 3 files changed, 20 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index a96300b..db81a59 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -352,7 +352,12 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
 static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir)
 {
-	/* nothing to do */
+	BUG_ON(!valid_dma_direction(dir));
+
+	if (arch_has_speculative_dfetch() && dir != DMA_TO_DEVICE)
+		/* For DMA_BIDIRECTIONAL only an invalidate should be required
+		 * here, fix when all drivers are ready */
+		dma_cache_maint(dma_to_virt(dev, handle), size, dir);
 }
 
 /**
@@ -401,6 +406,10 @@ static inline void dma_sync_single_range_for_cpu(struct device *dev,
 	BUG_ON(!valid_dma_direction(dir));
 
 	dmabounce_sync_for_cpu(dev, handle, offset, size, dir);
+
+	if (arch_has_speculative_dfetch() && dir != DMA_TO_DEVICE)
+		dma_cache_maint(dma_to_virt(dev, handle) + offset, size,
+				DMA_FROM_DEVICE);
 }
 
 static inline void dma_sync_single_range_for_device(struct device *dev,
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 5421d82..792011b 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -309,6 +309,13 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
 #define arch_is_coherent()		0
 #endif
 
+/*
+ * Set if the architecture speculatively fetches data into cache.
+ */
+#ifndef arch_has_speculative_dfetch
+#define arch_has_speculative_dfetch()	0
+#endif
+
 #endif
 
 #include <asm-generic/memory_model.h>
diff --git a/arch/arm/mach-msm/include/mach/memory.h b/arch/arm/mach-msm/include/mach/memory.h
index f4698ba..a538f2e 100644
--- a/arch/arm/mach-msm/include/mach/memory.h
+++ b/arch/arm/mach-msm/include/mach/memory.h
@@ -19,5 +19,8 @@
 /* physical offset of RAM */
 #define PHYS_OFFSET		UL(0x10000000)
 
+#ifdef CONFIG_ARCH_MSM_SCORPION
+#define arch_has_speculative_dfetch()	1
 #endif
 
+#endif
-- 
1.6.3.3


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (15 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 16/18] arm: msm: add arch_has_speculative_dfetch() Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-25 16:40   ` Catalin Marinas
  2010-01-11 22:47 ` [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults Daniel Walker
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Dave Estes <cestes@quicinc.com>

Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
kernel hooks to handle domain changes, permission faults, and context
switches.

This feature is required by ARCH_QSD8x50 to fix a problem with page
crossing memory accesses.  Set ARCH_QSD8X50 to select emulator.

Signed-off-by: Dave Estes <cestes@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 Documentation/arm/msm/emulate_domain_manager.txt |  282 ++++++++++++++++
 arch/arm/include/asm/domain.h                    |   13 +
 arch/arm/kernel/entry-armv.S                     |    8 +
 arch/arm/kernel/head.S                           |    8 +
 arch/arm/mm/Kconfig                              |    2 +
 arch/arm/mm/Makefile                             |    1 +
 arch/arm/mm/emulate_domain_manager-v7.c          |  386 ++++++++++++++++++++++
 arch/arm/mm/fault.c                              |   14 +
 arch/arm/mm/proc-v7.S                            |    8 +
 9 files changed, 722 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/arm/msm/emulate_domain_manager.txt
 create mode 100644 arch/arm/mm/emulate_domain_manager-v7.c

diff --git a/Documentation/arm/msm/emulate_domain_manager.txt b/Documentation/arm/msm/emulate_domain_manager.txt
new file mode 100644
index 0000000..97a2566
--- /dev/null
+++ b/Documentation/arm/msm/emulate_domain_manager.txt
@@ -0,0 +1,282 @@
+Copyright (c) 2009, Code Aurora Forum. All rights reserved.
+
+Redistribution and use in source form and compiled forms (SGML, HTML, PDF,
+PostScript, RTF and so forth) with or without modification, are permitted
+provided that the following conditions are met:
+
+Redistributions in source form must retain the above copyright notice, this
+list of conditions and the following disclaimer as the first lines of this
+file unmodified.
+
+Redistributions in compiled form (transformed to other DTDs, converted to
+PDF, PostScript, RTF and other formats) must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in the
+documentation and/or other materials provided with the distribution.
+
+THIS DOCUMENTATION IS PROVIDED BY THE CODE AURORA FORUM "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL THE FREEBSD
+DOCUMENTATION PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
+OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
+OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF
+ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Introduction
+============
+
+The 8x50 chipset requires the ability to disable the HW domain manager function.
+
+The ARM MMU architecture has a feature known as domain manager mode.
+Briefly each page table, section, or supersection is assigned a domain.
+Each domain can be globally configured to NoAccess, Client, or Manager
+mode.  These global configurations allow the access permissions of the
+entire domain to be changed simultaneously.
+
+The domain manager emulation is required to fix a HW problem on the 8x50
+chipset.  The problem is simple to repair except when domain manager mode
+is enabled.  The emulation allows the problem to be completely resolved.
+
+
+Hardware description
+====================
+
+When domain manager mode is enabled on a specific domain, the MMU
+hardware ignores the access permission bits and the execute never bit.  All
+accesses, to memory in the domain, are granted full read, write,
+execute permissions.
+
+The mode of each domain is controlled by a field in the cp15 dacr register.
+Each domain can be globally configured to NoAccess, Client, or Manager mode.
+
+See: ARMv7 Architecture Reference Manual
+
+
+Software description
+====================
+
+In order to disable domain manager mode the equivalent HW functionality must
+be emulated in SW.  Any attempt to enable domain manager mode must be
+intercepted.
+
+Because domain manager mode is not enabled, permissions for the
+associated domain will remain restricted.  Permission faults will be generated.
+The permission faults will be intercepted.  The faulted pages/sections will
+be modified to grant full access and execute permissions.
+
+The modified page tables must be restored when exiting domain manager mode.
+
+
+Design
+======
+
+Design Goals:
+
+Disable Domain Manager Mode
+Exact SW emulation of Domain Manager Mode
+Minimal Kernel changes
+Minimal Security Risk
+
+Design Decisions:
+
+Detect kernel page table modifications on restore
+Direct ARMv7 HW MMU table manipulation
+Restore emulation modified MMU entries on context switch
+No need to restore MMU entries for MMU entry copy operations
+Invalidate TLB entries on modification
+Store Domain Manager bits in memory
+8 entry MMU entry cache
+Use spin_lock_irqsave to protect domain manipulation
+Assume no split MMU table
+
+Design Discussion:
+
+Detect kernel page table modifications on restore -
+When restoring original page/section permission faults, the submitted design
+verifies the MMU entry has not been modified.  The kernel modifies MMU
+entries for the following purposes : create a memory mapping, release a
+memory mapping, add permissions during a permission fault, and map a page
+during a translation fault.  The submitted design works with the listed
+scenarios.  The translation fault and permission faults simply do not happen on
+relevant entries (valid entries with full access permissions).  The alternative
+would be to hook every MMU table modification.  The alternative greatly
+increases complexity and code maintenance issues.
+
+Direct ARMv7 HW MMU table manipulation -
+The natural choice would be to use the kernel provided mechanism to manipulate
+MMU page table entries.  The ARM MMU interface is described in pgtable.h.
+This interface is complicated by the Linux implementation.  The level 1 pgd
+entries are treated and manipulated as entry pairs.  The level 2 entries are
+shadowed and cloned.  The compromise was chosen to actually use the ARMv7 HW
+registers to walk and modify the MMU table entries.  The choice limits the
+usage of this implementation to ARMv7 and similar ARM MMU architectures.  Since
+this implementation is targeted at fixing an issue in 8x50 ARMv7, the choice is
+logical.  The HW manipulation is in distinct low level functions.  These could
+easily be replaced or generalized to support other architectures as necessary.
+
+Restore emulation modified MMU entries on context switch -
+This additional hook was added to minimize performance impact.  By guaranteeing
+the ASID will not change during the emulation, the emulation may invalidate each
+entry by MVA & ASID.  Only the affected page table entries will be removed from
+the TLB cache.  The performance cost of the invalidate on context switch is near
+zero.  Typically on context switch the domain mode would also change, forcing a
+complete restore of all modified MMU entries.  The alternative would be to
+invalidate the entire TLB every time a table entry is restored.
+
+No need to restore MMU entries for copy operations -
+Operations which copy MMU entries are relatively rare in the kernel.  Because
+we modify the level 2 pte entries directly in hardware, the Linux shadow copies
+are left untouched.  The kernel treats the shadow copies as the primary pte
+entry.  Any pte copy operations would be unaffected by the HW modification.
+On translation section fault, pgd entries are copied from the kernel master
+page table to the current thread page table.  Since we restore MMU entries on
+context switch, we guarantee the master table will not contain modifications,
+while faulting on a process local entry.  Other read, modify write operations
+occur during permission fault handling.  Since we open permission on modified
+entries, these do not need to be restored, because we guarantee these
+permission fault operations will not happen.
+
+Invalidate TLB entries on modification -
+No real choice here.  This is more of a design requirement.  On permission
+fault, the MMU entry with restricted permissions will be in the TLB.  To open
+access permissions, the TLB entry must be invalidated.  Otherwise the access
+will permission fault again.  Upon restoring original MMU entries, the TLB
+must be invalidated to restrict memory access.
+
+Store Domain Manager bits in memory -
+There was only one alternative here.  2.6.29 kernel only uses 3 of 16
+possible domains.  Additional bits in dacr could be used to store the
+manager bits.  This would allow faster access to the manager bits.
+Overall this would reduce any performance impact.  The performance
+needs did not seem to justify the added weirdness.
+
+8 entry MMU entry cache -
+The size of the modified MMU entry cache is somewhat arbitrary.  The thought
+process is that typically, a thread is using two pointers to perform a copy
+operation.  In this case only 2 entries would be required.  One could imagine
+a more complicated operation, a masked copy for instance, which would require
+more pointers.  8 pointers seemed to be large enough to minimize the risk of
+permission fault thrashing.  The disadvantage of a larger cache would simply
+be a longer list of entries to restore.
+
+Use spin_lock_irqsave to protect domain manipulation -
+The obvious choice.
+
+Assume no split MMU table -
+This same assumption is documented in cpu_v7_switch_mm.
+
+
+Power Management
+================
+
+Not affected.
+
+
+SMP/multi-core
+==============
+
+SMP/multicore not supported.  This is intended as an 8x50 workaround.
+
+
+Security
+========
+
+MMU page/section permissions must be manipulated correctly to emulate domain
+manager mode.  If page permissions are left in full access mode, any process
+can read associated memory.
+
+
+Performance
+===========
+
+Performance should be impacted only minimally.  When emulating domain manager
+mode, there is overhead added to MMU table/context switches, set_domain()
+calls, data aborts, and prefetch aborts.
+
+Normally the kernel operates with domain != DOMAIN_MANAGER.  In this case the
+overhead is minimal.  An additional check is required to see if domain manager
+mode is on.  This minimal code is added to each of the emulation entry points:
+set, data abort, prefetch abort, and MMU table/context switch.
+
+Initial accesses to a MMU protected page/section will generate a permission
+fault. The page will be manipulated to grant full access permissions and
+the access will be retried.  This will typically require 2-3 page table
+walks.
+
+On a context switch, all modified MMU entries will be restored.  On thread
+resume, additional accesses will be treated as initial accesses.
+
+
+Interface
+=========
+
+The emulation does not have clients.  It is hooked to the kernel through a
+small list of functions.
+
+void emulate_domain_manager_set(u32 domain);
+int emulate_domain_manager_data_abort(u32 dfsr, u32 dfar);
+int emulate_domain_manager_prefetch_abort(u32 ifsr, u32 ifar);
+void emulate_domain_manager_switch_mm(
+	unsigned long pgd_phys,
+	struct mm_struct *mm,
+	void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *));
+
+emulate_domain_manager_set() is the set_domain handler.  This replaces the
+direct manipulation of CP15 dacr with a function call.  This allows emulation
+to prevent setting dacr manager bits.  It also allows emulation to restore
+page/section permissions when domain manager is disabled.
+
+emulate_domain_manager_data_abort() handles data aborts caused by domain
+manager mode not being set in HW, and performs the section/page manipulation.
+
+emulate_domain_manager_prefetch_abort() is the analogous prefetch abort handler.
+
+emulate_domain_manager_switch_mm() handles MMU table and context switches.
+This notifies the emulation that the MMU context is changing, allowing the
+emulation to restore page table entry permissions before switching contexts.
+
+
+Config options
+==============
+
+This feature is enabled/disabled by the EMULATE_DOMAIN_MANAGER_V7 config option.
+
+
+Dependencies
+============
+
+Implementation is for ARMv7, MMU, and !SMP.  Targets solving issue for 8x50
+chipset.
+
+
+User space utilities
+====================
+
+None
+
+
+Other
+=====
+
+Code is implemented in kernel/arch/arm/mm.
+
+
+arch/arm/mm/emulate_domain_manager.c contains comments.  No additional public
+documentation available or planned.
+
+
+Known issues
+============
+
+No intent to support SMP or non-ARMv7 architectures.
+
+
+To do
+=====
+
+None
+
diff --git a/arch/arm/include/asm/domain.h b/arch/arm/include/asm/domain.h
index cc7ef40..2f1d9e4 100644
--- a/arch/arm/include/asm/domain.h
+++ b/arch/arm/include/asm/domain.h
@@ -2,6 +2,7 @@
  *  arch/arm/include/asm/domain.h
  *
  *  Copyright (C) 1999 Russell King.
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -52,6 +53,17 @@
 #ifndef __ASSEMBLY__
 
 #ifdef CONFIG_MMU
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+void emulate_domain_manager_set(u32 domain);
+int emulate_domain_manager_data_abort(u32 dfsr, u32 dfar);
+int emulate_domain_manager_prefetch_abort(u32 ifsr, u32 ifar);
+void emulate_domain_manager_switch_mm(
+	unsigned long pgd_phys,
+	struct mm_struct *mm,
+	void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *));
+
+#define set_domain(x) emulate_domain_manager_set(x)
+#else
 #define set_domain(x)					\
 	do {						\
 	__asm__ __volatile__(				\
@@ -59,6 +71,7 @@
 	  : : "r" (x));					\
 	isb();						\
 	} while (0)
+#endif
 
 #define modify_domain(dom,type)					\
 	do {							\
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index d2903e3..d6aaf63 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -4,6 +4,7 @@
  *  Copyright (C) 1996,1997,1998 Russell King.
  *  ARM700 fix by Matthew Godbolt (linux-user at willothewisp.demon.co.uk)
  *  nommu support by Hyok S. Choi (hyok.choi at samsung.com)
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -746,8 +747,15 @@ ENTRY(__switch_to)
 	str	r3, [r4, #-15]			@ TLS val at 0xffff0ff0
 #endif
 #ifdef CONFIG_MMU
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+	stmdb	r13!, {r0-r3, lr}
+	mov	r0, r6
+	bl	emulate_domain_manager_set
+	ldmia	r13!, {r0-r3, lr}
+#else
 	mcr	p15, 0, r6, c3, c0, 0		@ Set domain register
 #endif
+#endif
 	mov	r5, r0
 	add	r4, r2, #TI_CPU_SAVE
 	ldr	r0, =thread_notify_head
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index eb62bf9..70be2c7 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -4,6 +4,7 @@
  *  Copyright (C) 1994-2002 Russell King
  *  Copyright (c) 2003 ARM Limited
  *  All Rights Reserved
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -172,10 +173,17 @@ __enable_mmu:
 #ifdef CONFIG_CPU_ICACHE_DISABLE
 	bic	r0, r0, #CR_I
 #endif
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+	mov	r5, #(domain_val(DOMAIN_USER, DOMAIN_CLIENT) | \
+		      domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT) | \
+		      domain_val(DOMAIN_TABLE, DOMAIN_CLIENT) | \
+		      domain_val(DOMAIN_IO, DOMAIN_CLIENT))
+#else
 	mov	r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_IO, DOMAIN_CLIENT))
+#endif
 	mcr	p15, 0, r5, c3, c0, 0		@ load domain access register
 	mcr	p15, 0, r4, c2, c0, 0		@ load page table pointer
 	b	__turn_mmu_on
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index c721880..9c745ca 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -573,6 +573,8 @@ config CPU_TLB_V6
 config CPU_TLB_V7
 	bool
 
+config EMULATE_DOMAIN_MANAGER_V7
+	bool
 endif
 
 config CPU_HAS_ASID
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 827e238..3fe7a01 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_CPU_MOHAWK)	+= proc-mohawk.o
 obj-$(CONFIG_CPU_FEROCEON)	+= proc-feroceon.o
 obj-$(CONFIG_CPU_V6)		+= proc-v6.o
 obj-$(CONFIG_CPU_V7)		+= proc-v7.o
+obj-$(CONFIG_EMULATE_DOMAIN_MANAGER_V7) += emulate_domain_manager-v7.o
 
 obj-$(CONFIG_CACHE_FEROCEON_L2)	+= cache-feroceon-l2.o
 obj-$(CONFIG_CACHE_L2X0)	+= cache-l2x0.o
diff --git a/arch/arm/mm/emulate_domain_manager-v7.c b/arch/arm/mm/emulate_domain_manager-v7.c
new file mode 100644
index 0000000..a55d2b2
--- /dev/null
+++ b/arch/arm/mm/emulate_domain_manager-v7.c
@@ -0,0 +1,386 @@
+/*
+ * Basic implementation of a SW emulation of the domain manager feature in
+ * ARM architecture.  Assumes single processor ARMv7 chipset.
+ *
+ * Requires hooks to be alerted to any runtime changes of dacr or MMU context.
+ *
+ * Copyright (c) 2009, Code Aurora Forum. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Code Aurora Forum nor
+ *       the names of its contributors may be used to endorse or promote
+ *       products derived from this software without specific prior written
+ *       permission.
+ *
+ * Alternatively, provided that this notice is retained in full, this software
+ * may be relicensed by the recipient under the terms of the GNU General Public
+ * License version 2 ("GPL") and only version 2, in which case the provisions of
+ * the GPL apply INSTEAD OF those given above.  If the recipient relicenses the
+ * software under the GPL, then the identification text in the MODULE_LICENSE
+ * macro must be changed to reflect "GPLv2" instead of "Dual BSD/GPL".  Once a
+ * recipient changes the license terms to the GPL, subsequent recipients shall
+ * not relicense under alternate licensing terms, including the BSD or dual
+ * BSD/GPL terms.  In addition, the following license statement immediately
+ * below and between the words START and END shall also then apply when this
+ * software is relicensed under the GPL:
+ *
+ * START
+ *
+ * This program is free software; you can redistribute it and/or modify it under
+ * the terms of the GNU General Public License version 2 and only version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * END
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#include <linux/sched.h>
+#include <asm/domain.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+#define DOMAIN_MANAGER_BITS (0xAAAAAAAA)
+
+#define DFSR_DOMAIN(dfsr) ((dfsr >> 4) & (16-1))
+
+#define FSR_PERMISSION_FAULT(fsr) ((fsr & 0x40D) == 0x00D)
+#define FSR_PERMISSION_SECT(fsr) ((fsr & 0x40F) == 0x00D)
+
+/* ARMv7 MMU HW Macros.  Not conveniently defined elsewhere */
+#define MMU_TTB_ADDRESS(x)   ((u32 *)(((u32)(x)) & ~((1 << 14) - 1)))
+#define MMU_PMD_INDEX(addr) (((u32)addr) >> SECTION_SHIFT)
+#define MMU_TABLE_ADDRESS(x) ((u32 *)((x) & ~((1 << 10) - 1)))
+#define MMU_TABLE_INDEX(x) ((((u32)x) >> 12) & (256 - 1))
+
+/* Convenience Macros */
+#define PMD_IS_VALID(x) (PMD_IS_TABLE(x) || PMD_IS_SECTION(x))
+#define PMD_IS_TABLE(x) ((x & PMD_TYPE_MASK) == PMD_TYPE_TABLE)
+#define PMD_IS_SECTION(x) ((x & PMD_TYPE_MASK) == PMD_TYPE_SECT)
+#define PMD_IS_SUPERSECTION(x) \
+	(PMD_IS_SECTION(x) && ((x & PMD_SECT_SUPER) == PMD_SECT_SUPER))
+
+#define PMD_GET_DOMAIN(x)					\
+	(PMD_IS_TABLE(x) ||					\
+	(PMD_IS_SECTION(x) && !PMD_IS_SUPERSECTION(x)) ?	\
+		 0 : (x >> 5) & (16-1))
+
+#define PTE_IS_LARGE(x) ((x & PTE_TYPE_MASK) == PTE_TYPE_LARGE)
+
+
+/* Only DOMAIN_MMU_ENTRIES will be granted access simultaneously */
+#define DOMAIN_MMU_ENTRIES (8)
+
+#define LRU_INC(lru) ((lru + 1) >= DOMAIN_MMU_ENTRIES ? 0 : lru + 1)
+
+
+static DEFINE_SPINLOCK(edm_lock);
+
+static u32 edm_manager_bits;
+
+struct domain_entry_save {
+	u32 *mmu_entry;
+	u32 *addr;
+	u32 value;
+	u16 sect;
+	u16 size;
+};
+
+static struct domain_entry_save edm_save[DOMAIN_MMU_ENTRIES];
+
+static u32 edm_lru;
+
+
+/*
+ *  Return virtual address of pmd (level 1) entry for addr
+ *
+ *  This routine walks the ARMv7 page tables in HW.
+ */
+static inline u32 *__get_pmd_v7(u32 *addr)
+{
+	u32 *ttb;
+
+	__asm__ __volatile__(
+		"mrc	p15, 0, %0, c2, c0, 0	@ ttbr0\n\t"
+		: "=r" (ttb)
+		:
+	);
+
+	return __va(MMU_TTB_ADDRESS(ttb) + MMU_PMD_INDEX(addr));
+}
+
+/*
+ *  Return virtual address of pte (level 2) entry for addr
+ *
+ *  This routine walks the ARMv7 page tables in HW.
+ */
+static inline u32 *__get_pte_v7(u32 *addr)
+{
+	u32 *pmd = __get_pmd_v7(addr);
+	u32 *table_pa = pmd && PMD_IS_TABLE(*pmd) ?
+		MMU_TABLE_ADDRESS(*pmd) : 0;
+	u32 *entry = table_pa ? __va(table_pa[MMU_TABLE_INDEX(addr)]) : 0;
+
+	return entry;
+}
+
+/*
+ *  Invalidate the TLB for a given address for the current context
+ *
+ *  After manipulating access permissions, the TLB must be invalidated
+ *  so that the changes are observed
+ */
+static inline void __tlb_invalidate(u32 *addr)
+{
+	__asm__ __volatile__(
+		"mrc	p15, 0, %%r2, c13, c0, 1	@ contextidr\n\t"
+		"and %%r2, %%r2, #0xff			@ asid\n\t"
+		"mov %%r3, %0, lsr #12			@ mva[31:12]\n\t"
+		"orr %%r2, %%r2, %%r3, lsl #12		@ tlb mva and asid\n\t"
+		"mcr	p15, 0, %%r2, c8, c7, 1		@ utlbimva\n\t"
+		"isb"
+		:
+		: "r" (addr)
+		: "r2", "r3"
+	);
+}
+
+/*
+ *  Set HW MMU entry and do required synchronization operations.
+ */
+static inline void __set_entry(u32 *entry, u32 *addr, u32 value, int size)
+{
+	int i;
+
+	if (!entry)
+		return;
+
+	entry = (u32 *)((u32) entry & ~(size * sizeof(u32) - 1));
+
+	for (i = 0; i < size; i++)
+		entry[i] = value;
+
+	__asm__ __volatile__(
+		"mcr	p15, 0, %0, c7, c10, 1		@ flush entry\n\t"
+		"dsb\n\t"
+		"isb\n\t"
+		:
+		: "r" (entry)
+	);
+	__tlb_invalidate(addr);
+}
+
+/*
+ *  Return the number of duplicate entries associated with entry value.
+ *  Supersections and Large page table entries are replicated 16x.
+ */
+static inline int __entry_size(int sect, int value)
+{
+	u32 size;
+
+	if (sect)
+		size = PMD_IS_SUPERSECTION(value) ? 16 : 1;
+	else
+		size = PTE_IS_LARGE(value) ? 16 : 1;
+
+	return size;
+}
+
+/*
+ *  Change entry permissions to emulate domain manager access
+ */
+static inline int __manager_perm(int sect, int value)
+{
+	u32 edm_value;
+
+	if (sect) {
+		edm_value = (value & ~(PMD_SECT_APX | PMD_SECT_XN)) |
+		(PMD_SECT_AP_READ | PMD_SECT_AP_WRITE);
+	} else {
+		edm_value = (value & ~(PTE_EXT_APX | PTE_EXT_XN)) |
+			(PTE_EXT_AP1 | PTE_EXT_AP0);
+	}
+	return edm_value;
+}
+
+/*
+ *  Restore original HW MMU entry.  Cancels domain manager access
+ */
+static inline void __restore_entry(int index)
+{
+	struct domain_entry_save *entry = &edm_save[index];
+	u32 edm_value;
+
+	if (!entry->mmu_entry)
+		return;
+
+	edm_value = __manager_perm(entry->sect, entry->value);
+
+	if (*entry->mmu_entry == edm_value)
+		__set_entry(entry->mmu_entry, entry->addr,
+			entry->value, entry->size);
+
+	entry->mmu_entry = 0;
+}
+
+/*
+ *  Modify HW MMU entry to grant domain manager access for a given MMU entry.
+ *  This adds full read, write, and exec access permissions.
+ */
+static inline void __set_manager(int sect, u32 *addr)
+{
+	u32 *entry = sect ? __get_pmd_v7(addr) : __get_pte_v7(addr);
+	u32 value;
+	u32 edm_value;
+	u16 size;
+
+	if (!entry)
+		return;
+
+	value = *entry;
+
+	size = __entry_size(sect, value);
+	edm_value = __manager_perm(sect, value);
+
+	__set_entry(entry, addr, edm_value, size);
+
+	__restore_entry(edm_lru);
+
+	edm_save[edm_lru].mmu_entry = entry;
+	edm_save[edm_lru].addr = addr;
+	edm_save[edm_lru].value = value;
+	edm_save[edm_lru].sect = sect;
+	edm_save[edm_lru].size = size;
+
+	edm_lru = LRU_INC(edm_lru);
+}
+
+/*
+ *  Restore all original HW MMU entries.
+ *
+ *  Cancels any outstanding emulated domain manager access.
+ */
+static inline void __restore(void)
+{
+	if (unlikely(edm_manager_bits)) {
+		u32 i;
+
+		for (i = 0; i < DOMAIN_MMU_ENTRIES; i++)
+			__restore_entry(i);
+	}
+}
+
+/*
+ * Common abort handler code
+ *
+ * If the domain manager bit were actually set, this permission fault
+ * would not have happened.  Emulate by opening access permissions and
+ * saving the original settings to restore later.  Return 1 to hide the fault.
+ */
+static int __emulate_domain_manager_abort(u32 fsr, u32 far, int dabort)
+{
+	if (unlikely(FSR_PERMISSION_FAULT(fsr) && edm_manager_bits)) {
+		int domain = dabort ? DFSR_DOMAIN(fsr) : PMD_GET_DOMAIN(far);
+		if (edm_manager_bits & domain_val(domain, DOMAIN_MANAGER)) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&edm_lock, flags);
+
+			__set_manager(FSR_PERMISSION_SECT(fsr), (u32 *) far);
+
+			spin_unlock_irqrestore(&edm_lock, flags);
+			return 1;
+		}
+	}
+	return 0;
+}
+
+/*
+ * Change domain setting.
+ *
+ * Lock and restore original contents.  Extract and save manager bits.  Set
+ * DACR, excluding manager bits.
+ */
+void emulate_domain_manager_set(u32 domain)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&edm_lock, flags);
+
+	if (edm_manager_bits != (domain & DOMAIN_MANAGER_BITS)) {
+		__restore();
+		edm_manager_bits = domain & DOMAIN_MANAGER_BITS;
+	}
+
+	__asm__ __volatile__(
+		"mcr	p15, 0, %0, c3, c0, 0	@ set domain\n\t"
+		"isb"
+		:
+		: "r" (domain & ~DOMAIN_MANAGER_BITS)
+	);
+
+	spin_unlock_irqrestore(&edm_lock, flags);
+}
+
+/*
+ * Switch thread context.  Restore original contents.
+ */
+void emulate_domain_manager_switch_mm(unsigned long pgd_phys,
+	struct mm_struct *mm,
+	void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *))
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&edm_lock, flags);
+
+	__restore();
+
+	/* Call underlying kernel handler */
+	switch_mm(pgd_phys, mm);
+
+	spin_unlock_irqrestore(&edm_lock, flags);
+}
+
+/*
+ * Kernel data_abort hook
+ */
+int emulate_domain_manager_data_abort(u32 dfsr, u32 dfar)
+{
+	return __emulate_domain_manager_abort(dfsr, dfar, 1);
+}
+
+/*
+ * Kernel prefetch_abort hook
+ */
+int emulate_domain_manager_prefetch_abort(u32 ifsr, u32 ifar)
+{
+	return __emulate_domain_manager_abort(ifsr, ifar, 0);
+}
+
+
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index bea3e75..3446863 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -24,6 +24,10 @@
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
 
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+#include <asm/domain.h>
+#endif /* CONFIG_EMULATE_DOMAIN_MANAGER_V7 */
+
 #include "fault.h"
 
 /*
@@ -545,6 +549,11 @@ do_DataAbort(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	const struct fsr_info *inf = fsr_info + fsr_fs(fsr);
 	struct siginfo info;
 
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+	if (emulate_domain_manager_data_abort(fsr, addr))
+		return;
+#endif
+
 	if (!inf->fn(addr, fsr & ~FSR_LNX_PF, regs))
 		return;
 
@@ -600,6 +609,11 @@ do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs)
 	const struct fsr_info *inf = ifsr_info + fsr_fs(ifsr);
 	struct siginfo info;
 
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+	if (emulate_domain_manager_prefetch_abort(ifsr, addr))
+		return;
+#endif
+
 	if (!inf->fn(addr, ifsr | FSR_LNX_PF, regs))
 		return;
 
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 88da392..ec6388e 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/linkage.h>
 #include <asm/assembler.h>
+#include <asm/domain.h>
 #include <asm/asm-offsets.h>
 #include <asm/hwcap.h>
 #include <asm/pgtable-hwdef.h>
@@ -102,6 +103,11 @@ ENDPROC(cpu_v7_dcache_clean_area)
  */
 ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_MMU
+#ifdef CONFIG_EMULATE_DOMAIN_MANAGER_V7
+	ldr	r2, =cpu_v7_switch_mm_private
+	b	emulate_domain_manager_switch_mm
+cpu_v7_switch_mm_private:
+#endif
 	mov	r2, #0
 	ldr	r1, [r1, #MM_CONTEXT_ID]	@ get mm->context.id
 	orr	r0, r0, #TTB_FLAGS
@@ -236,8 +242,10 @@ __v7_setup:
 	mcr	p15, 0, r10, c2, c0, 2		@ TTB control register
 	orr	r4, r4, #TTB_FLAGS
 	mcr	p15, 0, r4, c2, c0, 1		@ load TTB1
+#ifndef CONFIG_EMULATE_DOMAIN_MANAGER_V7
 	mov	r10, #0x1f			@ domains 0, 1 = manager
 	mcr	p15, 0, r10, c3, c0, 0		@ load domain access register
+#endif
 #ifdef CONFIG_ARCH_MSM_SCORPION
 	mov     r0, #0x77
 	mcr     p15, 3, r0, c15, c0, 3          @ set L2CR1
-- 
1.6.3.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
                   ` (16 preceding siblings ...)
  2010-01-11 22:47 ` [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature Daniel Walker
@ 2010-01-11 22:47 ` Daniel Walker
  2010-01-11 23:11   ` Russell King - ARM Linux
  17 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 22:47 UTC (permalink / raw)
  To: linux-arm-kernel

From: Dave Estes <cestes@quicinc.com>

Handle incorrectly reported permission faults for qsd8650.  On
permission faults, retry the MVA to PA conversion.  If the retry
detects a translation fault, report it as a translation fault.

Signed-off-by: Dave Estes <cestes@quicinc.com>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
---
 arch/arm/mm/Kconfig     |    3 ++
 arch/arm/mm/abort-ev7.S |   78 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 81 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 9c745ca..1f17def 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -575,6 +575,9 @@ config CPU_TLB_V7
 
 config EMULATE_DOMAIN_MANAGER_V7
 	bool
+
+config VERIFY_PERMISSION_FAULT
+	bool
 endif
 
 config CPU_HAS_ASID
diff --git a/arch/arm/mm/abort-ev7.S b/arch/arm/mm/abort-ev7.S
index 2e6dc04..8a75c21 100644
--- a/arch/arm/mm/abort-ev7.S
+++ b/arch/arm/mm/abort-ev7.S
@@ -1,3 +1,60 @@
+/*
+ * Copyright (c) 2009, Code Aurora Forum. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Code Aurora Forum nor
+ *       the names of its contributors may be used to endorse or promote
+ *       products derived from this software without specific prior written
+ *       permission.
+ *
+ * Alternatively, provided that this notice is retained in full, this software
+ * may be relicensed by the recipient under the terms of the GNU General Public
+ * License version 2 ("GPL") and only version 2, in which case the provisions of
+ * the GPL apply INSTEAD OF those given above.  If the recipient relicenses the
+ * software under the GPL, then the identification text in the MODULE_LICENSE
+ * macro must be changed to reflect "GPLv2" instead of "Dual BSD/GPL".  Once a
+ * recipient changes the license terms to the GPL, subsequent recipients shall
+ * not relicense under alternate licensing terms, including the BSD or dual
+ * BSD/GPL terms.  In addition, the following license statement immediately
+ * below and between the words START and END shall also then apply when this
+ * software is relicensed under the GPL:
+ *
+ * START
+ *
+ * This program is free software; you can redistribute it and/or modify it under
+ * the terms of the GNU General Public License version 2 and only version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ * END
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
 #include <linux/linkage.h>
 #include <asm/assembler.h>
 /*
@@ -29,5 +86,26 @@ ENTRY(v7_early_abort)
 	 * V6 code adjusts the returned DFSR.
 	 * New designs should not need to patch up faults.
 	 */
+
+#if defined(CONFIG_VERIFY_PERMISSION_FAULT)
+	/*
+	 * Detect erroneously reported permission faults and fix them up
+	 */
+	ldr	r3, =0x40d			@ On permission fault
+	and	r3, r1, r3
+	cmp	r3, #0x0d
+	movne	pc, lr
+
+	mcr	p15, 0, r0, c7, c8, 0   	@ Retranslate FAR
+	isb
+	mrc	p15, 0, r2, c7, c4, 0   	@ Read the PAR
+	and	r3, r2, #0x7b   		@ On translation fault
+	cmp	r3, #0x0b
+	movne	pc, lr
+	bic	r1, r1, #0xf			@ Fix up FSR FS[5:0]
+	and	r2, r2, #0x7e
+	orr	r1, r1, r2, LSR #1
+#endif
+
 	mov	pc, lr
 ENDPROC(v7_early_abort)
-- 
1.6.3.3

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1
  2010-01-11 22:47 ` [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1 Daniel Walker
@ 2010-01-11 23:07   ` Russell King - ARM Linux
  0 siblings, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:07 UTC (permalink / raw)
  To: linux-arm-kernel

Is it really worth adding support for these old (and probably buggy)
compiler versions?

In any case, certainly not in this way.  Detect options by using the
cc-option makefile call, rather than writing your own way to get the
compiler version.
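For illustration, a cc-option-based version might look like this (the nested fallback is a sketch, not a tested kbuild fragment):

```make
# Prefer the modern spelling; a gcc 4.1.x that only understands
# -march=armv7a takes the second probe; anything older falls back to
# armv5t with the assembler providing the v7 support.
arch-$(CONFIG_CPU_32v7)	:=-D__LINUX_ARM_ARCH__=7 \
	$(call cc-option,-march=armv7-a, \
	$(call cc-option,-march=armv7a, \
	-march=armv5t -Wa$(comma)-march=armv7-a))
```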

On Mon, Jan 11, 2010 at 02:47:33PM -0800, Daniel Walker wrote:
> From: Taniya Das <tdas@qualcomm.com>
> 
> Signed-off-by: Taniya Das <tdas@qualcomm.com>
> Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
> ---
>  arch/arm/Makefile |    8 +++++++-
>  1 files changed, 7 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/arm/Makefile b/arch/arm/Makefile
> index e9da084..aa32b39 100644
> --- a/arch/arm/Makefile
> +++ b/arch/arm/Makefile
> @@ -50,7 +50,13 @@ comma = ,
>  # Note that GCC does not numerically define an architecture version
>  # macro, but instead defines a whole series of macros which makes
>  # testing for a specific architecture or later rather impossible.
> -arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7-a)
> +CONFIG_SHELL	:=/bin/sh
> +GCC_VERSION	:= $(shell $(CONFIG_SHELL) $(PWD)/scripts/gcc-version.sh $(CROSS_COMPILE)gcc)
> +ifeq ($(GCC_VERSION),0401)
> +arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7a,-march=armv5t -Wa$(comma)-march=armv7a)
> +else
> +arch-$(CONFIG_CPU_32v7)		:=-D__LINUX_ARM_ARCH__=7 $(call cc-option,-march=armv7-a,-march=armv5t -Wa$(comma)-march=armv7a)
> +endif
>  arch-$(CONFIG_CPU_32v6)		:=-D__LINUX_ARM_ARCH__=6 $(call cc-option,-march=armv6,-march=armv5t -Wa$(comma)-march=armv6)
>  # Only override the compiler option if ARMv6. The ARMv6K extensions are
>  # always available in ARMv7
> -- 
> 1.6.3.3
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-11 22:47 ` [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults Daniel Walker
@ 2010-01-11 23:11   ` Russell King - ARM Linux
  2010-01-19 17:10     ` Jamie Lokier
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:37PM -0800, Daniel Walker wrote:
> From: Dave Estes <cestes@quicinc.com>
> 
> Handle incorrectly reported permission faults for qsd8650.  On
> permission faults, retry the MVA to PA conversion.  If the retry
> detects a translation fault, report it as a translation fault.

This is totally unacceptable to add such a demanding copyright header to
any file, imposing this notice upon pre-existing code.  Please remove it.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7
  2010-01-11 22:47 ` [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7 Daniel Walker
@ 2010-01-11 23:13   ` Russell King - ARM Linux
  2010-01-11 23:17     ` Daniel Walker
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:30PM -0800, Daniel Walker wrote:
> From: Steve Muckle <smuckle@quicinc.com>
> 
> ARCH_MSM_SCORPION supports Qualcomm SnapDragon chipsets.
> 
> Signed-off-by: Steve Muckle <smuckle@quicinc.com>
> Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
> ---
>  arch/arm/mm/Kconfig |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
> index 155c7a3..c721880 100644
> --- a/arch/arm/mm/Kconfig
> +++ b/arch/arm/mm/Kconfig
> @@ -409,7 +409,8 @@ config CPU_32v6K
>  
>  # ARMv7
>  config CPU_V7
> -	bool "Support ARM V7 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX
> +	bool "Support ARM V7 processor" if ARCH_INTEGRATOR || MACH_REALVIEW_EB || MACH_REALVIEW_PBX || ARCH_MSM_SCORPION
> +	default y if ARCH_MSM_SCORPION

You don't understand what's going on here.  You want to offer
the question "Support ARM V7 processor" to the user, and it's
acceptable on Scorpion to say "no" to this?

I think not.  Do it in the same way other platforms do it, and
select CPU_V7 from ARCH_MSM_SCORPION - if you have Scorpion, then
surely V7 CPU support is mandatory.

You only add to the 'if' line if the support is optional for your
platform.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7
  2010-01-11 23:13   ` Russell King - ARM Linux
@ 2010-01-11 23:17     ` Daniel Walker
  0 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 23:17 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-11 at 23:13 +0000, Russell King - ARM Linux wrote:
> You don't understand what's going on here.  You want to offer
> the question "Support ARM V7 processor" to the user, and it's
> acceptable on Scorpion to say "no" to this?
> 
> I think not.  Do it in the same way other platforms do it, and
> select CPU_V7 from ARCH_MSM_SCORPION - if you have Scorpion, then
> surely V7 CPU support is mandatory.
> 
> You only add to the 'if' line if the support is optional for your
> platform.

Ok .. I'll drop this one ..

Daniel

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 16/18] arm: msm: add arch_has_speculative_dfetch()
  2010-01-11 22:47 ` [RFC 16/18] arm: msm: add arch_has_speculative_dfetch() Daniel Walker
@ 2010-01-11 23:33   ` Russell King - ARM Linux
  2010-01-12  0:28     ` Daniel Walker
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:35PM -0800, Daniel Walker wrote:
> From: Steve Muckle <smuckle@quicinc.com>
> 
> The Scorpion CPU speculatively reads data into the cache. This
> may occur while a region of memory is being written via DMA, so
> that region must be invalidated when it is brought under CPU
> control after the DMA transaction finishes, assuming the DMA
> was either bidirectional or from the device.
> 
> Currently both a clean and invalidate are being done for
> DMA_BIDIRECTIONAL in dma_unmap_single. Only an invalidate should be
> required here. There are drivers that currently rely on the clean
> however so this will be removed when those drivers are updated.

NAK.  There are patches around (and potentially queued) for the next
merge window which properly address this.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 05/18] arm: msm: implement ioremap_strongly_ordered
  2010-01-11 22:47 ` [RFC 05/18] arm: msm: implement ioremap_strongly_ordered Daniel Walker
@ 2010-01-11 23:37   ` Russell King - ARM Linux
  2010-01-28 23:04     ` Larry Bassel
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:37 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:24PM -0800, Daniel Walker wrote:
> From: Larry Bassel <lbassel@quicinc.com>
> 
> Both the clean and invalidate functionality needed
> for the video encoder and 7x27 barrier code
> need to have a strongly ordered mapping set up
> so that one may perform a write to strongly ordered
> memory. The generic ARM code does not provide this.
> 
> The generic ARM code does provide MT_DEVICE, which starts
> as strongly ordered, but the code later turns the buffered flag
> on for ARMv6 in order to make the device shared. This is not
> suitable for my purpose, so this patch adds code for a
> MT_DEVICE_STRONGLY_ORDERED mapping type.

This doesn't really describe what "my purpose" is; the patch description
is too vague to ascertain why this is required.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-11 22:47 ` [RFC 06/18] arm: msm: implement proper dmb() for 7x27 Daniel Walker
@ 2010-01-11 23:39   ` Russell King - ARM Linux
  2010-01-11 23:45     ` Daniel Walker
  2010-01-19 17:16   ` Jamie Lokier
  1 sibling, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:39 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:25PM -0800, Daniel Walker wrote:
> From: Larry Bassel <lbassel@quicinc.com>
> 
> For 7x27 it is necessary to write to strongly
> ordered memory after executing the coprocessor 15
> instruction dmb instruction.
> 
> This is only for data barrier dmb().
> Note that the test for 7x27 is done on all MSM platforms
> (even ones such as 7201a whose kernel is distinct from
> that of 7x25/7x27).
> 
> Acked-by: Willie Ruan <wruan@quicinc.com>
> Signed-off-by: Larry Bassel <lbassel@quicinc.com>
> Signed-off-by: Daniel Walker <dwalker@codeaurora.org>

Can only see half of this change - what's the actual implementation of
arch_barrier_extra()?

I'd prefer not to include asm/memory.h into asm/system.h to avoid
needlessly polluting headers.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions
  2010-01-11 22:47 ` [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions Daniel Walker
@ 2010-01-11 23:44   ` Russell King - ARM Linux
  2010-01-12  0:52     ` Ruan, Willie
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:23PM -0800, Daniel Walker wrote:
> From: Willie Ruan <wruan@quicinc.com>
> 
> Suspend function should be called before L2 cache power is turned off
> to save power. Resume function should be called when the power is
> reapplied.

What is this 'collapsed' argument for?  Who is responsible for calling
the suspend/resume functions?

This is clearly a case where inappropriate commenting is a problem:

> +void l2x0_resume(int collapsed)
> +{
> +	if (collapsed)
> +		/* Restore aux control register value */
> +		writel(aux_ctrl_save, l2x0_base + L2X0_AUX_CTRL);
> +
> +	/* Enable the cache */
> +	writel(1, l2x0_base + L2X0_CTRL);
> +}

The above comments provide little in the way of useful value - they
just tell the already informed reader what's going on, but not why.
Eg, a comment before the function describing what this 'collapsed'
argument is would be much more useful, and describing when this
should be called.
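For example, a header comment of the kind being asked for might read (wording illustrative, not from the patch):

```c
/*
 * l2x0_resume() - re-enable the L2 cache on the resume path.
 *
 * @collapsed: non-zero when the controller actually lost power
 *             ("collapsed"), so the auxiliary control value saved by
 *             l2x0_suspend() must be reprogrammed before the cache is
 *             switched back on; zero when only the enable bit was
 *             cleared and the controller retained its state.
 *
 * Platform power-management code should call this after power is
 * reapplied, before anything depends on the L2 being active.
 */
void l2x0_resume(int collapsed);
```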

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-11 23:39   ` Russell King - ARM Linux
@ 2010-01-11 23:45     ` Daniel Walker
  2010-01-12  0:01       ` Russell King - ARM Linux
  0 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-11 23:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-11 at 23:39 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 11, 2010 at 02:47:25PM -0800, Daniel Walker wrote:
> > From: Larry Bassel <lbassel@quicinc.com>
> > 
> > For 7x27 it is necessary to write to strongly
> > ordered memory after executing the coprocessor 15
> > instruction dmb instruction.
> > 
> > This is only for data barrier dmb().
> > Note that the test for 7x27 is done on all MSM platforms
> > (even ones such as 7201a whose kernel is distinct from
> > that of 7x25/7x27).
> > 
> > Acked-by: Willie Ruan <wruan@quicinc.com>
> > Signed-off-by: Larry Bassel <lbassel@quicinc.com>
> > Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
> 
> Can only see half of this change - what's the actual implementation of
> arch_barrier_extra()?
> 
> I'd prefer not to include asm/memory.h into asm/system.h to avoid
> needlessly polluting headers.

I don't have a real patch for it yet, but here are the pieces ..

+#define arch_barrier_extra() do \
+       { if (machine_is_msm7x27_surf() || machine_is_msm7x27_ffa())  \
+               write_to_strongly_ordered_memory(); \
+       } while (0)

(btw, the machine types above aren't registered either..)

and memory.c

/* arch/arm/mach-msm/memory.c
 *
 * Copyright (C) 2007 Google, Inc.
 * Copyright (c) 2009, Code Aurora Forum. All rights reserved.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 */

#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/bootmem.h>
#include <asm/pgtable.h>
#include <asm/io.h>
#include <asm/mach/map.h>

int arch_io_remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
			    unsigned long pfn, unsigned long size, pgprot_t prot)
{
	unsigned long pfn_addr = pfn << PAGE_SHIFT;
	if ((pfn_addr >= 0x88000000) && (pfn_addr < 0xD0000000)) {
		prot = pgprot_device(prot);
		printk(KERN_INFO "remapping device %lx\n", pgprot_val(prot));
	}
	return remap_pfn_range(vma, addr, pfn, size, prot);
}

void *zero_page_strongly_ordered;

static void map_zero_page_strongly_ordered(void)
{
	if (zero_page_strongly_ordered)
		return;

	zero_page_strongly_ordered =
		ioremap_strongly_ordered(page_to_pfn(empty_zero_page)
		<< PAGE_SHIFT, PAGE_SIZE);
}

void write_to_strongly_ordered_memory(void)
{
	map_zero_page_strongly_ordered();
	*(int *)zero_page_strongly_ordered = 0;
}

void flush_axi_bus_buffer(void)
{
	__asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
				    : : "r" (0) : "memory");
	write_to_strongly_ordered_memory();
}

void *alloc_bootmem_aligned(unsigned long size, unsigned long alignment)
{
	void *unused_addr = NULL;
	unsigned long addr, tmp_size, unused_size;

	/* Allocate maximum size needed, see where it ends up.
	 * Then free it -- in this path there are no other allocators
	 * so we can depend on getting the same address back
	 * when we allocate a smaller piece that is aligned
	 * at the end (if necessary) and the piece we really want,
	 * then free the unused first piece.
	 */

	tmp_size = size + alignment - PAGE_SIZE;
	addr = (unsigned long)alloc_bootmem(tmp_size);
	free_bootmem(__pa(addr), tmp_size);

	unused_size = alignment - (addr % alignment);
	if (unused_size)
		unused_addr = alloc_bootmem(unused_size);

	addr = (unsigned long)alloc_bootmem(size);
	if (unused_size)
		free_bootmem(__pa(unused_addr), unused_size);

	return (void *)addr;
}

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-11 22:47 ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Daniel Walker
@ 2010-01-11 23:45   ` Russell King - ARM Linux
  2010-01-12 10:51     ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burston Scorpion Catalin Marinas
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-11 23:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:27PM -0800, Daniel Walker wrote:
> From: Larry Bassel <lbassel@quicinc.com>
> 
> This change improves the following LMBench benchmarks
> by over 15%:

Is this something that could be done in the platform initialisation code
rather than the processor code?  It's clearly not specific to all ARMv7
CPUs.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-11 23:45     ` Daniel Walker
@ 2010-01-12  0:01       ` Russell King - ARM Linux
  2010-01-19 17:28         ` Jamie Lokier
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-12  0:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 03:45:16PM -0800, Daniel Walker wrote:
> On Mon, 2010-01-11 at 23:39 +0000, Russell King - ARM Linux wrote:
> > On Mon, Jan 11, 2010 at 02:47:25PM -0800, Daniel Walker wrote:
> > > From: Larry Bassel <lbassel@quicinc.com>
> > > 
> > > For 7x27 it is necessary to write to strongly
> > > ordered memory after executing the coprocessor 15
> > > instruction dmb instruction.
> > > 
> > > This is only for data barrier dmb().
> > > Note that the test for 7x27 is done on all MSM platforms
> > > (even ones such as 7201a whose kernel is distinct from
> > > that of 7x25/7x27).
> > > 
> > > Acked-by: Willie Ruan <wruan@quicinc.com>
> > > Signed-off-by: Larry Bassel <lbassel@quicinc.com>
> > > Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
> > 
> > Can only see half of this change - what's the actual implementation of
> > arch_barrier_extra()?
> > 
> > I'd prefer not to include asm/memory.h into asm/system.h to avoid
> > needlessly polluting headers.
> 
> I don't have a real patch for it yet, but here are the pieces ..
> 
> +#define arch_barrier_extra() do \
> +       { if (machine_is_msm7x27_surf() || machine_is_msm7x27_ffa())  \
> +               write_to_strongly_ordered_memory(); \
> +       } while (0)
> 
> (btw, the machine types above aren't registered either..)

Hmm.  We can do far better than this.  Rather than do two tests and call
a function, wouldn't it be better to do something like:

#ifdef CONFIG_ARM_DMB_MEM
extern int *dmb_mem;
#define dmb_extra() do { if (dmb_mem) *dmb_mem = 0; } while (0)
#else
#define dmb_extra() do { } while (0)
#endif

in asm/system.h, and only set dmb_mem for the affected platforms?

> static void map_zero_page_strongly_ordered(void)
> {
> 	if (zero_page_strongly_ordered)
> 		return;
> 
> 	zero_page_strongly_ordered =
> 		ioremap_strongly_ordered(page_to_pfn(empty_zero_page)
> 		<< PAGE_SHIFT, PAGE_SIZE);

This can't work.  You're not allowed to map the same memory with differing
memory types from ARMv7.  This ends up mapping 'empty_zero_page' as both
cacheable memory and strongly ordered.  That's illegal according to the
ARM ARM.

You need to find something else to map - allocating a page of system
memory for this won't work either (it'll have the same issue.)

(This is a new problem to the ARM architecture, one which we're only just
getting to grips with - many of our old tricks with remapping DMA memory
no longer work on these latest CPUs.  You really must not take the
remapping which the kernel does today as a good idea anymore.)

> void flush_axi_bus_buffer(void)
> {
> 	__asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
> 				    : : "r" (0) : "memory");
> 	write_to_strongly_ordered_memory();

Isn't this just one of your modified dmb()s ?

> }
> 
> void *alloc_bootmem_aligned(unsigned long size, unsigned long alignment)
> {
> 	void *unused_addr = NULL;
> 	unsigned long addr, tmp_size, unused_size;
> 
> 	/* Allocate maximum size needed, see where it ends up.
> 	 * Then free it -- in this path there are no other allocators
> 	 * so we can depend on getting the same address back
> 	 * when we allocate a smaller piece that is aligned
> 	 * at the end (if necessary) and the piece we really want,
> 	 * then free the unused first piece.
> 	 */
> 
> 	tmp_size = size + alignment - PAGE_SIZE;
> 	addr = (unsigned long)alloc_bootmem(tmp_size);
> 	free_bootmem(__pa(addr), tmp_size);
> 
> 	unused_size = alignment - (addr % alignment);
> 	if (unused_size)
> 		unused_addr = alloc_bootmem(unused_size);
> 
> 	addr = (unsigned long)alloc_bootmem(size);
> 	if (unused_size)
> 		free_bootmem(__pa(unused_addr), unused_size);
> 
> 	return (void *)addr;

Erm, there is __alloc_bootmem(size, align, 0) - the bootmem allocator
already does alignment.
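The arithmetic the open-coded version relies on can be checked in isolation. This is a userspace model of the address computation in `alloc_bootmem_aligned()` above (the function name and behaviour are taken from the quoted patch; the model just computes where the final allocation would land, without a real allocator). It also exposes the corner case `__alloc_bootmem(size, align, 0)` avoids for free: when `addr` is already aligned, `unused_size` is a full `alignment` rather than 0, so a whole alignment's worth of boot memory is skipped for nothing.

```c
/* Given the address a first-fit boot allocator would hand back, compute
 * where the aligned allocation in the quoted code ends up. */
unsigned long bootmem_aligned_addr(unsigned long addr, unsigned long alignment)
{
	unsigned long unused_size = alignment - (addr % alignment);

	/* the quoted code alloc_bootmem()s unused_size bytes here, so
	 * the real 'size' allocation starts right after them */
	return unused_size ? addr + unused_size : addr;
}
```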

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 16/18] arm: msm: add arch_has_speculative_dfetch()
  2010-01-11 23:33   ` Russell King - ARM Linux
@ 2010-01-12  0:28     ` Daniel Walker
  2010-01-12  8:59       ` Russell King - ARM Linux
  0 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-12  0:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-11 at 23:33 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 11, 2010 at 02:47:35PM -0800, Daniel Walker wrote:
> > From: Steve Muckle <smuckle@quicinc.com>
> > 
> > The Scorpion CPU speculatively reads data into the cache. This
> > may occur while a region of memory is being written via DMA, so
> > that region must be invalidated when it is brought under CPU
> > control after the DMA transaction finishes, assuming the DMA
> > was either bidirectional or from the device.
> > 
> > Currently both a clean and invalidate are being done for
> > DMA_BIDIRECTIONAL in dma_unmap_single. Only an invalidate should be
> > required here. There are drivers that currently rely on the clean
> > however so this will be removed when those drivers are updated.
> 
> NAK.  There are patches around (and potentially queued) for the next
> merge window which properly address this.

Do you have any hints on who's doing the development ?

Daniel


* [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions
  2010-01-11 23:44   ` Russell King - ARM Linux
@ 2010-01-12  0:52     ` Ruan, Willie
  0 siblings, 0 replies; 68+ messages in thread
From: Ruan, Willie @ 2010-01-12  0:52 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Russell,

Please see my comments below with [wr].

-----Original Message-----
From: Russell King - ARM Linux [mailto:linux at arm.linux.org.uk] 
Sent: Monday, January 11, 2010 3:44 PM
To: Daniel Walker
Cc: linux-arm-kernel at lists.infradead.org; Ruan, Willie
Subject: Re: [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions

On Mon, Jan 11, 2010 at 02:47:23PM -0800, Daniel Walker wrote:
> From: Willie Ruan <wruan@quicinc.com>
> 
> Suspend function should be called before L2 cache power is turned off
> to save power. Resume function should be called when the power is
> reapplied.

What is this 'collapsed' argument for?  Who is responsible for calling
the suspend/resume functions?

[wr] The application processor in Qualcomm's 7xxx and 8xxx series chips can be powered off during Linux suspend and idle time. The L2 cache is on the same power rail as the processor for current chips. We use "power collapse" to describe both the action (turning off apps power) and the state (apps being powered off). platform_suspend_ops.enter() and arch_idle() should call l2x0_suspend and l2x0_resume before and after collapsing the power. Since the action could be aborted due to a pending interrupt or other events, we need a variable to record whether the power was actually turned off or not. That state variable is 'collapsed' here.


This is clearly a case where inappropriate commenting is a problem:

[wr] Surely I can change the commit comment. I'll add WHY this patch is needed. Do you have any suggestion for the function names? They are 'suspend' and 'resume' functions, but they are also called from arch_idle.

> +void l2x0_resume(int collapsed)
> +{
> +	if (collapsed)
> +		/* Restore aux control register value */
> +		writel(aux_ctrl_save, l2x0_base + L2X0_AUX_CTRL);
> +
> +	/* Enable the cache */
> +	writel(1, l2x0_base + L2X0_CTRL);
> +}

The above comments provide little in the way of useful value - they
just tell the already informed reader what's going on, but not why.
Eg, a comment before the function describing what this 'collapsed'
argument is would be much more useful, and describing when this
should be called.

[wr] I can do the change. 

Thank you,
~Willie
[/wr]
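A commented pair along the lines Russell asks for might look roughly like this. It is a userspace model, not the real cache-l2x0.c: `writel()`/`readl()` on the MMIO block are replaced by plain array accesses so the `collapsed` contract described in [wr] can be exercised, and the register names follow the quoted patch.

```c
#include <stdint.h>

enum { L2X0_AUX_CTRL, L2X0_CTRL, NR_REGS };

uint32_t l2x0_regs[NR_REGS];	/* stand-in for the L2X0 MMIO block */
static uint32_t aux_ctrl_save;

/*
 * l2x0_suspend - prepare the L2 for a possible power collapse.
 *
 * On MSM 7xxx/8xxx the L2 shares a power rail with the application
 * processor, which may be cut during suspend and idle ("power
 * collapse").  Save what the hardware would lose, then disable the
 * cache.
 */
void l2x0_suspend(void)
{
	aux_ctrl_save = l2x0_regs[L2X0_AUX_CTRL];
	l2x0_regs[L2X0_CTRL] = 0;
}

/*
 * l2x0_resume - re-enable the L2 after an (attempted) power collapse.
 * @collapsed: nonzero if power was actually removed.  The collapse can
 * abort on a pending interrupt; in that case the aux control register
 * still holds its value and must not be rewritten.
 */
void l2x0_resume(int collapsed)
{
	if (collapsed)
		l2x0_regs[L2X0_AUX_CTRL] = aux_ctrl_save;
	l2x0_regs[L2X0_CTRL] = 1;
}
```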


* [RFC 16/18] arm: msm: add arch_has_speculative_dfetch()
  2010-01-12  0:28     ` Daniel Walker
@ 2010-01-12  8:59       ` Russell King - ARM Linux
  0 siblings, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-12  8:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 04:28:53PM -0800, Daniel Walker wrote:
> On Mon, 2010-01-11 at 23:33 +0000, Russell King - ARM Linux wrote:
> > On Mon, Jan 11, 2010 at 02:47:35PM -0800, Daniel Walker wrote:
> > > From: Steve Muckle <smuckle@quicinc.com>
> > > 
> > > The Scorpion CPU speculatively reads data into the cache. This
> > > may occur while a region of memory is being written via DMA, so
> > > that region must be invalidated when it is brought under CPU
> > > control after the DMA transaction finishes, assuming the DMA
> > > was either bidirectional or from the device.
> > > 
> > > Currently both a clean and invalidate are being done for
> > > DMA_BIDIRECTIONAL in dma_unmap_single. Only an invalidate should be
> > > required here. There are drivers that currently rely on the clean
> > > however so this will be removed when those drivers are updated.
> > 
> > NAK.  There are patches around (and potentially queued) for the next
> > merge window which properly address this.
> 
> Do you have any hints on who's doing the development ?

Me, and they've been posted to this mailing list a few times in the last
months.


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-11 23:45   ` Russell King - ARM Linux
@ 2010-01-12 10:51     ` Catalin Marinas
  2010-01-12 11:23       ` Shilimkar, Santosh
  2010-01-12 11:44       ` Russell King - ARM Linux
  0 siblings, 2 replies; 68+ messages in thread
From: Catalin Marinas @ 2010-01-12 10:51 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-11 at 23:45 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 11, 2010 at 02:47:27PM -0800, Daniel Walker wrote:
> > From: Larry Bassel <lbassel@quicinc.com>
> >
> > This change improves the following LMBench benchmarks
> > by over 15%:
> 
> Is this something that could be done in the platform initialisation code
> rather than the processor code?  It's clearly not specific to all ARMv7
> CPUs.

We discussed this in the past but the thread died. There are various bits
that may need to be enabled before the CPU is initialised but it is
highly dependent on the hardware configuration and not only the CPU
type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
which are fine to set on RealView but not on OMAP because the kernel
there is running in non-secure mode. Now Scorpion has other needs.

Since such initialisation would run before the MMU is enabled, should we
add an additional per-platform macro to be invoked before the CPU is set
up? A pointer in the machine_desc structure to an asm routine would also
work assuming that care is taken to calculate the phys address and the
code is position independent.

An alternative would be to briefly enable the MMU for the initialisation
(using a temporary page table) then disable it and switch to the proper
one built via create_mapping(). This would allow some initialisation
code to be written in C (though I'm not sure it's worth the effort).

-- 
Catalin


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 10:51     ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
@ 2010-01-12 11:23       ` Shilimkar, Santosh
  2010-01-12 11:44       ` Russell King - ARM Linux
  1 sibling, 0 replies; 68+ messages in thread
From: Shilimkar, Santosh @ 2010-01-12 11:23 UTC (permalink / raw)
  To: linux-arm-kernel

> -----Original Message-----
> From: linux-arm-kernel-bounces at lists.infradead.org [mailto:linux-arm-kernel-
> bounces at lists.infradead.org] On Behalf Of Catalin Marinas
> Sent: Tuesday, January 12, 2010 4:22 PM
> To: Russell King - ARM Linux
> Cc: Larry Bassel; Daniel Walker; linux-arm-kernel at lists.infradead.org
> Subject: Re: [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
> 
> On Mon, 2010-01-11 at 23:45 +0000, Russell King - ARM Linux wrote:
> > On Mon, Jan 11, 2010 at 02:47:27PM -0800, Daniel Walker wrote:
> > > From: Larry Bassel <lbassel@quicinc.com>
> > >
> > > This change improves the following LMBench benchmarks
> > > by over 15%:
> >
> > Is this something that could be done in the platform initialisation code
> > rather than the processor code?  It's clearly not specific to all ARMv7
> > CPUs.
> 
> We discussed this in the past but the thread died. There are various bits
> that may need to be enabled before the CPU is initialised but it is
> highly dependent on the hardware configuration and not only the CPU
> type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
> which are fine to set on RealView but not on OMAP because the kernel
> there is running in non-secure mode. Now Scorpion has other needs.
> 
> Since such initialisation would run before the MMU is enabled, should we
> add an additional per-platform macro to be invoked before the CPU is set
> up? A pointer in the machine_desc structure to an asm routine would also
> work assuming that care is taken to calculate the phys address and the
> code is position independent.
Thanks for bringing this up. A per-platform macro would be useful and
easy to populate, and is surely needed.
> An alternative would be to briefly enable the MMU for the initialisation
> (using a temporary page table) then disable it and switch to the proper
> one built via create_mapping(). This would allow some initialisation
> code to be written in C (though I'm not sure it's worth the effort).
> 

Regards,
Santosh


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 10:51     ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
  2010-01-12 11:23       ` Shilimkar, Santosh
@ 2010-01-12 11:44       ` Russell King - ARM Linux
  2010-01-12 13:32         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
  2010-01-12 20:21         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Nicolas Pitre
  1 sibling, 2 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-12 11:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 12, 2010 at 10:51:41AM +0000, Catalin Marinas wrote:
> > We discussed this in the past but the thread died. There are various bits
> that may need to be enabled before the CPU is initialised but it is
> highly dependent on the hardware configuration and not only the CPU
> type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
> which are fine to set on RealView but not on OMAP because the kernel
> there is running in non-secure mode. Now Scorpion has other needs.
> 
> Since such initialisation would run before the MMU is enabled, should we
> add an additional per-platform macro to be invoked before the CPU is set
> up? A pointer in the machine_desc structure to an asm routine would also
> work assuming that care is taken to calculate the phys address and the
> code is position independent.

What you're asking for is both platform and CPU dependent - which makes
it much more difficult to generalize.  It also throws a spanner in the
works for the DT people.

> An alternative would be to briefly enable the MMU for the initialisation
> > (using a temporary page table) then disable it and switch to the proper
> one built via create_mapping(). This would allow some initialisation
> code to be written in C (though I'm not sure it's worth the effort).

How would that work?  You're saying on one hand that there's initialization
which needs to be done before the MMU is initialized, and in this paragraph
you're saying it can be done while the MMU is initialized and enabled.

Well, if it can be done while the MMU is initialized and enabled, why can't
it be done later?

Actually, what you describe is what's already being done by the head.S
code - we already build a temporary page table and enable the MMU so that
we can get the C code running to setup the page tables properly.

It strikes me that things are just becoming excessively complicated with
these seemingly "catch-22" issues.  Maybe a totally different approach is
needed - such as requiring some of this low level setup to be done by the
platform's boot loader?


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 11:44       ` Russell King - ARM Linux
@ 2010-01-12 13:32         ` Catalin Marinas
  2010-01-12 13:58           ` Russell King - ARM Linux
  2010-01-13  6:14           ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Shilimkar, Santosh
  2010-01-12 20:21         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Nicolas Pitre
  1 sibling, 2 replies; 68+ messages in thread
From: Catalin Marinas @ 2010-01-12 13:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-12 at 11:44 +0000, Russell King - ARM Linux wrote:
> On Tue, Jan 12, 2010 at 10:51:41AM +0000, Catalin Marinas wrote:
> > We discussed in the past but the thread died. There are various bits
> > that may need to be enabled before the CPU is initialised but it is
> > highly dependent on the hardware configuration and not only the CPU
> > type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
> > which are fine to set on RealView but not on OMAP because the kernel
> > there is running in non-secure mode. Now Scorpion has other needs.
> >
> > Since such initialisation would run before the MMU is enabled, should we
> > add an additional per-platform macro to be invoked before the CPU is set
> > up? A pointer in the machine_desc structure to an asm routine would also
> > work assuming that care is taken to calculate the phys address and the
> > code is position independent.
> 
> What you're asking for is both platform and CPU dependent - which makes
> it much more difficult to generalize.  It also throws a spanner in the
> works for the DT people.

If it would only be CPU dependent, we could just add some ID checking in
proc-v*.S, but for some cases it's also platform dependent (like
secure vs. non-secure vs. secure monitor API).
> 
> > An alternative would be to briefly enable the MMU for the initialisation
> > > (using a temporary page table) then disable it and switch to the proper
> > one built via create_mapping(). This would allow some initialisation
> > code to be written in C (though I'm not sure it's worth the effort).
> 
> How would that work?  You're saying on one hand that there's initialization
> which needs to be done before the MMU is initialized, and in this paragraph
> you're saying it can be done while the MMU is initialized and enabled.

Standard (architecture) MMU initialisation without additional
optimisation bits (similar to what we have in the decompressor). It
won't support SMP, shared page tables etc.

> Actually, what you describe is what's already being done by the head.S
> code - we already build a temporary page table and enable the MMU so that
> we can get the C code running to setup the page tables properly.

Yes, that's the table I was referring to. It could be even made not to
depend on CONFIG_SMP and always make it UP (no shared bit, this way it
would be simpler to run SMP kernel on UP hardware by doing some checks
later).

We still have things like the SMP/nAMP mode which is Cortex-A9 and
ARM11MPCore specific. There is also the TLB ops broadcasting bit in
ACTLR which is also specific to ARM Ltd cores and some of these bits may
not be accessible directly if you run in non-secure mode.

> It strikes me that things are just becoming excessively complicated with
> these seemingly "catch-22" issues.  Maybe a totally different approach is
> needed - such as requiring some of this low level setup to be done by the
> platform's boot loader?

Ideally, yes, it would be nice if these were done by the boot loader. We
would have to define some clear requirements, or at least say that
Linux only touches the architected registers and not the
implementation defined ones. But I'm not sure how feasible this would
be.

There is also the CPU hotplug case where CPUs come back via the setup
function and may need to touch bits like SMP/nAMP. This should work
together with a boot monitor.

-- 
Catalin


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 13:32         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
@ 2010-01-12 13:58           ` Russell King - ARM Linux
  2010-01-12 14:41             ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
  2010-01-13  6:14           ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Shilimkar, Santosh
  1 sibling, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-12 13:58 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 12, 2010 at 01:32:56PM +0000, Catalin Marinas wrote:
> On Tue, 2010-01-12 at 11:44 +0000, Russell King - ARM Linux wrote:
> > What you're asking for is both platform and CPU dependent - which makes
> > it much more difficult to generalize.  It also throws a spanner in the
> > works for the DT people.
> 
> If it would only be CPU dependent, we could just add some ID checking in
> proc-v*.S, but for some cases it's also platform dependent (like
> secure vs. non-secure vs. secure monitor API).
> 
> > > An alternative would be to briefly enable the MMU for the initialisation
> > > (using a temporary page table) then disable it and switch to the proper
> > > one built via create_mapping(). This would allow some initialisation
> > > code to be written in C (though I'm not sure it's worth the effort).
> > 
> > How would that work?  You're saying on one hand that there's initialization
> > which needs to be done before the MMU is initialized, and in this paragraph
> > you're saying it can be done while the MMU is initialized and enabled.
> 
> Standard (architecture) MMU initialisation without additional
> optimisation bits (similar to what we have in the decompressor). It
> won't support SMP, shared page tables etc.

That's more or less what is done already - up to the point where the
proper page tables are setup by paging_init().

> > Actually, what you describe is what's already being done by the head.S
> > code - we already build a temporary page table and enable the MMU so that
> > we can get the C code running to setup the page tables properly.
> 
> Yes, that's the table I was referring to. It could be even made not to
> depend on CONFIG_SMP and always make it UP (no shared bit, this way it
> would be simpler to run SMP kernel on UP hardware by doing some checks
> later).


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 13:58           ` Russell King - ARM Linux
@ 2010-01-12 14:41             ` Catalin Marinas
  2010-01-12 18:23               ` Daniel Walker
  0 siblings, 1 reply; 68+ messages in thread
From: Catalin Marinas @ 2010-01-12 14:41 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-12 at 13:58 +0000, Russell King - ARM Linux wrote:
> On Tue, Jan 12, 2010 at 01:32:56PM +0000, Catalin Marinas wrote:
> > On Tue, 2010-01-12 at 11:44 +0000, Russell King - ARM Linux wrote:
> > > Actually, what you describe is what's already being done by the head.S
> > > code - we already build a temporary page table and enable the MMU so that
> > > we can get the C code running to setup the page tables properly.
> >
> > Yes, that's the table I was referring to. It could be even made not to
> > depend on CONFIG_SMP and always make it UP (no shared bit, this way it
> > would be simpler to run SMP kernel on UP hardware by doing some checks
> > later).
> 
> From what I remember, there's a problem where the TTB flags don't match
> the memory which the page tables live in, so we had to make the two
> match - and it's not possible to change the TTB flags and the page
> tables simultaneously, certainly not without turning the MMU off, doing
> the modification and turning it back on again.

Indeed, it would need turning the MMU off and on again when switching to
a new page table.

I tried in the past to run an SMP kernel on a UP platform (and gave up
because of lack of time) and the main issue was the shared setting of
the page tables, where exclusives would no longer work on UP. Anyway, I
think this subject diverges from the original discussion (maybe for a
different thread).

> > We still have things like the SMP/nAMP mode which is Cortex-A9 and
> > ARM11MPCore specific. There is also the TLB ops broadcasting bit in
> > ACTLR which is also specific to ARM Ltd cores and some of these bits
> > may not be accessible directly if you run in non-secure mode.
> 
> If you're running in non-secure mode and the parts of the system you
> don't have access to haven't already been setup, you're running in a
> crippled environment - so this isn't really an argument.

My argument was that ACTLR is implementation-defined, so other cores
like Scorpion may have a different meaning for such bits (Daniel's
patches are probably held in the moderation queue, I only saw your
replies).

> > > It strikes me that things are just becoming excessively complicated with
> > > these seemingly "catch-22" issues.  Maybe a totally different approach is
> > > needed - such as requiring some of this low level setup to be done by the
> > > platform's boot loader?
> >
> > Ideally, yes, it would be nice if these were done by the boot loader. We
> > would have to define some clear requirements, or at least say that
> > Linux only touches the architected registers and not the
> > implementation defined ones. But I'm not sure how feasible this would
> > be.
> 
> Unless we do something like this, we're going to end up with lots of
> bits of additional platform specific non-conditional assembly in the
> early kernel boot path.

As a gatekeeper, maybe you could just reject such patches and enforce
the implementation-specific bits setting in the boot loader. That's also
true for the SMP/nAMP mode setting on A9 and 11MPCore.

> > There is also the CPU hotplug case where CPUs come back via the setup
> > function and may need to touch bits like SMP/nAMP. This should work
> > together with a boot monitor.
> 
> How does the CPU get back to the setup function?  Is the hardware aware
> of the address of our setup function?  I don't think so, so I don't buy
> this argument.
> 
> If the CPU is reset (eg, because power has been removed), it's going to
> want to start executing from the reset vector - which means it will come
> back to us via the boot loader.

It depends on what you put in the reset vector (on RealView, address 0
gets remapped to RAM). It is true that it can come back online via the
boot loader (especially if Linux is running in non-secure mode). There
are a few complications like the boot loader being around (flash) and
the kernel knowing its address but it's doable.

-- 
Catalin


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 14:41             ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
@ 2010-01-12 18:23               ` Daniel Walker
  2010-01-13 10:36                 ` Catalin Marinas
  0 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-12 18:23 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-12 at 14:41 +0000, Catalin Marinas wrote:
> My argument was that ACTLR is implementation-defined, so other cores
> like Scorpion may have a different meaning for such bits (Daniel's
> patches are probably held in the moderation queue, I only saw your
> replies). 

Yeah, they are being held for moderator approval ("Message has a
suspicious header").. I see a lot of other patchsets going through;
does anyone have a pointer to the right way to submit a series to
this list?

Daniel


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 11:44       ` Russell King - ARM Linux
  2010-01-12 13:32         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
@ 2010-01-12 20:21         ` Nicolas Pitre
  1 sibling, 0 replies; 68+ messages in thread
From: Nicolas Pitre @ 2010-01-12 20:21 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 12 Jan 2010, Russell King - ARM Linux wrote:

> On Tue, Jan 12, 2010 at 10:51:41AM +0000, Catalin Marinas wrote:
> > We discussed this in the past but the thread died. There are various bits
> > that may need to be enabled before the CPU is initialised but it is
> > highly dependent on the hardware configuration and not only the CPU
> > type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
> > which are fine to set on RealView but not on OMAP because the kernel
> > there is running in non-secure mode. Now Scorpion has other needs.
> > 
> > Since such initialisation would run before the MMU is enabled, should we
> > add an additional per-platform macro to be invoked before the CPU is set
> > up? A pointer in the machine_desc structure to an asm routine would also
> > work assuming that care is taken to calculate the phys address and the
> > code is position independent.
> 
> What you're asking for is both platform and CPU dependent - which makes
> it much more difficult to generalize.  It also throws a spanner in the
> works for the DT people.

We have a similar issue with the latest Marvell SOCs.  Those are using a 
PJ4 core which is available in two variants: ARMv6 compatible and ARMv7 
compatible. The problem is that both variants have the same main CPU ID: 
0x560f5810.  Now that the architecture level field has been obsoleted by
ARM Ltd (it contains 0xf), the architecture has to be determined through
other means.  There is unfortunately no other
standard register with a simple architecture level value like this field 
used to hold.

Of course we want a single kernel image to be able to work with both 
variants.  That means that we need to find a way at runtime to select 
between service functions from proc-v6.S or proc-v7.S, but the current 
simple value/mask scheme doesn't work anymore.

So...  Maybe a new scheme could be used instead without impacting 
backward compatibility.  For example, what about replacing the mask 
field with a probe function address?  Whenever the recorded CPU ID value 
in the proc_info structure is 0xffffff, the CPU mask value is instead a
probe function address that should return success/failure, so the CPU
probing could be as elaborate as it needs to be.  And if special
initializations have to be performed before the MMU is turned on, that
could be the place to perform them before success is returned.
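The sentinel/probe scheme could be sketched roughly as below. This is a userspace model, not the real arch/arm proc_info layout: the struct is simplified, `CPU_ID_PROBE` just names the 0xffffff sentinel from the mail, and `pj4_v7_probe` is an invented stand-in for whatever secondary-register check would distinguish the two PJ4 variants.

```c
#define CPU_ID_PROBE 0xffffffUL	/* sentinel: entry carries a probe fn */

struct proc_info {
	unsigned long cpu_val;	/* CPU_ID_PROBE selects the probe scheme */
	union {
		unsigned long cpu_mask;		   /* classic value/mask */
		int (*probe)(unsigned long cpuid); /* probe scheme */
	};
};

/* Classic entries keep the value/mask compare; sentinel entries call
 * the probe, which could also do any pre-MMU setup before returning
 * success. */
int proc_info_matches(const struct proc_info *p, unsigned long cpuid)
{
	if (p->cpu_val == CPU_ID_PROBE)
		return p->probe(cpuid);
	return (cpuid & p->cpu_mask) == p->cpu_val;
}

/* Invented probe: the real one would read some secondary ID register
 * since the main CPU ID is shared between the variants. */
static int pj4_v7_probe(unsigned long cpuid)
{
	return cpuid == 0x560f5810;
}
```

Existing value/mask entries keep working unchanged, so backward compatibility is preserved.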

What do you think?


Nicolas


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 13:32         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
  2010-01-12 13:58           ` Russell King - ARM Linux
@ 2010-01-13  6:14           ` Shilimkar, Santosh
  1 sibling, 0 replies; 68+ messages in thread
From: Shilimkar, Santosh @ 2010-01-13  6:14 UTC (permalink / raw)
  To: linux-arm-kernel

> -----Original Message-----
> From: linux-arm-kernel-bounces at lists.infradead.org [mailto:linux-arm-kernel-
> bounces at lists.infradead.org] On Behalf Of Catalin Marinas
> Sent: Tuesday, January 12, 2010 7:03 PM
> To: Russell King - ARM Linux
> Cc: Larry Bassel; Daniel Walker; linux-arm-kernel at lists.infradead.org
> Subject: Re: [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
> 
> On Tue, 2010-01-12 at 11:44 +0000, Russell King - ARM Linux wrote:
> > On Tue, Jan 12, 2010 at 10:51:41AM +0000, Catalin Marinas wrote:
> > > We discussed this in the past but the thread died. There are various bits
> > > that may need to be enabled before the CPU is initialised but it is
> > > highly dependent on the hardware configuration and not only the CPU
> > > type. For example, Cortex-A8/A9 may have some bits in the ACTLR register
> > > which are fine to set on RealView but not on OMAP because the kernel
> > > there is running in non-secure mode. Now Scorpion has other needs.
> > >
> > > Since such initialisation would run before the MMU is enabled, should we
> > > add an additional per-platform macro to be invoked before the CPU is set
> > > up? A pointer in the machine_desc structure to an asm routine would also
> > > work assuming that care is taken to calculate the phys address and the
> > > code is position independent.
> >
> > What you're asking for is both platform and CPU dependent - which makes
> > it much more difficult to generalize.  It also throws a spanner in the
> > works for the DT people.
> 
> If it would only be CPU dependent, we could just add some ID checking in
> proc-v*.S, but for some cases it's also platform dependent (like
> secure vs. non-secure vs. secure monitor API).
> >
> > > An alternative would be to briefly enable the MMU for the initialisation
> > > (using a temporary page table) then disable it and switch to the proper
> > > one built via create_mapping(). This would allow some initialisation
> > > code to be written in C (though I'm not sure it's worth the effort).
> >
> > How would that work?  You're saying on one hand that there's initialization
> > which needs to be done before the MMU is initialized, and in this paragraph
> > you're saying it can be done while the MMU is initialized and enabled.
> 
> Standard (architecture) MMU initialisation without additional
> optimisation bits (similar to what we have in the decompressor). It
> won't support SMP, shared page tables etc.
> 
> > Actually, what you describe is what's already being done by the head.S
> > code - we already build a temporary page table and enable the MMU so that
> > we can get the C code running to setup the page tables properly.
> 
> Yes, that's the table I was referring to. It could even be made not to
> depend on CONFIG_SMP and always be UP (no shared bit; this way it
> would be simpler to run an SMP kernel on UP hardware by doing some
> checks later).
> 
> We still have things like the SMP/nAMP mode which is Cortex-A9 and
> ARM11MPCore specific. There is also the TLB ops broadcasting bit in
> ACTLR which is also specific to ARM Ltd cores and some of these bits may
> not be accessible directly if you run in non-secure mode.
> 
> > It strikes me that things are just becoming excessively complicated with
> > these seemingly "catch-22" issues.  Maybe a totally different approach is
> > needed - such as requiring some of this low level setup to be done by the
> > platform's boot loader?
> 
> Ideally, yes, it would be nice if these were done by the boot loader. We
> would have to define some clear requirements, or at least say that
> Linux only touches the architected registers and not the
> implementation-defined ones. But I'm not sure how feasible this would
> be.
> 
> There is also the CPU hotplug case where CPUs come back via the setup
> function and may need to touch bits like SMP/nAMP. This should work
> together with a boot monitor.
Doing this in the boot loader only fixes half of the problem. When doing
power management in the kernel, you still need to play with these bits,
so having support in the kernel is preferable.
Also, people will use custom boot loaders for the same SoC, and in such
cases it's difficult to ensure/control that the needed bits are enabled.

Regards,
Santosh

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-12 18:23               ` Daniel Walker
@ 2010-01-13 10:36                 ` Catalin Marinas
  2010-01-19 17:38                   ` Jamie Lokier
  0 siblings, 1 reply; 68+ messages in thread
From: Catalin Marinas @ 2010-01-13 10:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-12 at 18:23 +0000, Daniel Walker wrote:
> On Tue, 2010-01-12 at 14:41 +0000, Catalin Marinas wrote:
> > My argument was that ACTLR is implementation-defined, so other cores
> > like Scorpion may have a different meaning for such bits (Daniel's
> > patches are probably held in the moderation queue, I only saw your
> > replies).
> 
> Yeah, they are being held for moderator approval ("Message has a
> suspicious header") .. I see a lot of other patchsets going through,
> does anyone have a location for the right method to submit a series to
> this list?

The problem is that subsequent patches probably came as replies to the
top e-mail but without the "Re: " prefix (which is fine). However, the
mail filters may reject mails with "In-Reply-To" headers without "Re: "
in the subject.

If you use Git, it has some options to send patches unthreaded (which I
don't particularly like but that's a way around this problem; an
alternative is to convince the moderator to remove this rule).

-- 
Catalin


* [RFC 03/18] arm: boot: remove old ARM ID for QSD
  2010-01-11 22:47 ` [RFC 03/18] arm: boot: remove old ARM ID for QSD Daniel Walker
@ 2010-01-15 21:26   ` Russell King - ARM Linux
  0 siblings, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-15 21:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 02:47:22PM -0800, Daniel Walker wrote:
> From: Steve Muckle <smuckle@quicinc.com>
> 
> The mask and ID pattern for older ARM IDs in the kernel
> decompressor matches the CPU ID for Scorpion, causing the
> v7 caching routines not to be run and kernel decompression
> to take significantly longer.
> 
> QSD may eventually use CPUs other than Scorpion, but they
> will adhere to the new ARM CPU ID format, which is
> incompatible with the entry for older ARM CPU IDs.

Actually, we need to change this.

> +#ifndef CONFIG_ARCH_MSM_SCORPION
>  		.word	0x00000000		@ old ARM ID
>  		.word	0x0000f000

		.word	0x41000000		@ old ARM ID
		.word	0xff00f000

would be more appropriate - but I don't know if this means we'll miss
out on some CPUs.  However, the latest spec indicates that this is
how it should be for matching ARMv2 and v3 architectures.
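For anyone following along, the decompressor selects its cache routines by walking a table of (value, mask) pairs and taking the first entry where (cpuid & mask) == value. A small standalone sketch of that compare (the new-format MIDR value used in the examples below is hypothetical, chosen only to illustrate the accidental match, not Scorpion's real ID):

```c
#include <stdint.h>

/* The first table entry whose masked compare succeeds wins. */
static int id_matches(uint32_t cpuid, uint32_t value, uint32_t mask)
{
	return (cpuid & mask) == value;
}
```

With the old entry (value 0x00000000, mask 0x0000f000), any new-format ID with zeroes in bits [15:12] matches by accident; the proposed entry (0x41000000 / 0xff00f000) restricts the match to implementer 'A' (ARM Ltd) IDs in the pre-v4 format.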


* [RFC 07/18] arm: mm: retry on QSD icache parity errors
  2010-01-11 22:47 ` [RFC 07/18] arm: mm: retry on QSD icache parity errors Daniel Walker
@ 2010-01-18 18:42   ` Ashwin Chaugule
  2010-01-19 16:16     ` Ashwin Chaugule
  0 siblings, 1 reply; 68+ messages in thread
From: Ashwin Chaugule @ 2010-01-18 18:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 5:47 PM, Daniel Walker <dwalker@codeaurora.org> wrote:
> From: Steve Muckle <smuckle@quicinc.com>
>

> +static int
> +do_imprecise_ext(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
> +{
> +#ifdef CONFIG_ARCH_MSM_SCORPION
> +        unsigned int regval;
> +        static unsigned char flush_toggle;
> +
> +        asm("mrc p15, 0, %0, c5, c1, 0\n" /* read adfsr for fault status */
> +            : "=r" (regval));
> +        if (regval == 0x2) {
> +                /* Fault was caused by icache parity error. Alternate
> +                 * simply retrying the access and flushing the icache. */
> +                flush_toggle ^= 1;
> +                if (flush_toggle)
> +                        asm("mcr p15, 0, %0, c7, c5, 0\n"
> +                            :
> +                            : "r" (regval)); /* input value is ignored */

Wouldn't you need regval = 0 here, to clear the EFSR ?


> +                /* Clear fault in EFSR. */
> +                asm("mcr p15, 7, %0, c15, c0, 1\n"
> +                    :
> +                    : "r" (regval));
> +                /* Clear fault in ADFSR. */
> +                regval = 0;
> +                asm("mcr p15, 0, %0, c5, c1, 0\n"
> +                    :
> +                    : "r" (regval));
> +                return 0;
> +        }
> +#endif
> +


* [RFC 07/18] arm: mm: retry on QSD icache parity errors
  2010-01-18 18:42   ` Ashwin Chaugule
@ 2010-01-19 16:16     ` Ashwin Chaugule
  0 siblings, 0 replies; 68+ messages in thread
From: Ashwin Chaugule @ 2010-01-19 16:16 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 18, 2010 at 1:42 PM, Ashwin Chaugule
<ashbertslists@gmail.com> wrote:

>> +static int
>> +do_imprecise_ext(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
>> +{
>> +#ifdef CONFIG_ARCH_MSM_SCORPION
>> +        unsigned int regval;
>> +        static unsigned char flush_toggle;
>> +
>> +        asm("mrc p15, 0, %0, c5, c1, 0\n" /* read adfsr for fault status */
>> +            : "=r" (regval));
>> +        if (regval == 0x2) {
>> +                /* Fault was caused by icache parity error. Alternate
>> +                 * simply retrying the access and flushing the icache. */
>> +                flush_toggle ^= 1;
>> +                if (flush_toggle)
>> +                        asm("mcr p15, 0, %0, c7, c5, 0\n"
>> +                            :
>> +                            : "r" (regval)); /* input value is ignored */
>
> Wouldn't you need regval = 0 here, to clear the EFSR ?
>
>

Gah. Nevermind. Need to write 1 to clear this reg. Alternating methods
to clear regs is just .. confusing. ;)
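The two conventions being mixed in this handler can be modelled in plain C (the helper names below are mine, purely illustrative): a write-0-to-clear register takes the written value directly, so writing 0 wipes it, while a write-1-to-clear register clears exactly the bits written as 1.

```c
#include <stdint.h>

/* Write-0-to-clear: the written value replaces the register contents. */
static uint32_t w0c_write(uint32_t reg, uint32_t val)
{
	(void)reg;		/* old contents are simply overwritten */
	return val;
}

/* Write-1-to-clear: each 1 bit written clears the matching status bit;
 * 0 bits leave the register untouched. */
static uint32_t w1c_write(uint32_t reg, uint32_t val)
{
	return reg & ~val;
}
```

So writing back the faulting value (0x2) to a W1C register clears the fault - which is why re-using regval works for the EFSR - while the ADFSR needs the explicit regval = 0 first.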


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-11 23:11   ` Russell King - ARM Linux
@ 2010-01-19 17:10     ` Jamie Lokier
  2010-01-19 17:33       ` Daniel Walker
  2010-02-04  0:09       ` David Brown
  0 siblings, 2 replies; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 17:10 UTC (permalink / raw)
  To: linux-arm-kernel

Russell King - ARM Linux wrote:
> On Mon, Jan 11, 2010 at 02:47:37PM -0800, Daniel Walker wrote:
> > From: Dave Estes <cestes@quicinc.com>
> > 
> > Handle incorrectly reported permission faults for qsd8650.  On
> > permission faults, retry the MVA to PA conversion.  If the retry
> > detects a translation fault, report it as a translation fault.
> 
> This is totally unacceptable to add such a demanding copyright header to
> any file, imposing this notice upon pre-existing code.  Please remove it.

Other files also.

I was going to enquire about another file in this patch, wondering if
the long copyright header is compatible with the GPLv2 used for the
kernel tree as a whole:

--- /dev/null
+++ b/Documentation/arm/msm/emulate_domain_manager.txt
@@ -0,0 +1,282 @@
+Copyright (c) 2009, Code Aurora Forum. All rights reserved.
+
+[... longish license header, like new BSD but different...

Notably:

+Redistributions in source form must retain the above copyright notice, this
+list of conditions and the following disclaimer as the first lines of this
+file unmodified.

So nobody can add a title to the documentation, add another copyright
year and name, convert it to Docbook, etc....?

But also, the simple fact that it is not a standard license raises the
question of whether it's acceptable at all.


For another file, I'm not sure that adding "All rights reserved" is ok.
Under the GPL, all rights are _not_ reserved:

--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -2,6 +2,7 @@
  *  linux/arch/arm/mm/fault.c
  *
  *  Copyright (C) 1995  Linus Torvalds
+ *  Copyright (c) 2009, Code Aurora Forum. All rights reserved.
  *  Modifications for ARM processor (c) 1995-2004 Russell King
  *
  * This program is free software; you can redistribute it and/or modify


-- Jamie


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-11 22:47 ` [RFC 06/18] arm: msm: implement proper dmb() for 7x27 Daniel Walker
  2010-01-11 23:39   ` Russell King - ARM Linux
@ 2010-01-19 17:16   ` Jamie Lokier
  1 sibling, 0 replies; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 17:16 UTC (permalink / raw)
  To: linux-arm-kernel

Daniel Walker wrote:
> From: Larry Bassel <lbassel@quicinc.com>
> 
> For 7x27 it is necessary to write to strongly
> ordered memory after executing the coprocessor 15
> dmb instruction.
> 
> This is only for data barrier dmb().
> Note that the test for 7x27 is done on all MSM platforms
> (even ones such as 7201a whose kernel is distinct from
> that of 7x25/7x27).

How is userspace dealing with this?

Userspace also needs dmb(), in threaded code.

See __kernel_dmb in arch/arm/kernel/entry-armv.S.

-- Jamie


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-12  0:01       ` Russell King - ARM Linux
@ 2010-01-19 17:28         ` Jamie Lokier
  2010-01-19 18:04           ` Russell King - ARM Linux
  0 siblings, 1 reply; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 17:28 UTC (permalink / raw)
  To: linux-arm-kernel

Russell King - ARM Linux wrote:
> > 	zero_page_strongly_ordered =
> > 		ioremap_strongly_ordered(page_to_pfn(empty_zero_page)
> > 		<< PAGE_SHIFT, PAGE_SIZE);
> 
> This can't work.  You're not allowed to map the same memory with differing
> memory types from ARMv7.  This ends up mapping 'empty_zero_page' as both
> cacheable memory and strongly ordered.  That's illegal according to the
> ARM ARM.

It's not an ARMv7, otherwise it wouldn't be using the mcr version of
dmb().  Does that make the mapping ok, since it's been ok for years on
< ARMv7?  Or are we trying to get away from doing that on all ARMs?

Actually it is only used on two very specific CPUs.  Perhaps it can be
confirmed as Not A Problem(tm) on those, with a comment to say why
it's ok in the mapping call?

> You need to find something else to map - allocating a page of system
> memory for this won't work either (it'll have the same issue.)

Is strongly ordered RAM or even uncached RAM used at all for anything
at the moment?  It looks quite tricky to allocate a little RAM that
never becomes part of the kernel direct mapping.

> > [alloc_bootmem_aligned]
>
> Erm, there is __alloc_bootmem(size, align, 0) - the bootmem allocator
> already does alignment.

I agree.
Best use the provided API.

If there _was_ a need for a new function, then the right place for it
would have been a patch to mm/bootmem.c, where everyone can see what
it does and avoid breaking it if they change how bootmem works.
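For reference, the rounding __alloc_bootmem() performs internally is the usual power-of-two round-up; a userspace sketch (function name is mine) of what a hand-rolled alloc_bootmem_aligned() would have duplicated:

```c
#include <stdint.h>

/* Round x up to the next multiple of align (align must be a power of
 * two) - the arithmetic the bootmem allocator already does when given
 * an explicit alignment argument. */
static uintptr_t align_up(uintptr_t x, uintptr_t align)
{
	return (x + align - 1) & ~(align - 1);
}
```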

-- Jamie


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-19 17:10     ` Jamie Lokier
@ 2010-01-19 17:33       ` Daniel Walker
  2010-01-19 17:43         ` Jamie Lokier
  2010-02-04  0:09       ` David Brown
  1 sibling, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-01-19 17:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-19 at 17:10 +0000, Jamie Lokier wrote:

> Notably:
> 
> +Redistributions in source form must retain the above copyright notice, this
> +list of conditions and the following disclaimer as the first lines of this
> +file unmodified.
> 
> So nobody can add a title to the documentation, add another copyright
> year and name, convert it to Docbook, etc....?

Everything I release will ultimately be licensed under the GPLv2
regardless of the license in the current files.. So it's just a matter
of removing that old license ..

> For another file, I'm not sure that adding "All rights reserved" is ok.
> Under the GPL, all rights are _not_ reserved:

This would be a problem because our lawyers mandate this.. AFAIK, it
doesn't change the licensing tho.

Daniel


* [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion.
  2010-01-13 10:36                 ` Catalin Marinas
@ 2010-01-19 17:38                   ` Jamie Lokier
  0 siblings, 0 replies; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 17:38 UTC (permalink / raw)
  To: linux-arm-kernel

Catalin Marinas wrote:
> If you use Git, it has some options to send patches unthreaded (which I
> don't particularly like but that's a way around this problem; an
> alternative is to convince the moderator to remove this rule).

Or add a rule to recognise Git patches.

-- Jamie


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-19 17:33       ` Daniel Walker
@ 2010-01-19 17:43         ` Jamie Lokier
  2010-01-19 17:49           ` Daniel Walker
  2010-01-19 18:09           ` Russell King - ARM Linux
  0 siblings, 2 replies; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 17:43 UTC (permalink / raw)
  To: linux-arm-kernel

Daniel Walker wrote:
> Everything I release will ultimately be licensed under the GPLv2
> regardless of the license in the current files.. So it's just a matter
> of removing that old license ..

Ok.  Please do :-)

> > For another file, I'm not sure that adding "All rights reserved" is ok.
> > Under the GPL, all rights are _not_ reserved:
> 
> This would be a problem because our lawyers mandate this.. AFAIK, it
> doesn't change the licensing tho.

If it doesn't change the license, why is it mandated?

Sounds like your lawyers aren't very familiar with open source licensing.

-- Jamie


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-19 17:43         ` Jamie Lokier
@ 2010-01-19 17:49           ` Daniel Walker
  2010-01-19 18:09           ` Russell King - ARM Linux
  1 sibling, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-19 17:49 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 2010-01-19 at 17:43 +0000, Jamie Lokier wrote:
> Daniel Walker wrote:
> > Everything I release will ultimately be licensed under the GPLv2
> > regardless of the license in the current files.. So it's just a matter
> > of removing that old license ..
> 
> Ok.  Please do :-)
> 
> > > For another file, I'm not sure that adding "All rights reserved" is ok.
> > > Under the GPL, all rights are _not_ reserved:
> > 
> > This would be a problem because our lawyers mandate this.. AFAIK, it
> > doesn't change the licensing tho.
> 
> If it doesn't change the license, why is it mandated?

I don't know .. But I would guess it has to do with potentially
re-licensing under a BSD license.

> Sounds like your lawyers aren't very familiar with open source licensing.

I'm not sure how familiar they are with it ..

Daniel


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-19 17:28         ` Jamie Lokier
@ 2010-01-19 18:04           ` Russell King - ARM Linux
  2010-01-19 21:12             ` Jamie Lokier
  0 siblings, 1 reply; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-19 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 19, 2010 at 05:28:35PM +0000, Jamie Lokier wrote:
> Russell King - ARM Linux wrote:
> > > 	zero_page_strongly_ordered =
> > > 		ioremap_strongly_ordered(page_to_pfn(empty_zero_page)
> > > 		<< PAGE_SHIFT, PAGE_SIZE);
> > 
> > This can't work.  You're not allowed to map the same memory with differing
> > memory types from ARMv7.  This ends up mapping 'empty_zero_page' as both
> > cacheable memory and strongly ordered.  That's illegal according to the
> > ARM ARM.
> 
> It's not an ARMv7, otherwise it wouldn't be using the mcr version of
> dmb().  Does that make the mapping ok, since it's been ok for years on
> < ARMv7?  Or are we trying to get away from doing that on all ARMs?

Technically, it also applies to ARMv6 as well.

> Actually it is only used on two very specific CPUs.  Perhaps it can be
> confirmed as Not A Problem(tm) on those, with a comment to say why
> it's ok in the mapping call?

The fact of the matter is that cache lines will be allocated for
empty_zero_page.  If this CPU is ARMv6 or ARMv7, with either an aliasing
or non-aliasing VIPT cache, you will get cache lines allocated for this
page which will overlap the strongly ordered mapping.

That in turn can turn the strongly ordered mapping into a cached mapping
which is definitely not what you want.

If your CPU speculatively prefetches, and it prefetches some data via a
cached mapping, the same thing can happen.

> > You need to find something else to map - allocating a page of system
> > memory for this won't work either (it'll have the same issue.)
> 
> Is strongly ordered RAM or even uncached RAM used at all for anything
> at the moment?  It looks quite tricky to allocate a little RAM that
> never becomes part of the kernel direct mapping.

'Memory, uncached' is used for DMA mappings on ARMv7 and soon to be on
ARMv6 as well to comply with the architecture requirements.  Strongly
ordered is used for a certain set of IOP3xx registers because that is
what is stipulated in the device documentation.

What is expressly not permitted for ARMv7 (and ARMv6) is having two or
more mappings of the same physical address with differing memory types
or sharability settings.

(Technically, that extends to cacheability modes as well - but if ARM Ltd
think that the kernel's going to comply with that, I think they're in
cloud cuckoo land.  Well, we could do _if_ (eg) ARM Ltd bring in hardware
DMA coherency as a mandatory architecture requirement.)


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-19 17:43         ` Jamie Lokier
  2010-01-19 17:49           ` Daniel Walker
@ 2010-01-19 18:09           ` Russell King - ARM Linux
  1 sibling, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-19 18:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 19, 2010 at 05:43:13PM +0000, Jamie Lokier wrote:
> Daniel Walker wrote:
> > Everything I release will ultimately be licensed under the GPLv2
> > regardless of the license in the current files.. So it's just a matter
> > of removing that old license ..
> 
> Ok.  Please do :-)
> 
> > > For another file, I'm not sure that adding "All rights reserved" is ok.
> > > Under the GPL, all rights are _not_ reserved:
> > 
> > This would be a problem because our lawyers mandate this.. AFAIK, it
> > doesn't change the licensing tho.
> 
> If it doesn't change the license, why is it mandated?
> 
> Sounds like your lawyers aren't very familiar with open source licensing.

It is quite normal to do:

 (C) 2010 Joe Bloggs, All Rights Reserved.

 GPLv2 boiler plate.

Here's some examples:

 * Copyright (C) 2007 Red Hat, Inc.  All rights reserved.  This copyrighted
 * material is made available to anyone wishing to use, modify, copy, or
 * redistribute it subject to the terms and conditions of the GNU General
 * Public License v.2.

 *   Copyright 2000-2008 H. Peter Anvin - All Rights Reserved

 * Copyright (C) 2008 Silicon Graphics, Inc. All rights reserved.

 *      Copyright (C) IBM Corporation, 2004. All rights reserved

 *   Copyright 2007 rPath, Inc. - All Rights Reserved


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-19 18:04           ` Russell King - ARM Linux
@ 2010-01-19 21:12             ` Jamie Lokier
  2010-01-19 23:11               ` Russell King - ARM Linux
  0 siblings, 1 reply; 68+ messages in thread
From: Jamie Lokier @ 2010-01-19 21:12 UTC (permalink / raw)
  To: linux-arm-kernel

Russell King - ARM Linux wrote:
> > It's not an ARMv7, otherwise it wouldn't be using the mcr version of
> > dmb().  Does that make the mapping ok, since it's been ok for years on
> > < ARMv7?  Or are we trying to get away from doing that on all ARMs?
> 
> Technically, it also applies to ARMv6 as well.
> 
> > Actually it is only used on two very specific CPUs.  Perhaps it can be
> > confirmed as Not A Problem(tm) on those, with a comment to say why
> > it's ok in the mapping call?
> 
> The fact of the matter is that cache lines will be allocated for
> empty_zero_page.  If this CPU is ARMv6 or ARMv7, with either an aliasing
> or non-aliasing VIPT cache, you will get cache lines allocated for this
> page which will overlap the strongly ordered mapping.
> 
> That in turn can turn the strongly ordered mapping into a cached mapping
> which is definitely not what you want.
> 
> If your CPU speculatively prefetches, and it prefetches some data via a
> cached mapping, the same thing can happen.

Fair enough.  Is it like this?

   1. Data ends up in cache lines from access via the cached mapping.
   2. Access via the strongly ordered mapping may still look at the cache,
      because it's easier that way and it's not supposed to have any data.
   3. Cache effectively intercepts the access.
   4. Bus does not see strongly ordered access.

> 'Memory, uncached' is used for DMA mappings on ARMv7 and soon to be on
> ARMv6 as well to comply with the architecture requirements.  Strongly
> ordered is used for a certain set of IOP3xx registers because that is
> what is stipulated in the device documentation.
> 
> What is expressly not permitted for ARMv7 (and ARMv6) is having two or
> more mappings of the same physical address with differing memory types
> or sharability settings.
> 
> (Technically, that extends to cacheability modes as well - but if ARM Ltd
> think that the kernel's going to comply with that, I think they're in
> cloud cuckoo land.  Well, we could do _if_ (eg) ARM Ltd bring in hardware
> DMA coherency as a mandatory architecture requirement.)

Isn't the above sequence of events you described earlier even _more_
likely to occur when there are simultaneous cacheable and
non-cacheable mappings?

I understand the reason for the cloud cuckoo land comment :-) But I'm
thinking, an implementation which has no problem with cacheable and
non-cacheable overlapping mappings is very unlikely to have a problem
with cacheable and strongly-ordered mappings.  Is there a reason to
believe otherwise?

-- Jamie


* [RFC 06/18] arm: msm: implement proper dmb() for 7x27
  2010-01-19 21:12             ` Jamie Lokier
@ 2010-01-19 23:11               ` Russell King - ARM Linux
  0 siblings, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-01-19 23:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 19, 2010 at 09:12:44PM +0000, Jamie Lokier wrote:
> Fair enough.  Is it like this?
> 
>    1. Data ends up in cache lines from access via the cached mapping.
>    2. Access via the strongly ordered mapping may still look at the cache,
>       because it's easier that way and it's not supposed to have any data.
>    3. Cache effectively intercepts the access.
>    4. Bus does not see strongly ordered access.

Yes, that's one possible scenario.

> Isn't the above sequence of events you described earlier even _more_
> likely to occur when there are simultaneous cacheable and
> non-cacheable mappings?

I guess it depends how the access hardware is designed - and what
effect an access resulting in a device or strongly ordered type access
(thereby having one set of ordering requirements) has when it hits a
cache line, which will have different ordering requirements.

With ARMv6 and above, it's no longer just about cache policy, as with
previous CPUs.  There are ordering requirements to ensure that (eg) device
accesses occur on the busses in program order rather than out of order,
and this requires hardware - which could be confused by a device
access hitting the cache.

The difference between this and differing cache policies is that with
differing cache policies the ordering requirement is the same.

(ISTR ARM Ltd's architecture folk know about the problem Linux has with
cache policies, and although this requirement is written into the arch
manual, they're aware that Linux will have major problems with it.  I
believe the only unpredictability that results from this is that a NC
region may end up with cache lines allocated, which may or may not be
hit.  That's not to say that it _won't_ become a problem with later
ARM CPUs, but let's pray that when it does, we have coherent DMA
support.)


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-01-11 22:47 ` [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature Daniel Walker
@ 2010-01-25 16:40   ` Catalin Marinas
  2010-01-25 17:04     ` Nicolas Pitre
                       ` (2 more replies)
  0 siblings, 3 replies; 68+ messages in thread
From: Catalin Marinas @ 2010-01-25 16:40 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Daniel,

On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> kernel hooks to handle domain changes, permission faults, and context
> switches.

In case you were not aware, there's a patch around (I think since 2007)
that removes the domain switching entirely from the kernel (given that
they have been deprecated for some time and may disappear completely in
the future):

http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html

It doesn't require handling domain faults and works on SMP as well.
Another advantage is that we can re-implement functions like
copy_from_user etc. only using LDR/LDM rather than LDRT with some
performance improvements.

Of course, being my patch, I'm pushing for it :-) but I think longer
term it is a better approach than the domains emulation. Any thoughts on
this?

> --- /dev/null
> +++ b/Documentation/arm/msm/emulate_domain_manager.txt
[...]
> +Software description
> +====================
> +
> +In order to disable domain manager mode the equivalent HW functionality must
> +be emulated in SW.  Any attempts to enable domain manager mode, must be
> +intercepted.
> +
> +Because domain manager mode is not enabled, permissions for the
> +associated domain will remain restricted.  Permission faults will be generated.
> +The permission faults will be intercepted.  The faulted pages/sections will
> +be modified to grant full access and execute permissions.
> +
> +The modified page tables must be restored when exiting domain manager mode.

BTW, on the current (unpatched) kernel, what happens if after a
set_fs(KERNEL_DS) call the kernel is preempted and a user space
application executed? Does it gain access to the kernel pages? Something
like this may affect your patches as well.
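For context on the question above: addr_limit is per-thread state (it lives in thread_info), so it is saved and restored with the task on a context switch. A toy model of that behaviour (names and types are mine, not the kernel's):

```c
#include <stdint.h>

/* Toy model: each task carries its own addr_limit, so set_fs(KERNEL_DS)
 * in one task does not widen the limit seen by another task that gets
 * scheduled - the switch brings in the incoming task's own limit. */
enum addr_limit { USER_DS, KERNEL_DS };

struct task {
	enum addr_limit limit;
};

static int access_ok_model(const struct task *t, uintptr_t addr,
			   uintptr_t user_top)
{
	return t->limit == KERNEL_DS || addr < user_top;
}
```

The analogous question for the domain-manager emulation is whether its per-task state (the modified page-table permissions) is likewise restored across a switch.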

Thanks.

-- 
Catalin


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-01-25 16:40   ` Catalin Marinas
@ 2010-01-25 17:04     ` Nicolas Pitre
  2010-01-25 18:25     ` Daniel Walker
  2010-03-22 18:11     ` Daniel Walker
  2 siblings, 0 replies; 68+ messages in thread
From: Nicolas Pitre @ 2010-01-25 17:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 25 Jan 2010, Catalin Marinas wrote:

> Hi Daniel,
> 
> On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > kernel hooks to handle domain changes, permission faults, and context
> > switches.
> 
> In case you were not aware, there's a patch around (I think since 2007)
> that removes the domain switching entirely from the kernel (given that
> they have been deprecated for some time and may disappear completely in
> the future):
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html

Also, the manager domain doesn't honor the NX bit, and that is causing
issues with speculative prefetching, as DMA coherent memory areas may
end up being speculatively prefetched unexpectedly.  The on-going DMA
handling rework takes care of the bigger issue, but the NX bit not being
observed in all domains is another one for which this patch is a nice
"fix".


Nicolas


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-01-25 16:40   ` Catalin Marinas
  2010-01-25 17:04     ` Nicolas Pitre
@ 2010-01-25 18:25     ` Daniel Walker
  2010-03-22 18:11     ` Daniel Walker
  2 siblings, 0 replies; 68+ messages in thread
From: Daniel Walker @ 2010-01-25 18:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> Hi Daniel,
> 
> On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > kernel hooks to handle domain changes, permission faults, and context
> > switches.
> 
> In case you were not aware, there's a patch around (I think since 2007)
> that removes the domain switching entirely from the kernel (given that
> they have been deprecated for some time and may disappear completely in
> the future):
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> 
> It doesn't require handling domain faults and works on SMP as well.
> Another advantage is that we can re-implement functions like
> copy_from_user etc. only using LDR/LDM rather than LDRT with some
> performance improvements.

Not in mainline yet?

> Of course, being my patch, I'm pushing for it :-) but I think longer
> term is a better approach that the domains emulation. Any thoughts on
> this?

This is just to fix a hardware problem on MSM (can't enable the domain
manager), so it's not super interesting to me as a feature per se ..

If there are benefits that you're creating by disabling the domain
manager, then that seems like something we would want over this patch,
since all we're trying to do is fix defective hardware (not enhance the
kernel) ..

I'll try discussing your patch with the author, and see if it satisfies
our needs (maybe Dave E. on the CC list might want to jump in here.)

Daniel


* [RFC 05/18] arm: msm: implement ioremap_strongly_ordered
  2010-01-11 23:37   ` Russell King - ARM Linux
@ 2010-01-28 23:04     ` Larry Bassel
  2010-02-03 14:59       ` Russell King - ARM Linux
  0 siblings, 1 reply; 68+ messages in thread
From: Larry Bassel @ 2010-01-28 23:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 11, 2010 at 03:37:20PM -0800, Russell King - ARM Linux wrote:
> On Mon, Jan 11, 2010 at 02:47:24PM -0800, Daniel Walker wrote:
> > From: Larry Bassel <lbassel@quicinc.com>
> > 
> > Both the clean and invalidate functionality needed
> > for the video encoder and 7x27 barrier code
> > need to have a strongly ordered mapping set up
> > so that one may perform a write to strongly ordered
> > memory. The generic ARM code does not provide this.
> > 
> > The generic ARM code does provide MT_DEVICE, which starts
> > as strongly ordered, but the code later turns the buffered flag
> > on for ARMv6 in order to make the device shared. This is not
> > suitable for my purpose, so this patch adds code for a
> > MT_DEVICE_STRONGLY_ORDERED mapping type.
> 
> This doesn't really describe what "my purpose" is; the patch description
> is too vague to ascertain why this is required.

Hopefully this is a better description of the patch:

Some Qualcomm SOCs (such as the MSM7x27) require a write to
strongly ordered memory in order to fully flush the AXI bus.
Although the generic ARM code provides MT_DEVICE, which starts
as strongly ordered, it later turns on the buffered flag to
make this memory shared.

Add an additional mapping type, MT_DEVICE_STRONGLY_ORDERED, which
will stay in strongly ordered mode and allow proper
implementation of cache clean and invalidate operations on these
devices.

Larry


* [RFC 05/18] arm: msm: implement ioremap_strongly_ordered
  2010-01-28 23:04     ` Larry Bassel
@ 2010-02-03 14:59       ` Russell King - ARM Linux
  0 siblings, 0 replies; 68+ messages in thread
From: Russell King - ARM Linux @ 2010-02-03 14:59 UTC (permalink / raw)
  To: linux-arm-kernel

Larry,

Something in your email system is broken:

Cc: Daniel Walker <dwalker@codeaurora.org>,
        "linux-arm-kernel at lists.infradead.org"@qualcomm.com,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        linux-arm-msm at vger.kernel.org

No idea why it's trying to turn the mailing list address into a qualcomm
address.


* [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults
  2010-01-19 17:10     ` Jamie Lokier
  2010-01-19 17:33       ` Daniel Walker
@ 2010-02-04  0:09       ` David Brown
  1 sibling, 0 replies; 68+ messages in thread
From: David Brown @ 2010-02-04  0:09 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jan 19, 2010 at 09:10:33AM -0800, Jamie Lokier wrote:

> --- /dev/null
> +++ b/Documentation/arm/msm/emulate_domain_manager.txt
> @@ -0,0 +1,282 @@
> +[... longish license header, like new BSD but different...

All of the license blurbs that made it under the documentation
directory were put there erroneously, and will be removed
entirely in future patches.  That license wasn't intended for
kernel documentation.

David


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-01-25 16:40   ` Catalin Marinas
  2010-01-25 17:04     ` Nicolas Pitre
  2010-01-25 18:25     ` Daniel Walker
@ 2010-03-22 18:11     ` Daniel Walker
  2010-03-22 18:58       ` Nicolas Pitre
  2 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-03-22 18:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> Hi Daniel,
> 
> On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > kernel hooks to handle domain changes, permission faults, and context
> > switches.
> 
> In case you were not aware, there's a patch around (I think since 2007)
> that removes the domain switching entirely from the kernel (given that
> they have been deprecated for some time and may disappear completely in
> the future):
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> 

I didn't find this in 2.6.34? Are you rewriting it? Why has it been
out of mainline for so long?

Daniel


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-03-22 18:11     ` Daniel Walker
@ 2010-03-22 18:58       ` Nicolas Pitre
  2010-03-22 20:01         ` Daniel Walker
  0 siblings, 1 reply; 68+ messages in thread
From: Nicolas Pitre @ 2010-03-22 18:58 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 22 Mar 2010, Daniel Walker wrote:

> On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> > Hi Daniel,
> > 
> > On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > > kernel hooks to handle domain changes, permission faults, and context
> > > switches.
> > 
> > In case you were not aware, there's a patch around (I think since 2007)
> > that removes the domain switching entirely from the kernel (given that
> > they have been deprecated for some time and may disappear completely in
> > the future):
> > 
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> > 
> 
> I didn't find this in 2.6.34 ? Are you re-writing it? Why has it been
> around so long out of mainline?

Because no real hardware out there needed it until recently, i.e. 
inertia.


Nicolas


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-03-22 18:58       ` Nicolas Pitre
@ 2010-03-22 20:01         ` Daniel Walker
  2010-03-22 20:32           ` Nicolas Pitre
  0 siblings, 1 reply; 68+ messages in thread
From: Daniel Walker @ 2010-03-22 20:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-03-22 at 14:58 -0400, Nicolas Pitre wrote:
> On Mon, 22 Mar 2010, Daniel Walker wrote:
> 
> > On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> > > Hi Daniel,
> > > 
> > > On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > > > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > > > kernel hooks to handle domain changes, permission faults, and context
> > > > switches.
> > > 
> > > In case you were not aware, there's a patch around (I think since 2007)
> > > that removes the domain switching entirely from the kernel (given that
> > > they have been deprecated for some time and may disappear completely in
> > > the future):
> > > 
> > > http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> > > 
> > 
> > I didn't find this in 2.6.34 ? Are you re-writing it? Why has it been
> > around so long out of mainline?
> 
> Because no real hardware out there needed it until recently, i.e. 
> inertia.

Current MSM hardware needs it, just to cover up a defect in the
processor. At least that's what this thread was originally about.

Daniel


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-03-22 20:01         ` Daniel Walker
@ 2010-03-22 20:32           ` Nicolas Pitre
  2010-03-23 10:04             ` Catalin Marinas
  0 siblings, 1 reply; 68+ messages in thread
From: Nicolas Pitre @ 2010-03-22 20:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 22 Mar 2010, Daniel Walker wrote:

> On Mon, 2010-03-22 at 14:58 -0400, Nicolas Pitre wrote:
> > On Mon, 22 Mar 2010, Daniel Walker wrote:
> > 
> > > On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> > > > Hi Daniel,
> > > > 
> > > > On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > > > > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > > > > kernel hooks to handle domain changes, permission faults, and context
> > > > > switches.
> > > > 
> > > > In case you were not aware, there's a patch around (I think since 2007)
> > > > that removes the domain switching entirely from the kernel (given that
> > > > they have been deprecated for some time and may disappear completely in
> > > > the future):
> > > > 
> > > > http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> > > > 
> > > 
> > > I didn't find this in 2.6.34 ? Are you re-writing it? Why has it been
> > > around so long out of mainline?
> > 
> > Because no real hardware out there needed it until recently, i.e. 
> > inertia.
> 
> Current MSM hardware needs it, just to cover up a defect in the
> processor. At least that's what this thread was originally about.

Some other processors will need it too, for architecturally
defined reasons. It would be a good thing if you could remove your
emulation, replace it with this patch, and confirm it works for you
by providing Tested-by tags.


Nicolas
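
[Editor's note: for reference, a Tested-by tag is a one-line trailer sent in a
reply, which the maintainer folds into the commit message; the name and
address below are hypothetical:]

```
Tested-by: Dave Estes <example@quicinc.com>
```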


* [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature
  2010-03-22 20:32           ` Nicolas Pitre
@ 2010-03-23 10:04             ` Catalin Marinas
  0 siblings, 0 replies; 68+ messages in thread
From: Catalin Marinas @ 2010-03-23 10:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2010-03-22 at 20:32 +0000, Nicolas Pitre wrote:
> On Mon, 22 Mar 2010, Daniel Walker wrote:
> 
> > On Mon, 2010-03-22 at 14:58 -0400, Nicolas Pitre wrote:
> > > On Mon, 22 Mar 2010, Daniel Walker wrote:
> > >
> > > > On Mon, 2010-01-25 at 16:40 +0000, Catalin Marinas wrote:
> > > > > Hi Daniel,
> > > > >
> > > > > On Mon, 2010-01-11 at 22:47 +0000, Daniel Walker wrote:
> > > > > > Do not set domain manager bits in cp15 dacr.  Emulate using SW.  Add
> > > > > > kernel hooks to handle domain changes, permission faults, and context
> > > > > > switches.
> > > > >
> > > > > In case you were not aware, there's a patch around (I think since 2007)
> > > > > that removes the domain switching entirely from the kernel (given that
> > > > > they have been deprecated for some time and may disappear completely in
> > > > > the future):
> > > > >
> > > > > http://lists.infradead.org/pipermail/linux-arm-kernel/2009-December/005616.html
> > > > >
> > > >
> > > > I didn't find this in 2.6.34 ? Are you re-writing it? Why has it been
> > > > around so long out of mainline?
> > >
> > > Because no real hardware out there needed it until recently, i.e.
> > > inertia.
> >
> > Current MSM hardware needs it, just to cover up a defect in the
> > processor. At least that's what this thread was originally about.
> 
> Some other processors should need it too for some architecturally
> defined reasons. Would be a good thing if you could remove your
> emulation and replace it with this patch, and confirm it works for you
> by providing Tested-by tags.

I posted an up-to-date patch yesterday, or you can grab it from my devel
branch:
http://www.linux-arm.org/git?p=linux-2.6.git;a=shortlog;h=refs/heads/devel

(The master branch on my tree is the same, only it's merge-friendly
rather than rebased. The git server got a bit slower, I think, because
git gc stopped a few months ago.)

-- 
Catalin


end of thread, other threads:[~2010-03-23 10:04 UTC | newest]

Thread overview: 68+ messages
2010-01-11 22:47 [RFC 00/18] generic arm needed for msm Daniel Walker
2010-01-11 22:47 ` [RFC 01/18] arm: msm: allow ARCH_MSM to have v7 cpus Daniel Walker
2010-01-11 22:47 ` [RFC 02/18] arm: msm: add oprofile pmu support Daniel Walker
2010-01-11 22:47 ` [RFC 03/18] arm: boot: remove old ARM ID for QSD Daniel Walker
2010-01-15 21:26   ` Russell King - ARM Linux
2010-01-11 22:47 ` [RFC 04/18] arm: cache-l2x0: add l2x0 suspend and resume functions Daniel Walker
2010-01-11 23:44   ` Russell King - ARM Linux
2010-01-12  0:52     ` Ruan, Willie
2010-01-11 22:47 ` [RFC 05/18] arm: msm: implement ioremap_strongly_ordered Daniel Walker
2010-01-11 23:37   ` Russell King - ARM Linux
2010-01-28 23:04     ` Larry Bassel
2010-02-03 14:59       ` Russell King - ARM Linux
2010-01-11 22:47 ` [RFC 06/18] arm: msm: implement proper dmb() for 7x27 Daniel Walker
2010-01-11 23:39   ` Russell King - ARM Linux
2010-01-11 23:45     ` Daniel Walker
2010-01-12  0:01       ` Russell King - ARM Linux
2010-01-19 17:28         ` Jamie Lokier
2010-01-19 18:04           ` Russell King - ARM Linux
2010-01-19 21:12             ` Jamie Lokier
2010-01-19 23:11               ` Russell King - ARM Linux
2010-01-19 17:16   ` Jamie Lokier
2010-01-11 22:47 ` [RFC 07/18] arm: mm: retry on QSD icache parity errors Daniel Walker
2010-01-18 18:42   ` Ashwin Chaugule
2010-01-19 16:16     ` Ashwin Chaugule
2010-01-11 22:47 ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Daniel Walker
2010-01-11 23:45   ` Russell King - ARM Linux
2010-01-12 10:51     ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
2010-01-12 11:23       ` Shilimkar, Santosh
2010-01-12 11:44       ` Russell King - ARM Linux
2010-01-12 13:32         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
2010-01-12 13:58           ` Russell King - ARM Linux
2010-01-12 14:41             ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Catalin Marinas
2010-01-12 18:23               ` Daniel Walker
2010-01-13 10:36                 ` Catalin Marinas
2010-01-19 17:38                   ` Jamie Lokier
2010-01-13  6:14           ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Shilimkar, Santosh
2010-01-12 20:21         ` [RFC 08/18] arm: msm: set L2CR1 to enable prefetch and burst on Scorpion Nicolas Pitre
2010-01-11 22:47 ` [RFC 09/18] arm: mm: support error reporting in L1/L2 caches on QSD Daniel Walker
2010-01-11 22:47 ` [RFC 10/18] arm: mm: enable L2X0 to use L2 cache on MSM7X27 Daniel Walker
2010-01-11 22:47 ` [RFC 11/18] arm: msm: add ARCH_MSM_SCORPION to CPU_V7 Daniel Walker
2010-01-11 23:13   ` Russell King - ARM Linux
2010-01-11 23:17     ` Daniel Walker
2010-01-11 22:47 ` [RFC 12/18] arm: msm: Enable frequency scaling Daniel Walker
2010-01-11 22:47 ` [RFC 13/18] arm: msm: define HAVE_CLK for ARCH_MSM Daniel Walker
2010-01-11 22:47 ` [RFC 14/18] arm: msm: add v7 support for compiler version-4.1.1 Daniel Walker
2010-01-11 23:07   ` Russell King - ARM Linux
2010-01-11 22:47 ` [RFC 15/18] arm: vfp: Add additional vfp interfaces Daniel Walker
2010-01-11 22:47 ` [RFC 16/18] arm: msm: add arch_has_speculative_dfetch() Daniel Walker
2010-01-11 23:33   ` Russell King - ARM Linux
2010-01-12  0:28     ` Daniel Walker
2010-01-12  8:59       ` Russell King - ARM Linux
2010-01-11 22:47 ` [RFC 17/18] arm: mm: Add SW emulation for ARM domain manager feature Daniel Walker
2010-01-25 16:40   ` Catalin Marinas
2010-01-25 17:04     ` Nicolas Pitre
2010-01-25 18:25     ` Daniel Walker
2010-03-22 18:11     ` Daniel Walker
2010-03-22 18:58       ` Nicolas Pitre
2010-03-22 20:01         ` Daniel Walker
2010-03-22 20:32           ` Nicolas Pitre
2010-03-23 10:04             ` Catalin Marinas
2010-01-11 22:47 ` [RFC 18/18] arm: mm: qsd8x50: Fix incorrect permission faults Daniel Walker
2010-01-11 23:11   ` Russell King - ARM Linux
2010-01-19 17:10     ` Jamie Lokier
2010-01-19 17:33       ` Daniel Walker
2010-01-19 17:43         ` Jamie Lokier
2010-01-19 17:49           ` Daniel Walker
2010-01-19 18:09           ` Russell King - ARM Linux
2010-02-04  0:09       ` David Brown
