* [PATCH v2 0/5] OMAP3: PM: Fixes for low power code
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel
The series makes the below fixes to the OMAP3 low power code.
1. Use supported ARMv7 instructions instead of the legacy ones
2. Fix the MMU-on sequence
3. Fix the cache flush scenario when only L1 is lost
4. Remove all unnecessary context save registers
5. Disable the C bit before the cache clean
V2:
- Rebased the series on top of the pm-core branch where Dave Martin's
  Thumb2 patches are merged.
- Dropped the set_cr() patch since it is already merged in the pm-core branch
- Fixed the hang issue reported by Kevin Hilman with OMAP3630
The series is generated against the latest pm-core branch and tested with
suspend and cpuidle on the OMAP3630 ZOOM.
The following changes since commit 7d6d079fbd46aee85e9b5de1d67de15b85e50b04:
Kevin Hilman (1):
Merge branch 'for_2.6.39/pm-voltage' into pm-reset
are available in the git repository at:
git://dev.omapzoom.org/pub/scm/santosh/kernel-omap4-base.git
pm-core-omap3-asm_v2
Santosh Shilimkar (5):
OMAP3: PM: Use ARMv7 supported instructions instead of legacy CP15
ones
OMAP3: PM: Fix the MMU on sequence in the asm code
OMAP3: PM: Allow the cache clean when L1 is lost.
OMAP3: PM: Remove unnecessary cp15 registers from low power cpu
context
OMAP3: PM: Clear the SCTLR C bit in asm code to prevent data cache
allocation
arch/arm/mach-omap2/sleep34xx.S | 224 +++++++++++++++------------------------
1 files changed, 85 insertions(+), 139 deletions(-)
* [PATCH v2 1/5] OMAP3: PM: Use ARMv7 supported instructions instead of legacy CP15 ones
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel

On ARMv7 the dsb and dmb instructions are supported and can be used
directly instead of their CP15 equivalents. Also remove the hand-coded
opcodes for smc and use the instruction directly in the OMAP3 low power
asm code.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
---
 arch/arm/mach-omap2/sleep34xx.S | 21 ++++++++++-----------
 1 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index 5403bc4..8894c08 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -145,8 +145,8 @@ ENTRY(save_secure_ram_context)
         mov     r1, #0                  @ set task id for ROM code in r1
         mov     r2, #4                  @ set some flags in r2, r6
         mov     r6, #0xff
-        mcr     p15, 0, r0, c7, c10, 4  @ data write barrier
-        mcr     p15, 0, r0, c7, c10, 5  @ data memory barrier
+        dsb                             @ data write barrier
+        dmb                             @ data memory barrier
         smc     #1                      @ call SMI monitor (smi #1)
         nop
         nop
@@ -316,9 +316,8 @@ omap3_do_wfi:
         str     r5, [r4]                @ write back to SDRC_POWER register

         /* Data memory barrier and Data sync barrier */
-        mov     r1, #0
-        mcr     p15, 0, r1, c7, c10, 4
-        mcr     p15, 0, r1, c7, c10, 5
+        dsb
+        dmb

 /*
  * ===================================
@@ -433,8 +432,8 @@ skipl2dis:
         mov     r2, #4                  @ set some flags in r2, r6
         mov     r6, #0xff
         adr     r3, l2_inv_api_params   @ r3 points to dummy parameters
-        mcr     p15, 0, r0, c7, c10, 4  @ data write barrier
-        mcr     p15, 0, r0, c7, c10, 5  @ data memory barrier
+        dsb                             @ data write barrier
+        dmb                             @ data memory barrier
         smc     #1                      @ call SMI monitor (smi #1)
         /* Write to Aux control register to set some bits */
         mov     r0, #42                 @ set service ID for PPA
@@ -444,8 +443,8 @@ skipl2dis:
         mov     r6, #0xff
         ldr     r4, scratchpad_base
         ldr     r3, [r4, #0xBC]         @ r3 points to parameters
-        mcr     p15, 0, r0, c7, c10, 4  @ data write barrier
-        mcr     p15, 0, r0, c7, c10, 5  @ data memory barrier
+        dsb                             @ data write barrier
+        dmb                             @ data memory barrier
         smc     #1                      @ call SMI monitor (smi #1)

 #ifdef CONFIG_OMAP3_L2_AUX_SECURE_SAVE_RESTORE
@@ -459,8 +458,8 @@ skipl2dis:
         ldr     r4, scratchpad_base
         ldr     r3, [r4, #0xBC]
         adds    r3, r3, #8              @ r3 points to parameters
-        mcr     p15, 0, r0, c7, c10, 4  @ data write barrier
-        mcr     p15, 0, r0, c7, c10, 5  @ data memory barrier
+        dsb                             @ data write barrier
+        dmb                             @ data memory barrier
         smc     #1                      @ call SMI monitor (smi #1)
 #endif
         b       logic_l1_restore
--
1.6.0.4
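For reference, the sketch below contrasts the legacy CP15 barrier encodings with the dedicated ARMv7 instructions the patch switches to. It is an illustrative, standalone fragment (the label name is made up), not part of the patch itself.

        .arm
        .text
barrier_demo:
        @ Pre-ARMv7 style: barriers are issued by writing to the CP15
        @ cache-operations register; the value in the scratch register
        @ is ignored.
        mov     r0, #0
        mcr     p15, 0, r0, c7, c10, 4  @ CP15 encoding of DSB
        mcr     p15, 0, r0, c7, c10, 5  @ CP15 encoding of DMB

        @ ARMv7: dedicated instructions, no scratch register needed,
        @ and they assemble in both ARM and Thumb-2 state.
        dsb
        dmb
        bx      lr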
* [PATCH v2 2/5] OMAP3: PM: Fix the MMU on sequence in the asm code
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel

Add the necessary barriers after enabling the MMU. Also load the exit
point address into a register and branch to it, instead of falling
straight through to the ldm.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
---
 arch/arm/mach-omap2/sleep34xx.S | 5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index 8894c08..e58ec7d 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -619,12 +619,17 @@ usettbr0:
         ldr     r2, cache_pred_disable_mask
         and     r4, r2
         mcr     p15, 0, r4, c1, c0, 0
+        dsb
+        isb
+        ldr     r0, =restoremmu_on
+        bx      r0

 /*
  * ==============================
  * == Exit point from OFF mode ==
  * ==============================
  */
+restoremmu_on:
         ldmfd   sp!, {r0-r12, pc}       @ restore regs and return
--
1.6.0.4
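As background to why the added barriers and the explicit branch matter, here is a minimal sketch (labels and registers are illustrative, not taken from sleep34xx.S) of the usual MMU-enable pattern: the dsb/isb pair makes the SCTLR write take effect and flushes the pipeline, and branching through a register guarantees execution resumes at the intended address rather than relying on instructions prefetched before the MMU came on.

        .arm
        .text
mmu_on_sketch:
        mrc     p15, 0, r0, c1, c0, 0   @ read SCTLR
        orr     r0, r0, #0x1            @ set the M bit (MMU enable)
        mcr     p15, 0, r0, c1, c0, 0   @ write SCTLR
        dsb                             @ ensure the SCTLR write completes
        isb                             @ flush the pipeline
        ldr     r1, =mmu_on_done        @ load the continuation address
        bx      r1                      @ ...and branch to it explicitly
mmu_on_done:
        bx      lr
        .ltorg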
* [PATCH v2 3/5] OMAP3: PM: Allow the cache clean when L1 is lost.
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel

When the L1 cache is going to be lost, it needs to be cleaned before
entering the low power mode. While at it, also fix a few comments and
remove the unnecessary clean_l2 label.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
---
 arch/arm/mach-omap2/sleep34xx.S | 15 +++------------
 1 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index e58ec7d..7f13336 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -190,12 +190,12 @@ ENTRY(omap34xx_cpu_suspend)
         stmfd   sp!, {r0-r12, lr}       @ save registers on stack

         /*
-         * r0 contains restore pointer in sdram
+         * r0 contains CPU context save/restore pointer in sdram
          * r1 contains information about saving context:
          *   0 - No context lost
          *   1 - Only L1 and logic lost
-         *   2 - Only L2 lost
-         *   3 - Both L1 and L2 lost
+         *   2 - Only L2 lost (Even L1 is retained we clean it along with L2)
+         *   3 - Both L1 and L2 lost and logic lost
          */

         /* Directly jump to WFI is the context save is not required */
@@ -280,15 +280,6 @@ l1_logic_lost:

 clean_caches:
         /*
-         * Clean Data or unified cache to POU
-         * How to invalidate only L1 cache???? - #FIX_ME#
-         * mcr p15, 0, r11, c7, c11, 1
-         */
-        cmp     r1, #0x1                @ Check whether L2 inval is required
-        beq     omap3_do_wfi
-
-clean_l2:
-        /*
          * jump out to kernel flush routine
          *  - reuse that code is better
          *  - it executes in a cached space so is faster than refetch per-block
--
1.6.0.4
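A condensed sketch of the behaviour the fix establishes, assuming r1 carries the context-loss level described in the comment above (0: nothing lost, 1: L1 and logic, 2: L2, 3: both): only the "nothing lost" case may skip cache maintenance; every other level, including the L1-only case, must reach the clean before WFI. The labels and the flush stub are illustrative stand-ins, not the actual sleep34xx.S symbols.

        .arm
        .text
suspend_sketch:
        stmfd   sp!, {lr}
        cmp     r1, #0                  @ no context will be lost?
        beq     wfi_sketch              @ ...then skip the cache clean
        bl      flush_dcache_stub       @ clean caches for ANY non-zero level
wfi_sketch:
        wfi                             @ enter the low power state
        ldmfd   sp!, {pc}

flush_dcache_stub:                      @ stand-in for the kernel flush routine
        bx      lr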
* [PATCH v2 4/5] OMAP3: PM: Remove unnecessary cp15 registers from low power cpu context
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel

The current code saves a few unnecessary registers: read-only,
write-only, or otherwise unused CP15 registers. Remove them and keep
only the necessary CP15 registers as part of the low power context
save/restore.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
---
 arch/arm/mach-omap2/sleep34xx.S | 156 ++++++++++-----------------------------
 1 files changed, 40 insertions(+), 116 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index 7f13336..d29476e 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -216,66 +216,29 @@ save_context_wfi:
         beq     clean_caches

 l1_logic_lost:
-        /* Store sp and spsr to SDRAM */
-        mov     r4, sp
-        mrs     r5, spsr
-        mov     r6, lr
+        mov     r4, sp                  @ Store sp
+        mrs     r5, spsr                @ Store spsr
+        mov     r6, lr                  @ Store lr
         stmia   r8!, {r4-r6}
-        /* Save all ARM registers */
-        /* Coprocessor access control register */
-        mrc     p15, 0, r6, c1, c0, 2
-        stmia   r8!, {r6}
-        /* TTBR0, TTBR1 and Translation table base control */
-        mrc     p15, 0, r4, c2, c0, 0
-        mrc     p15, 0, r5, c2, c0, 1
-        mrc     p15, 0, r6, c2, c0, 2
-        stmia   r8!, {r4-r6}
-        /*
-         * Domain access control register, data fault status register,
-         * and instruction fault status register
-         */
-        mrc     p15, 0, r4, c3, c0, 0
-        mrc     p15, 0, r5, c5, c0, 0
-        mrc     p15, 0, r6, c5, c0, 1
-        stmia   r8!, {r4-r6}
-        /*
-         * Data aux fault status register, instruction aux fault status,
-         * data fault address register and instruction fault address register
-         */
-        mrc     p15, 0, r4, c5, c1, 0
-        mrc     p15, 0, r5, c5, c1, 1
-        mrc     p15, 0, r6, c6, c0, 0
-        mrc     p15, 0, r7, c6, c0, 2
-        stmia   r8!, {r4-r7}
-        /*
-         * user r/w thread and process ID, user r/o thread and process ID,
-         * priv only thread and process ID, cache size selection
-         */
-        mrc     p15, 0, r4, c13, c0, 2
-        mrc     p15, 0, r5, c13, c0, 3
-        mrc     p15, 0, r6, c13, c0, 4
-        mrc     p15, 2, r7, c0, c0, 0
+
+        mrc     p15, 0, r4, c1, c0, 2   @ Coprocessor access control register
+        mrc     p15, 0, r5, c2, c0, 0   @ TTBR0
+        mrc     p15, 0, r6, c2, c0, 1   @ TTBR1
+        mrc     p15, 0, r7, c2, c0, 2   @ TTBCR
         stmia   r8!, {r4-r7}
-        /* Data TLB lockdown, instruction TLB lockdown registers */
-        mrc     p15, 0, r5, c10, c0, 0
-        mrc     p15, 0, r6, c10, c0, 1
-        stmia   r8!, {r5-r6}
-        /* Secure or non secure vector base address, FCSE PID, Context PID*/
-        mrc     p15, 0, r4, c12, c0, 0
-        mrc     p15, 0, r5, c13, c0, 0
-        mrc     p15, 0, r6, c13, c0, 1
-        stmia   r8!, {r4-r6}
-        /* Primary remap, normal remap registers */
-        mrc     p15, 0, r4, c10, c2, 0
-        mrc     p15, 0, r5, c10, c2, 1
-        stmia   r8!,{r4-r5}
-        /* Store current cpsr*/
-        mrs     r2, cpsr
-        stmia   r8!, {r2}
+        mrc     p15, 0, r4, c3, c0, 0   @ Domain access Control Register
+        mrc     p15, 0, r5, c10, c2, 0  @ PRRR
+        mrc     p15, 0, r6, c10, c2, 1  @ NMRR
+        stmia   r8!,{r4-r6}
+
+        mrc     p15, 0, r4, c13, c0, 1  @ Context ID
+        mrc     p15, 0, r5, c13, c0, 2  @ User r/w thread and process ID
+        mrc     p15, 0, r6, c12, c0, 0  @ Secure or NS vector base address
+        mrs     r7, cpsr                @ Store current cpsr
+        stmia   r8!, {r4-r7}

-        mrc     p15, 0, r4, c1, c0, 0
-        /* save control register */
+        mrc     p15, 0, r4, c1, c0, 0   @ save control register
         stmia   r8!, {r4}

 clean_caches:
@@ -491,68 +454,29 @@ skipl2reen:
         ldr     r4, scratchpad_base
         ldr     r3, [r4,#0xBC]
         adds    r3, r3, #16
+        ldmia   r3!, {r4-r6}
-        mov     sp, r4
-        msr     spsr_cxsf, r5
-        mov     lr, r6
-
-        ldmia   r3!, {r4-r9}
-        /* Coprocessor access Control Register */
-        mcr     p15, 0, r4, c1, c0, 2
-
-        /* TTBR0 */
-        MCR     p15, 0, r5, c2, c0, 0
-        /* TTBR1 */
-        MCR     p15, 0, r6, c2, c0, 1
-        /* Translation table base control register */
-        MCR     p15, 0, r7, c2, c0, 2
-        /* Domain access Control Register */
-        MCR     p15, 0, r8, c3, c0, 0
-        /* Data fault status Register */
-        MCR     p15, 0, r9, c5, c0, 0
-
-        ldmia   r3!,{r4-r8}
-        /* Instruction fault status Register */
-        MCR     p15, 0, r4, c5, c0, 1
-        /* Data Auxiliary Fault Status Register */
-        MCR     p15, 0, r5, c5, c1, 0
-        /* Instruction Auxiliary Fault Status Register*/
-        MCR     p15, 0, r6, c5, c1, 1
-        /* Data Fault Address Register */
-        MCR     p15, 0, r7, c6, c0, 0
-        /* Instruction Fault Address Register*/
-        MCR     p15, 0, r8, c6, c0, 2
-        ldmia   r3!,{r4-r7}
+        mov     sp, r4                  @ Restore sp
+        msr     spsr_cxsf, r5           @ Restore spsr
+        mov     lr, r6                  @ Restore lr
-        /* User r/w thread and process ID */
-        MCR     p15, 0, r4, c13, c0, 2
-        /* User ro thread and process ID */
-        MCR     p15, 0, r5, c13, c0, 3
-        /* Privileged only thread and process ID */
-        MCR     p15, 0, r6, c13, c0, 4
-        /* Cache size selection */
-        MCR     p15, 2, r7, c0, c0, 0
-        ldmia   r3!,{r4-r8}
-        /* Data TLB lockdown registers */
-        MCR     p15, 0, r4, c10, c0, 0
-        /* Instruction TLB lockdown registers */
-        MCR     p15, 0, r5, c10, c0, 1
-        /* Secure or Nonsecure Vector Base Address */
-        MCR     p15, 0, r6, c12, c0, 0
-        /* FCSE PID */
-        MCR     p15, 0, r7, c13, c0, 0
-        /* Context PID */
-        MCR     p15, 0, r8, c13, c0, 1
-
-        ldmia   r3!,{r4-r5}
-        /* Primary memory remap register */
-        MCR     p15, 0, r4, c10, c2, 0
-        /* Normal memory remap register */
-        MCR     p15, 0, r5, c10, c2, 1
-
-        /* Restore cpsr */
-        ldmia   r3!,{r4}                @ load CPSR from SDRAM
-        msr     cpsr, r4                @ store cpsr
+        ldmia   r3!, {r4-r7}
+        mcr     p15, 0, r4, c1, c0, 2   @ Coprocessor access Control Register
+        mcr     p15, 0, r5, c2, c0, 0   @ TTBR0
+        mcr     p15, 0, r6, c2, c0, 1   @ TTBR1
+        mcr     p15, 0, r7, c2, c0, 2   @ TTBCR
+
+        ldmia   r3!,{r4-r6}
+        mcr     p15, 0, r4, c3, c0, 0   @ Domain access Control Register
+        mcr     p15, 0, r5, c10, c2, 0  @ PRRR
+        mcr     p15, 0, r6, c10, c2, 1  @ NMRR
+
+
+        ldmia   r3!,{r4-r7}
+        mcr     p15, 0, r4, c13, c0, 1  @ Context ID
+        mcr     p15, 0, r5, c13, c0, 2  @ User r/w thread and process ID
+        mrc     p15, 0, r6, c12, c0, 0  @ Secure or NS vector base address
+        msr     cpsr, r7                @ store cpsr

         /* Enabling MMU here */
         mrc     p15, 0, r7, c2, c0, 2   @ Read TTBRControl
--
1.6.0.4
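One constraint worth spelling out, since the save and restore paths are edited together above: the layout is purely positional, so the ldmia sequence on resume has to read back exactly the words, in exactly the order, that the stmia sequence wrote on suspend. A hypothetical minimal pair showing that contract (not the actual sleep34xx.S layout):

        .arm
        .text
@ r8 = save area pointer (suspend), r3 = save area pointer (resume)
ctx_save_sketch:
        mrc     p15, 0, r4, c2, c0, 0   @ TTBR0
        mrc     p15, 0, r5, c2, c0, 1   @ TTBR1
        mrc     p15, 0, r6, c2, c0, 2   @ TTBCR
        stmia   r8!, {r4-r6}            @ words 0..2 of this record
        bx      lr

ctx_restore_sketch:
        ldmia   r3!, {r4-r6}            @ must mirror the stmia above exactly
        mcr     p15, 0, r4, c2, c0, 0   @ TTBR0
        mcr     p15, 0, r5, c2, c0, 1   @ TTBR1
        mcr     p15, 0, r6, c2, c0, 2   @ TTBCR
        bx      lr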
* [PATCH v2 5/5] OMAP3: PM: Clear the SCTLR C bit in asm code to prevent data cache allocation
From: Santosh Shilimkar @ 2011-03-10 7:07 UTC
To: linux-arm-kernel

On newer ARM processors like the Cortex-A8 and Cortex-A9, cache lines
can be speculatively allocated while the caches are being flushed.
Clear the SCTLR C bit to prevent further data cache allocation as part
of the cache clean routine.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
---
 arch/arm/mach-omap2/sleep34xx.S | 27 +++++++++++++++++++++++++++
 1 files changed, 27 insertions(+), 0 deletions(-)

diff --git a/arch/arm/mach-omap2/sleep34xx.S b/arch/arm/mach-omap2/sleep34xx.S
index d29476e..63f1066 100644
--- a/arch/arm/mach-omap2/sleep34xx.S
+++ b/arch/arm/mach-omap2/sleep34xx.S
@@ -248,6 +248,27 @@ clean_caches:
          *  - it executes in a cached space so is faster than refetch per-block
          *  - should be faster and will change with kernel
          *  - 'might' have to copy address, load and jump to it
+         * Flush all data from the L1 data cache before disabling
+         * SCTLR.C bit.
+         */
+        ldr     r1, kernel_flush
+        mov     lr, pc
+        bx      r1
+
+        /*
+         * Clear the SCTLR.C bit to prevent further data cache
+         * allocation. Clearing SCTLR.C would make all the data accesses
+         * strongly ordered and would not hit the cache.
+         */
+        mrc     p15, 0, r0, c1, c0, 0
+        bic     r0, r0, #(1 << 2)       @ Disable the C bit
+        mcr     p15, 0, r0, c1, c0, 0
+        isb
+
+        /*
+         * Invalidate L1 data cache. Even though only invalidate is
+         * necessary exported flush API is used here. Doing clean
+         * on already clean cache would be almost NOP.
          */
         ldr     r1, kernel_flush
         blx     r1
@@ -297,6 +318,12 @@ omap3_do_wfi:
         nop

         bl      wait_sdrc_ok
+
+        mrc     p15, 0, r0, c1, c0, 0
+        tst     r0, #(1 << 2)           @ Check C bit enabled?
+        orreq   r0, r0, #(1 << 2)       @ Enable the C bit if cleared
+        mcreq   p15, 0, r0, c1, c0, 0
+        isb

 /*
  * ===================================
 * == Exit point from non-OFF modes ==
--
1.6.0.4
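The ordering in this patch can be summarised in a few lines. The sketch below (illustrative labels, a stubbed-out flush routine, and no OMAP specifics) shows the sequence: clean the data cache, clear SCTLR.C so speculation can no longer allocate new lines, flush again to drop anything allocated in between, enter WFI, and put the C bit back on the resume path.

        .arm
        .text
sctlr_c_sketch:
        stmfd   sp!, {lr}
        bl      dcache_flush_stub       @ 1) clean/flush the data cache
        mrc     p15, 0, r0, c1, c0, 0
        bic     r0, r0, #(1 << 2)       @ 2) clear SCTLR.C: no further D-cache allocation
        mcr     p15, 0, r0, c1, c0, 0
        isb
        bl      dcache_flush_stub       @ 3) drop lines speculatively allocated meanwhile
        wfi                             @ 4) enter the low power state
        mrc     p15, 0, r0, c1, c0, 0
        orr     r0, r0, #(1 << 2)       @ 5) re-enable the C bit after wakeup
        mcr     p15, 0, r0, c1, c0, 0
        isb
        ldmfd   sp!, {pc}

dcache_flush_stub:                      @ stand-in for the kernel flush routine
        bx      lr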
* [PATCH v2 0/5] OMAP3: PM: Fixes for low power code
From: Kevin Hilman @ 2011-03-10 20:24 UTC
To: linux-arm-kernel

Santosh Shilimkar <santosh.shilimkar@ti.com> writes:

> The series makes the below fixes to the OMAP3 low power code.
> 1. Use supported ARMv7 instructions instead of the legacy ones
> 2. Fix the MMU-on sequence
> 3. Fix the cache flush scenario when only L1 is lost
> 4. Remove all unnecessary context save registers
> 5. Disable the C bit before the cache clean
>
> V2:
> - Rebased the series on top of the pm-core branch where Dave Martin's
>   Thumb2 patches are merged.
> - Dropped the set_cr() patch since it is already merged in the pm-core branch
> - Fixed the hang issue reported by Kevin Hilman with OMAP3630

Thanks for the updates.

I've been testing the previous series along with Paul's integration
branch and things look good.

Will re-test with this updated version and queue for 2.6.39
(branch: for_2.6.39/pm-misc)

Thanks,

Kevin

> The series is generated against the latest pm-core branch and tested with
> suspend and cpuidle on the OMAP3630 ZOOM.
>
> The following changes since commit 7d6d079fbd46aee85e9b5de1d67de15b85e50b04:
>   Kevin Hilman (1):
>     Merge branch 'for_2.6.39/pm-voltage' into pm-reset
>
> are available in the git repository at:
>
>   git://dev.omapzoom.org/pub/scm/santosh/kernel-omap4-base.git
>   pm-core-omap3-asm_v2
>
> Santosh Shilimkar (5):
>   OMAP3: PM: Use ARMv7 supported instructions instead of legacy CP15
>     ones
>   OMAP3: PM: Fix the MMU on sequence in the asm code
>   OMAP3: PM: Allow the cache clean when L1 is lost.
>   OMAP3: PM: Remove unnecessary cp15 registers from low power cpu
>     context
>   OMAP3: PM: Clear the SCTLR C bit in asm code to prevent data cache
>     allocation
>
>  arch/arm/mach-omap2/sleep34xx.S | 224 +++++++++++++++------------------------
>  1 files changed, 85 insertions(+), 139 deletions(-)