* [PATCH] ARM: enable IRQs in user undefined instruction vector
@ 2014-02-04 20:19 Kevin Bracey
  2014-02-06 11:20 ` Catalin Marinas
  0 siblings, 1 reply; 8+ messages in thread

From: Kevin Bracey @ 2014-02-04 20:19 UTC (permalink / raw)
To: linux-arm-kernel

If an abort occurs while loading an instruction from user space in
__und_usr, the resulting do_page_fault() can output "sleeping function
called from invalid context" warnings, due to IRQs being disabled in
__und_usr, and hence in do_page_fault().

Avoid the problem by enabling IRQs in __und_usr before attempting to
load the instruction, and modify code and comments in the undefined
instruction handlers to note that IRQs are enabled on entry iff the
instruction was executed in user mode.

See http://comments.gmane.org/gmane.linux.ports.arm.omap/59256 for an
earlier report of the observed might_sleep() warning. The proposed
patch in that thread, which adds a "!irqs_disabled()" test to
do_page_fault(), has already been applied to Android, but that patch
causes an execution failure if another CPU ages the page between the
instruction execution and __und_usr: it prevents do_page_fault() from
attempting to handle the fault, so __und_usr's fixup handler is called
instead, and the fixup handler just continues execution from the next
instruction, silently skipping the original instruction.

This patch modifies the fixup handler to attempt to re-execute the
original instruction, bringing it in line with the SWI fixup handler;
this also avoids the possibility of the instruction being skipped if
do_page_fault() doesn't handle a fault.
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
---
 arch/arm/kernel/entry-armv.S       | 11 ++++++++---
 arch/arm/mach-ep93xx/crunch-bits.S |  7 ++++++-
 arch/arm/vfp/entry.S               |  2 +-
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index b3fb8c9..bed1567 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -399,6 +399,7 @@ ENDPROC(__irq_usr)
 	.align	5
 __und_usr:
 	usr_entry
+	enable_irq
 	mov	r2, r4
 	mov	r3, r5
@@ -478,11 +479,14 @@ __und_usr_thumb:
 ENDPROC(__und_usr)
 
 /*
- * The out of line fixup for the ldrt instructions above.
+ * The out of line fixup for the ldrt instructions above. Called when there
+ * was an unrecoverable fault accessing the instruction. Attempt to re-execute
+ * the instruction, which should trigger the user fault handling path.
  */
 	.pushsection .fixup, "ax"
 	.align	2
-4:	mov	pc, r9
+4:	str	r4, [sp, #S_PC]
+	mov	pc, r9
 	.popsection
 	.pushsection __ex_table,"a"
 	.long	1b, 4b
@@ -515,7 +519,8 @@ ENDPROC(__und_usr)
  * r9  = normal "successful" return address
  * r10 = this threads thread_info structure
  * lr  = unrecognised instruction return address
- * IRQs disabled, FIQs enabled.
+ * IRQs enabled iff the instruction was executed in user mode.
+ * FIQs enabled.
  */
 	@
 	@ Fall-through from Thumb-2 __und_usr
diff --git a/arch/arm/mach-ep93xx/crunch-bits.S b/arch/arm/mach-ep93xx/crunch-bits.S
index 0ec9bb4..413d46c 100644
--- a/arch/arm/mach-ep93xx/crunch-bits.S
+++ b/arch/arm/mach-ep93xx/crunch-bits.S
@@ -62,9 +62,14 @@
  * r9  = ret_from_exception
  * lr  = undefined instr exit
  *
- * called from prefetch exception handler with interrupts disabled
+ * Called from undefined instruction handler.
+ * Interrupts enabled iff instruction executed in user mode.
  */
 ENTRY(crunch_task_enable)
+	mrs	r1, cpsr
+	orr	r2, r1, #PSR_I_BIT		@ disable interrupts
+	msr	cpsr_c, r2
+
 	ldr	r8, =(EP93XX_APB_VIRT_BASE + 0x00130000)	@ syscon addr
 
 	ldr	r1, [r8, #0x80]
diff --git a/arch/arm/vfp/entry.S b/arch/arm/vfp/entry.S
index 46e1749..e0e3a00 100644
--- a/arch/arm/vfp/entry.S
+++ b/arch/arm/vfp/entry.S
@@ -19,7 +19,7 @@
 @ r9  = normal "successful" return address
 @ r10 = this threads thread_info structure
 @ lr  = unrecognised instruction return address
-@ IRQs disabled.
+@ IRQs enabled iff the instruction was executed in user mode.
 @
 ENTRY(do_vfp)
 #ifdef CONFIG_PREEMPT_COUNT
-- 
1.8.4.dirty

^ permalink raw reply related	[flat|nested] 8+ messages in thread
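The behavioural difference between the old and new fixup handlers can be illustrated outside the kernel. Below is a toy Python model (illustrative only, not kernel code): a "program" of three instructions in which the first load of instruction 1 faults. The old fixup resumes at the next instruction, silently skipping the faulted one; the patched fixup stores the faulting address back into the saved PC, so the instruction is retried once the fault handler has resolved the page.

```python
# Toy model of the __und_usr fixup policy. All names are illustrative;
# this is not how the kernel code is structured.

def run(retry_on_fault):
    executed = []
    faults = {1}               # instruction indices whose first load faults
    pc = 0
    while pc < 3:
        if pc in faults:
            faults.discard(pc)         # fault handler pages the code in
            if not retry_on_fault:
                pc += 1                # old fixup: skip the instruction
            continue                   # new fixup: retry the same pc
        executed.append(pc)
        pc += 1
    return executed
```

With the old policy, `run(False)` executes only instructions 0 and 2; with the retry policy, `run(True)` executes all three, which is the behaviour the patch aligns with the SWI fixup.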
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-04 20:19 [PATCH] ARM: enable IRQs in user undefined instruction vector Kevin Bracey
@ 2014-02-06 11:20 ` Catalin Marinas
  2014-02-06 18:10   ` Kevin Bracey
  0 siblings, 1 reply; 8+ messages in thread

From: Catalin Marinas @ 2014-02-06 11:20 UTC (permalink / raw)
To: linux-arm-kernel

On Tue, Feb 04, 2014 at 10:19:06PM +0200, Kevin Bracey wrote:
> If an abort occurs while loading an instruction from user space in
> __und_usr, the resulting do_page_fault() can output "sleeping function
> called from invalid context" warnings, due to IRQs being disabled in
> __und_usr, and hence in do_page_fault().
>
> Avoid the problem by enabling IRQs in __und_usr before attempting to
> load the instruction, and modify code and comments in the undefined
> instruction handlers to note that IRQs are enabled on entry iff the
> instruction was executed in user mode.
>
> See http://comments.gmane.org/gmane.linux.ports.arm.omap/59256 for
> an earlier report of the observed might_sleep() warning.

There was a follow-up on this:

http://thread.gmane.org/gmane.linux.ports.arm.kernel/263765

>  arch/arm/kernel/entry-armv.S       | 11 ++++++++---
>  arch/arm/mach-ep93xx/crunch-bits.S |  7 ++++++-
>  arch/arm/vfp/entry.S               |  2 +-
>  3 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
> index b3fb8c9..bed1567 100644
> --- a/arch/arm/kernel/entry-armv.S
> +++ b/arch/arm/kernel/entry-armv.S
> @@ -399,6 +399,7 @@ ENDPROC(__irq_usr)
>  	.align	5
>  __und_usr:
>  	usr_entry
> +	enable_irq

The problem is that you can be preempted here and parts of the kernel
may not cope with this.

The reason I haven't pushed my patches to mainline is that I was worried
about such preemption cases. We enter iwmmxt_task_enable, for example,
with interrupts and preemption enabled; we disable preemption there, but
is it too late?

I don't have a way to test these, and even for VFP I'm not sure testing
would guarantee it in all scenarios. So it needs more thinking. Please
have a look at my patches; if we get to the conclusion there is no
issue, I'll push them upstream.

-- 
Catalin
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-06 11:20 ` Catalin Marinas
@ 2014-02-06 18:10   ` Kevin Bracey
  2014-02-06 20:54     ` Russell King - ARM Linux
  0 siblings, 1 reply; 8+ messages in thread

From: Kevin Bracey @ 2014-02-06 18:10 UTC (permalink / raw)
To: linux-arm-kernel

On 06/02/2014 13:20, Catalin Marinas wrote:
> On Tue, Feb 04, 2014 at 10:19:06PM +0200, Kevin Bracey wrote:
>> See http://comments.gmane.org/gmane.linux.ports.arm.omap/59256 for
>> an earlier report of the observed might_sleep() warning.
> There was a follow-up on this:
>
> http://thread.gmane.org/gmane.linux.ports.arm.kernel/263765

Thanks for that pointer - we failed to find that thread. And from it I
see I totally missed the iwmmxt path. :(

> __und_usr:
>  	usr_entry
> +	enable_irq
> The problem is that you can be preempted here and parts of the kernel
> may not cope with this.

A very close analogue to this code is vector_swi. As far as I can see,
having interrupts+preemption enabled in this __und_usr section should be
as safe as it is there, so I'm pretty confident there shouldn't be a
general kernel issue. But...

> The reason I haven't pushed my patches to mainline is that I was worried
> about such preemption cases. We enter iwmmxt_task_enable for example
> with interrupts and preemption enabled, we disable preemption there but
> is it too late? I don't have a way to test these and even for VFP I'm
> not sure testing would guarantee it in all scenarios.

The only concern I can see - and it's a big one - is how a HW
coprocessor responds. If the coprocessor bounces an instruction, and we
switch context before reaching the support code, will the hardware and
its support code cope - both with switching to another context in that
state, and with handling the original bounce in this context when we get
back?

I know that because they're asynchronous, the FPA and VFP have always
had to cope with possible pre-emption in the window between them
deciding they need support and their next opportunity to bounce an
instruction; but what if they get pre-empted between a bounce being
generated and the support code processing it? The VFP code did take
pains to increment the pre-emption count before enabling interrupts, but
it's not obvious whether it requires no pre-emption between the bounce
and the handler, or just no pre-emption during the handler.

For the FPA, we only have FPA emulation, not FPA support code, so
nothing to worry about there.

What about the iwmmxt and the crunch bits? They are only lazy-enable
routines, to activate an inactive coprocessor, which I think makes them
safe. If we schedule before reaching the handler, then when this context
is returned to, the coprocessor must still be disabled in our context,
and the handler can proceed to enable it. And there can't be any other
context state to worry about, assuming the lazy enable scheme works.

Personally, I've always wondered why ARM never implemented a CP15
register you could read to find out what instruction generated a SWI or
undefined instruction exception, to save all this hassle. It's always
seemed to me like an obvious addition, ever since the I+D caches were
split; why do I have to load my code into the data cache?

> So it needs more thinking. Please have a look at my patches, if we get
> to the conclusion there is no issue, I'll push them upstream.

I think if there's a problem, it will be in the VFP code - the others
seem straightforward. The interrupt enable has to be made safe: as long
as another core can age a page before the handler is reached, the VFP
support code will /have/ to be able to cope with pre-emption between the
fault and the handler, if it doesn't already. We need a VFP expert to
figure it out properly.

We will stress test your posted patch for our failure case, which is
Android skipping VFP instructions thanks to the extra irqs_disabled()
test in do_page_fault(). I think we should also include the fixup
handler modification from my patch - __und_usr having a fixup handler
that skips the instruction if do_page_fault() fails is just asking for
trouble, so it should be aligned with vector_swi so that it retries.

I share Alexey Ignatov's concern that your patch ends up running the
support code with interrupts either on or off depending on whether you
came from user or supervisor mode, which feels a little odd. But then,
always enabling interrupts like the current code does, when they might
have been off in the SVC mode context, also seems wrong. I did ponder
for my version whether the __und_svc path should restore the original
IRQ status before going to call_fpe, to make the handler entry
environment more uniform - IRQ status would always be as in the original
context. Maybe it's a fine detail, as we don't really expect to be
handling any of these instructions from the kernel anyway, but if we are
going to the trouble of sending __und_svc to the handlers rather than
going straight to __und_svc_fault...

Kevin
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-06 18:10 ` Kevin Bracey
@ 2014-02-06 20:54   ` Russell King - ARM Linux
  2014-02-07 10:00     ` vinayak menon
  2014-02-07 16:25     ` Catalin Marinas
  0 siblings, 2 replies; 8+ messages in thread

From: Russell King - ARM Linux @ 2014-02-06 20:54 UTC (permalink / raw)
To: linux-arm-kernel

On Thu, Feb 06, 2014 at 08:10:44PM +0200, Kevin Bracey wrote:
> The VFP code did take pains to increment the pre-emption count before
> enabling interrupts, but it's not obvious whether it requires no
> pre-emption between the bounce and handler, or just no pre-emption
> during the handler.

Just take a moment to think about this.

- Userspace raises an undefined instruction exception, running on CPU 0.
- The kernel starts to handle the exception by saving the ARM state.
- Enables interrupts.
- Context switch occurs.
- VFP state is saved and the VFP is disabled.

Now, on CPU 1...

- Context switch occurs to the above thread (due to thread migration).
- VFP will be in a disabled state.
- We read FPEXC, find that the VFP is disabled.
- Load saved state into the VFP.
- Check for an exception recorded in the VFP state, and handle it.

So, that seems to work. I suspect most things will work just fine in
this case. What /can't/ be allowed to happen is that we start reading
state from the hardware to handle the exception (or are even
mid-restoring the state) and get preempted - if we do, we'll end up in
a right mess, because we'll end up with inconsistent state.

> What about the iwmmxt and the crunch bits? They are only lazy-enable
> routines, to activate an inactive coprocessor. Which I think makes them
> safe. If we schedule before reaching the handler, when this context is
> returned to, the coprocessor must still be disabled in our context - the
> handler can proceed to enable it. And there can't be any other context
> state to worry about, assuming the lazy enable scheme works.

Again, the restore needs to happen with preemption disabled so that you
don't end up with the state half-restored, and then, when you resume
the thread, restore the other half.

This is actually the case I'm more worried about - whether all the
various handlers are safe being entered with preemption enabled. They
weren't written for it, so the current answer is that they aren't safe.

> I share Alexey Ignatov's concern that your patch ends up running the
> support code with interrupts either on or off depending on whether you
> came from user or supervisor mode, which feels a little odd. But then,
> always enabling interrupts like the current code does, when they might
> have been off in the SVC mode context, also seems wrong.

That is indeed wrong, but then we /used/ to have the requirement that
the kernel will _never_ execute VFP instructions. That's changed
recently, since we now permit NEON instructions to be executed.

However, the requirements for executing these instructions are very
strict: preemption disabled, and they must not fault. If you do fault,
none of the standard handlers get invoked; instead you get a critical
kernel message. So, if it does happen, it's quite a bug already.

The only case where the kernel intentionally provokes such a fault
(which won't even reach the normal VFP handler) is when testing for the
presence of VFP and there is no hardware fitted.

-- 
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up. Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".
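Russell's point about half-restored state can be sketched with a toy model (a simplification for illustration, not the VFP code; `restore_state`, `CPU`, and `interrupt_at` are invented names). The "hardware" holds one thread's FP register bank; if the restore runs with the preempt count at zero, a preemption partway through leaves only part of the bank loaded.

```python
# Model of lazy FP state restore and why it must run with preemption
# disabled. Illustrative only; not kernel code.

class CPU:
    def __init__(self):
        self.hw = {}              # FP "hardware" register bank
        self.preempt_count = 0

def restore_state(cpu, sw_state, preempt_safe, interrupt_at=1):
    """Copy sw_state into hw; simulate a preemption arriving mid-restore.

    Returns True if the restore completed, False if it was cut short.
    """
    if preempt_safe:
        cpu.preempt_count += 1    # like preempt_disable()
    for i, (reg, val) in enumerate(sorted(sw_state.items())):
        if i == interrupt_at and cpu.preempt_count == 0:
            return False          # preempted: state is half-restored
        cpu.hw[reg] = val
    if preempt_safe:
        cpu.preempt_count -= 1    # like preempt_enable()
    return True
```

With preemption disabled the full bank lands in the "hardware"; without it, the thread would resume believing its state is live while only part of it was loaded - exactly the inconsistent state described above.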
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-06 20:54 ` Russell King - ARM Linux
@ 2014-02-07 10:00   ` vinayak menon
  2014-02-07 10:06     ` Russell King - ARM Linux
  0 siblings, 1 reply; 8+ messages in thread

From: vinayak menon @ 2014-02-07 10:00 UTC (permalink / raw)
To: linux-arm-kernel

Can we do something similar to what pagefault_disable() does, so that
we jump directly to the fixup after the in_atomic() check in
do_page_fault()?

-------------------- >8 ----------------------------------------

Disable pagefault (preemption) before executing ldrt/ldrht instructions
in __und_usr, to tell the page fault handler that we are in atomic
context and need to jump directly to the fixup. In the fixup, correct
the PC to re-execute the faulted instruction (from Kevin Bracey's
patch).

Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
Signed-off-by: Kevin Bracey <kevin@bracey.fi>
---
 arch/arm/include/asm/assembler.h | 26 ++++++++++++++++++++++++++
 arch/arm/kernel/entry-armv.S     |  8 ++++++++
 2 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 5c22851..cba5f2c 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -23,6 +23,7 @@
 #include <asm/ptrace.h>
 #include <asm/domain.h>
 #include <asm/opcodes-virt.h>
+#include <asm/asm-offsets.h>
 
 #define IOMEM(x)	(x)
 
@@ -85,6 +86,31 @@
 #endif
 
 /*
+ * Enable/Disable page fault
+ */
+#ifdef CONFIG_PREEMPT_COUNT
+	.macro	pgflt_disable, ti, tmp
+	get_thread_info \ti
+	ldr	\tmp, [\ti, #TI_PREEMPT]
+	add	\tmp, \tmp, #1
+	str	\tmp, [\ti, #TI_PREEMPT]
+	.endm
+
+	.macro	pgflt_enable, ti, tmp
+	get_thread_info \ti
+	ldr	\tmp, [\ti, #TI_PREEMPT]
+	sub	\tmp, \tmp, #1
+	str	\tmp, [\ti, #TI_PREEMPT]
+	.endm
+#else
+	.macro	pgflt_disable, ti, tmp
+	.endm
+
+	.macro	pgflt_enable, ti, tmp
+	.endm
+#endif
+
+/*
  * Enable and disable interrupts
  */
 #if __LINUX_ARM_ARCH__ >= 6
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 1879e8d..5ed57d8 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -416,7 +416,9 @@ __und_usr:
 	tst	r3, #PSR_T_BIT			@ Thumb mode?
 	bne	__und_usr_thumb
 	sub	r4, r2, #4			@ ARM instr at LR - 4
+	pgflt_disable r10, r3
 1:	ldrt	r0, [r4]
+	pgflt_enable r10, r3
 ARM_BE8(rev	r0, r0)				@ little endian instruction
 
 	@ r0 = 32-bit ARM instruction which caused the exception
@@ -450,11 +452,15 @@ __und_usr_thumb:
  */
 	.arch	armv6t2
 #endif
+	pgflt_disable r10, r3
 2:	ldrht	r5, [r4]
+	pgflt_enable r10, r3
 ARM_BE8(rev16	r5, r5)				@ little endian instruction
 	cmp	r5, #0xe800			@ 32bit instruction if xx != 0
 	blo	__und_usr_fault_16		@ 16bit undefined instruction
+	pgflt_disable r10, r3
 3:	ldrht	r0, [r2]
+	pgflt_enable r10, r3
 ARM_BE8(rev16	r0, r0)				@ little endian instruction
 	add	r2, r2, #2			@ r2 is PC + 2, make it PC + 4
 	str	r2, [sp, #S_PC]			@ it's a 2x16bit instr, update
@@ -484,6 +490,8 @@ ENDPROC(__und_usr)
  */
 	.pushsection .fixup, "ax"
 	.align	2
+	str	r4, [sp, #S_PC]
+	pgflt_enable r10, r3
 4:	mov	pc, r9
 	.popsection
 	.pushsection __ex_table,"a"
--
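The idea behind Vinayak's `pgflt_disable`/`pgflt_enable` macros can be modelled in a few lines (a hedged sketch of the pagefault_disable() concept, not the kernel implementation; function names are invented): raising the preempt count makes the fault handler treat the context as atomic, so it takes the exception-table fixup path instead of sleeping to fault the page in.

```python
# Sketch of the pagefault_disable() idea: a non-zero preempt count makes
# the fault handler bail out to the fixup rather than sleep. Not kernel
# code; names are illustrative.

preempt_count = 0

def in_atomic():
    return preempt_count != 0

def handle_user_fault():
    """How the modelled do_page_fault() resolves a fault on a user load."""
    if in_atomic():
        return "fixup"        # cannot sleep: go straight to the fixup
    return "page_in"          # may sleep: fault the page in and retry

def load_user_instruction():
    """Model of '1: ldrt r0, [r4]' bracketed by pgflt_disable/enable."""
    global preempt_count
    preempt_count += 1        # pgflt_disable
    outcome = handle_user_fault()
    preempt_count -= 1        # pgflt_enable
    return outcome
```

In this model a fault taken outside the bracketed load sleeps and pages the memory in, while a fault inside it reaches the fixup immediately - which is why the fixup must then rewind the PC so the load is retried rather than skipped.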
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-07 10:00 ` vinayak menon
@ 2014-02-07 10:06   ` Russell King - ARM Linux
  2014-02-07 12:19     ` vinayak menon
  0 siblings, 1 reply; 8+ messages in thread

From: Russell King - ARM Linux @ 2014-02-07 10:06 UTC (permalink / raw)
To: linux-arm-kernel

On Fri, Feb 07, 2014 at 03:30:13PM +0530, vinayak menon wrote:
> Can we do something similar to what pagefault_disable does ? So that
> we jump directly to fixup, after the in_atomic() check in
> do_page_fault.

I don't see any point to this change - it does nothing to address the
point I raised.

-- 
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up. Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-07 10:06 ` Russell King - ARM Linux
@ 2014-02-07 12:19   ` vinayak menon
  0 siblings, 0 replies; 8+ messages in thread

From: vinayak menon @ 2014-02-07 12:19 UTC (permalink / raw)
To: linux-arm-kernel

> I don't see any point to this change - it does nothing to address the
> point I raised.

I see. The issue that was observed can be summarized like this. There
was a userspace crash caused by an 8-byte offset to SP when returning
from a function (strtoimax). Analysis showed that the vpush {d8}
instruction at the beginning of strtoimax failed to execute, but the
vpop {d8} at the end did execute. This resulted in an 8-byte offset to
SP, and hence the crash.

Further debugging showed that this was happening because one of the
ldrht instructions in __und_usr was hitting a page fault, and the fixup
code was returning to the next instruction. A correction to the PC was
added in the fixup ("str r4, [sp, #S_PC]" in the patch above) to fix
the problem, but we were left with the might_sleep warning.

Reading the discussions, I thought enabling IRQs was an issue, and felt
that without enabling interrupts, just disabling preemption before
calling ldrht should stop the warnings, because do_page_fault() jumps
to the fixup if it is in an atomic context.

Thanks
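The crash mechanism Vinayak describes is simple enough to model numerically (illustrative only; the byte values are just the size of one D register): if the faulting `vpush {d8}` at function entry is silently skipped but the matching `vpop {d8}` at function exit still runs, the stack pointer ends up 8 bytes off on return.

```python
# Model of the strtoimax crash: a skipped vpush with an executed vpop
# leaves SP unbalanced. Not real VFP semantics; a stack-pointer
# arithmetic sketch only.

def call_function(skip_vpush):
    sp = 1000                 # stack pointer on entry, in bytes
    if not skip_vpush:
        sp -= 8               # vpush {d8}: push one 8-byte D register
    # ... function body ...
    sp += 8                   # vpop {d8} always executes at exit
    return sp - 1000          # SP offset relative to entry; 0 is correct
```

A balanced call returns an offset of 0; skipping the vpush returns +8, which corrupts the caller's stack frame on return - the failure the PC correction in the fixup prevents.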
* [PATCH] ARM: enable IRQs in user undefined instruction vector
  2014-02-06 20:54 ` Russell King - ARM Linux
  2014-02-07 10:00   ` vinayak menon
@ 2014-02-07 16:25   ` Catalin Marinas
  1 sibling, 0 replies; 8+ messages in thread

From: Catalin Marinas @ 2014-02-07 16:25 UTC (permalink / raw)
To: linux-arm-kernel

On Thu, Feb 06, 2014 at 08:54:48PM +0000, Russell King - ARM Linux wrote:
> On Thu, Feb 06, 2014 at 08:10:44PM +0200, Kevin Bracey wrote:
> > The VFP code did take pains to increment the pre-emption count before
> > enabling interrupts, but it's not obvious whether it requires no
> > pre-emption between the bounce and handler, or just no pre-emption
> > during the handler.
>
> Just take a moment to think about this.
>
> - Userspace raises an undefined instruction exception, running on CPU 0.
> - The kernel starts to handle the exception by saving the ARM state.
> - Enables interrupts.
> - Context switch occurs.
> - VFP state is saved and the VFP is disabled.

There is one case where it isn't saved, because of FPEXC_EN being
cleared, but we don't care, since the VFP registers haven't been
touched.

> Now, on CPU1...
> - Context switch occurs to the above thread (due to thread migration).
> - VFP will be in a disabled state.
> - We read FPEXC, find that the VFP is disabled
> - Load saved state into the VFP
> - Check for an exception recorded in the VFP state, and handle it.

This should work, since we check the restored registers.

> > What about the iwmmxt and the crunch bits? They are only lazy-enable
> > routines, to activate an inactive coprocessor. Which I think makes them
> > safe. If we schedule before reaching the handler, when this context is
> > returned to, the coprocessor must still be disabled in our context - the
> > handler can proceed to enable it. And there can't be any other context
> > state to worry about, assuming the lazy enable scheme works.
>
> Again, the restore needs to happen with preemption disabled so that
> you don't end up with the state half-restored, and then when you
> resume the thread, you restore the other half.
>
> This is actually the case I'm more worried about - whether all the
> various handlers are safe being entered with preemption enabled.
> They weren't written for it so the current answer is that they
> aren't safe.

My patchset disables preemption on entering those handlers via
__und_usr, so they don't touch their state with preemption enabled
(well, review is still needed; I can repost them).

> > I share Alexey Ignatov's concern that your patch ends up running the
> > support code with interrupts either on or off depending on whether you
> > came from user or supervisor mode, which feels a little odd. But then,
> > always enabling interrupts like the current code does, when they might
> > have been off in the SVC mode context, also seems wrong.
>
> That is indeed wrong, but then we /used/ to have the requirement that
> the kernel will _never_ execute VFP instructions. That's changed
> recently since we now permit neon instructions to be executed.

Even if we did take VFP faults from the kernel, I think they are fine
to be executed with interrupts disabled. The problem is when we get
page faults, and those may sleep - but that's not the case with kernel
mappings.

An alternative would be to only enable interrupts around the user
access in __und_usr and disable them again afterwards.

-- 
Catalin
end of thread, other threads:[~2014-02-07 16:25 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2014-02-04 20:19 [PATCH] ARM: enable IRQs in user undefined instruction vector Kevin Bracey
2014-02-06 11:20 ` Catalin Marinas
2014-02-06 18:10   ` Kevin Bracey
2014-02-06 20:54     ` Russell King - ARM Linux
2014-02-07 10:00       ` vinayak menon
2014-02-07 10:06         ` Russell King - ARM Linux
2014-02-07 12:19           ` vinayak menon
2014-02-07 16:25       ` Catalin Marinas
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).