public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* CONFIG_PREEMPT x86 assembly question
@ 2004-11-20 14:43 Nikita V. Youshchenko
  2004-11-22 22:10 ` Michal Schmidt
  0 siblings, 1 reply; 2+ messages in thread
From: Nikita V. Youshchenko @ 2004-11-20 14:43 UTC (permalink / raw)
  To: linux-kernel

Hello

While lazily examining the kernel code, I found the following interesting point.

In arch/i386/kernel/entry.S

...
ENTRY(resume_kernel)
 cmpl $0,TI_preempt_count(%ebp) # non-zero preempt_count ?
 jnz restore_all
need_resched:
 movl TI_flags(%ebp), %ecx # need_resched set ?
 testb $_TIF_NEED_RESCHED, %cl
 jz restore_all
 testl $IF_MASK,EFLAGS(%esp)     # interrupts off (exception path) ?
 jz restore_all
 movl $PREEMPT_ACTIVE,TI_preempt_count(%ebp)
 sti
 call schedule
 movl $0,TI_preempt_count(%ebp)
 cli
 jmp need_resched
#endif
...

Why, after returning from schedule(), is 0 first written to 
TI_preempt_count(%ebp), and only then are interrupts disabled?
Why not the reverse order?

As far as I understand, the idea of the preempt_count flag is to avoid 
nested preemptions. The fact that the flag is reset before interrupts are 
disabled somewhat breaks this: an interrupt may arrive just after the flag 
is reset, causing a nested preemption while preempt_count is already zero. 
In a very improbable case this could happen an unlimited number of times, 
causing a kernel stack overflow.

Very improbable? But couldn't this be the cause of the kernel lockups I 
suffered several times while writing a DVD to probably broken media (which 
could cause an interrupt storm)?..

^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: CONFIG_PREEMPT x86 assembly question
  2004-11-20 14:43 CONFIG_PREEMPT x86 assembly question Nikita V. Youshchenko
@ 2004-11-22 22:10 ` Michal Schmidt
  0 siblings, 0 replies; 2+ messages in thread
From: Michal Schmidt @ 2004-11-22 22:10 UTC (permalink / raw)
  To: Nikita V. Youshchenko; +Cc: linux-kernel

Nikita V. Youshchenko wrote:
> In arch/i386/kernel/entry.S
> 
> ...
> ENTRY(resume_kernel)
>  cmpl $0,TI_preempt_count(%ebp) # non-zero preempt_count ?
>  jnz restore_all
> need_resched:
>  movl TI_flags(%ebp), %ecx # need_resched set ?
>  testb $_TIF_NEED_RESCHED, %cl
>  jz restore_all
>  testl $IF_MASK,EFLAGS(%esp)     # interrupts off (exception path) ?
>  jz restore_all
>  movl $PREEMPT_ACTIVE,TI_preempt_count(%ebp)
>  sti
>  call schedule
>  movl $0,TI_preempt_count(%ebp)
>  cli
>  jmp need_resched
> #endif
> ...
> 
> Why, after returning from schedule(), is 0 first written to 
> TI_preempt_count(%ebp), and only then are interrupts disabled?
> Why not the reverse order?
> 

It's already reversed in linux-2.6.10-rc2:
  ...
  call schedule
  cli
  movl $0,TI_preempt_count(%ebp)
  jmp need_resched

Michal

^ permalink raw reply	[flat|nested] 2+ messages in thread

end of thread, other threads:[~2004-11-22 22:15 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-11-20 14:43 CONFIG_PREEMPT x86 assembly question Nikita V. Youshchenko
2004-11-22 22:10 ` Michal Schmidt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox