* BUG: scheduling while atomic
[not found] <CABOM9ZqSazS-NkD980f6sUyy=hk1aLVY+Vjwcxs3mGybvbkgaQ@mail.gmail.com>
@ 2012-04-18 6:44 ` Arun KS
2012-04-18 7:31 ` Dave Hylands
2012-04-18 8:58 ` Arun KS
0 siblings, 2 replies; 13+ messages in thread
From: Arun KS @ 2012-04-18 6:44 UTC (permalink / raw)
To: kernelnewbies
Hello Guys,
The system works normally after this BUG.
PC is at 0x400b4614, probably an mmapped address.
Just wondering how this BUG can happen while a process is running in user
space.
Could it be something like this:
1) enter the kernel from userspace through some system call,
2) the kernel disables interrupts and returns to user space,
3) and now the BUG can happen in user space?
Any thoughts?
shell at android: # ls
device[ 40.603515] BUG: scheduling while atomic: Binder Thread
#/1355/0x00010003
[ 40.610290] Modules linked in:
[ 40.613342]
[ 40.614837] Pid: 1355, comm: Binder Thread #
[ 40.619506] CPU: 0 Tainted: G W (3.0.15+ #174)
[ 40.625061] PC is at 0x400b4614
[ 40.628173] LR is at 0x408d83c9
[ 40.631317] pc : [<400b4614>] lr : [<408d83c9>] psr: 40000010
[ 40.631317] sp : 50551918 ip : 4092d1c0 fp : 505519a4
[ 40.642730] r10: 50551974 r9 : 5016de28 r8 : 1f600009
[ 40.647949] r7 : 00000000 r6 : 00000000 r5 : 5055194c r4 : 00f09f90
[ 40.654418] r3 : 40931c58 r2 : 00000000 r1 : 00f09f90 r0 : 00000006
[ 40.660919] Flags: nZcv IRQs on FIQs on Mode USER_32 ISA ARM
Segment user
[ 40.668121] Control: 10c53c7d Table: 91184059 DAC: 00000015
shell at android: #
shell at android: #
shell at android: #
* BUG: scheduling while atomic
2012-04-18 6:44 ` BUG: scheduling while atomic Arun KS
@ 2012-04-18 7:31 ` Dave Hylands
2012-04-18 8:08 ` Arun KS
2012-04-18 8:58 ` Arun KS
1 sibling, 1 reply; 13+ messages in thread
From: Dave Hylands @ 2012-04-18 7:31 UTC (permalink / raw)
To: kernelnewbies
Hi Arun,
On Tue, Apr 17, 2012 at 11:44 PM, Arun KS <getarunks@gmail.com> wrote:
>
> Hello Guys,
>
> System is working normal after this BUG.
> PC is at 0x400b4614, probably a mmaped address.
>
> Just wondering how can this BUG happen when a process is running in user
> space.
>
> Can it be something like this
> 1) enter to kernel from userspace through some system call.
> 2) kernel disables the interrupt and return to user space.
Don't do that.
> 3) and now it can happen in user space?
Because something in userspace made a blocking call, which caused a
context switch to occur while your driver had erroneously left interrupts
disabled.
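The failure mode described here usually comes from an unbalanced IRQ-disable in a driver. A sketch in kernel-style C of the classic bug shape (purely illustrative: foo_lock, foo_hw_ready() and foo_hw_kick() are hypothetical names, and a fragment like this only compiles inside a kernel tree):

```c
/* Illustrative sketch, not real driver code: an error path returns
 * with the lock held and interrupts still disabled, so the next
 * blocking call in any context trips "scheduling while atomic". */
static DEFINE_SPINLOCK(foo_lock);

static int foo_do_io(struct foo_dev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(&foo_lock, flags);	/* interrupts now off   */
	if (!foo_hw_ready(dev))
		return -EBUSY;			/* BUG: IRQs left off   */
	foo_hw_kick(dev);
	spin_unlock_irqrestore(&foo_lock, flags);
	return 0;
}
```

Every early return between spin_lock_irqsave() and spin_unlock_irqrestore() must go through the unlock, otherwise the task returns to userspace with interrupts off and the next schedule() reports the BUG.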
--
Dave Hylands
Shuswap, BC, Canada
http://www.davehylands.com
* BUG: scheduling while atomic
2012-04-18 7:31 ` Dave Hylands
@ 2012-04-18 8:08 ` Arun KS
2012-04-18 8:14 ` Dave Hylands
2012-04-18 8:27 ` Srivatsa S. Bhat
0 siblings, 2 replies; 13+ messages in thread
From: Arun KS @ 2012-04-18 8:08 UTC (permalink / raw)
To: kernelnewbies
Hi Dave,
Thanks for your reply.
On Wed, Apr 18, 2012 at 1:01 PM, Dave Hylands <dhylands@gmail.com> wrote:
> Hi Arun,
>
> On Tue, Apr 17, 2012 at 11:44 PM, Arun KS <getarunks@gmail.com> wrote:
> >
> > Hello Guys,
> >
> > System is working normal after this BUG.
> > PC is at 0x400b4614, probably a mmaped address.
> >
> > Just wondering how can this BUG happen when a process is running in user
> > space.
> >
> > Can it be something like this
> > 1) enter to kernel from userspace through some system call.
> > 2) kernel disables the interrupt and return to user space.
>
> Don't do that
>
I don't do that. The scenario mentioned is just a wild guess.
>
> > 3) and now it can happen in user space?
>
> Because something in userspace made a blocking call which would cause
> a context switch to occur and your driver erroneously left interrupts
> disabled.
>
In that case, my system should have become unstable afterwards if interrupts
were left disabled. But that is not happening.
If we return to user space with interrupts disabled, can we switch back
to the kernel using a system call (because interrupts are already disabled)?
Arun
> --
> Dave Hylands
> Shuswap, BC, Canada
> http://www.davehylands.com
>
* BUG: scheduling while atomic
2012-04-18 8:08 ` Arun KS
@ 2012-04-18 8:14 ` Dave Hylands
2012-04-18 8:27 ` Srivatsa S. Bhat
1 sibling, 0 replies; 13+ messages in thread
From: Dave Hylands @ 2012-04-18 8:14 UTC (permalink / raw)
To: kernelnewbies
Hi Arun,
On Wed, Apr 18, 2012 at 1:08 AM, Arun KS <getarunks@gmail.com> wrote:
> Hi Dave,
>
> Thanks for your reply.
>
> On Wed, Apr 18, 2012 at 1:01 PM, Dave Hylands <dhylands@gmail.com> wrote:
>>
>> Hi Arun,
>>
>> On Tue, Apr 17, 2012 at 11:44 PM, Arun KS <getarunks@gmail.com> wrote:
>> >
>> > Hello Guys,
>> >
>> > System is working normal after this BUG.
>> > PC is at 0x400b4614, probably a mmaped address.
>> >
>> > Just wondering how can this BUG happen when a process is running in user
>> > space.
>> >
>> > Can it be something like this
>> > 1) enter to kernel from userspace through some system call.
>> > 2) kernel disables the interrupt and return to user space.
>>
>> Don't do that
>
>
> I don't do that. This scenario mentioned is a just a wild guess.
>>
>>
>> > 3) and now it can happen in user space?
>>
>> Because something in userspace made a blocking call which would cause
>> a context switch to occur and your driver erroneously left interrupts
>> disabled.
>
> In that case, my system should have been unstable afterwards if interrupts
> are left disabled. But that is not happening.
>
> If we return to user space with interrupts disabled, can we switch back
> again to kernel using a system cal(because interrupts are already disabled)?
As long as you don't do anything that would need to block.
A buggy driver could also interrupt user code (via a hardware interrupt),
disable interrupts, and fail to re-enable them.
--
Dave Hylands
Shuswap, BC, Canada
http://www.davehylands.com
* BUG: scheduling while atomic
2012-04-18 8:08 ` Arun KS
2012-04-18 8:14 ` Dave Hylands
@ 2012-04-18 8:27 ` Srivatsa S. Bhat
2012-04-18 8:40 ` Arun KS
1 sibling, 1 reply; 13+ messages in thread
From: Srivatsa S. Bhat @ 2012-04-18 8:27 UTC (permalink / raw)
To: kernelnewbies
On 04/18/2012 01:38 PM, Arun KS wrote:
> Hi Dave,
>
> Thanks for your reply.
>
> On Wed, Apr 18, 2012 at 1:01 PM, Dave Hylands <dhylands@gmail.com
> <mailto:dhylands@gmail.com>> wrote:
>
> Hi Arun,
>
> On Tue, Apr 17, 2012 at 11:44 PM, Arun KS <getarunks@gmail.com
> <mailto:getarunks@gmail.com>> wrote:
> >
> > Hello Guys,
> >
> > System is working normal after this BUG.
> > PC is at 0x400b4614, probably a mmaped address.
> >
> > Just wondering how can this BUG happen when a process is running
> in user
> > space.
> >
> > Can it be something like this
> > 1) enter to kernel from userspace through some system call.
> > 2) kernel disables the interrupt and return to user space.
>
> Don't do that
>
>
> I don't do that. This scenario mentioned is a just a wild guess.
>
>
> > 3) and now it can happen in user space?
>
> Because something in userspace made a blocking call which would cause
> a context switch to occur and your driver erroneously left interrupts
> disabled.
>
> In that case, my system should have been unstable afterwards if
> interrupts are left disabled. But that is not happening.
>
> If we return to user space with interrupts disabled, can we switch back
> again to kernel using a system cal(because interrupts are already disabled)?
>
It depends on how many CPUs you have - AFAICS the "interrupts disabled"
discussion above applies to a single CPU, so if you have other CPUs on your
system, you could probably keep using the system for a little while longer.
There is a simple way to check whether interrupts are indeed disabled as
hypothesised: turn on the hard-lockup detector (see
Documentation/lockup-watchdogs.txt for details on what it is and which
config options you have to enable). You can also turn on the soft-lockup
detector and see what you get. Setting the option to panic on hard-lockup/
soft-lockup/hung tasks would be even better for debugging the issue.
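For reference, the detectors mentioned above are controlled by kernel config options roughly like the following (a sketch for ~3.x kernels; option names may differ slightly between versions):

```
CONFIG_LOCKUP_DETECTOR=y
# Panic instead of just warning, so the failing state is captured:
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
```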
Regards,
Srivatsa S. Bhat
* BUG: scheduling while atomic
2012-04-18 8:27 ` Srivatsa S. Bhat
@ 2012-04-18 8:40 ` Arun KS
0 siblings, 0 replies; 13+ messages in thread
From: Arun KS @ 2012-04-18 8:40 UTC (permalink / raw)
To: kernelnewbies
Hi Srivatsa,
On Wed, Apr 18, 2012 at 1:57 PM, Srivatsa S. Bhat <
srivatsa.bhat@linux.vnet.ibm.com> wrote:
> On 04/18/2012 01:38 PM, Arun KS wrote:
>
> > Hi Dave,
> >
> > Thanks for your reply.
> >
> > On Wed, Apr 18, 2012 at 1:01 PM, Dave Hylands <dhylands@gmail.com
> > <mailto:dhylands@gmail.com>> wrote:
> >
> > Hi Arun,
> >
> > On Tue, Apr 17, 2012 at 11:44 PM, Arun KS <getarunks@gmail.com
> > <mailto:getarunks@gmail.com>> wrote:
> > >
> > > Hello Guys,
> > >
> > > System is working normal after this BUG.
> > > PC is at 0x400b4614, probably a mmaped address.
> > >
> > > Just wondering how can this BUG happen when a process is running
> > in user
> > > space.
> > >
> > > Can it be something like this
> > > 1) enter to kernel from userspace through some system call.
> > > 2) kernel disables the interrupt and return to user space.
> >
> > Don't do that
> >
> >
> > I don't do that. This scenario mentioned is a just a wild guess.
> >
> >
> > > 3) and now it can happen in user space?
> >
> > Because something in userspace made a blocking call which would cause
> > a context switch to occur and your driver erroneously left interrupts
> > disabled.
> >
> > In that case, my system should have been unstable afterwards if
> > interrupts are left disabled. But that is not happening.
> >
> > If we return to user space with interrupts disabled, can we switch back
> > again to kernel using a system cal(because interrupts are already
> disabled)?
> >
>
>
> Depends on how many CPUs you have - AFAICS the "interrupts disabled"
> discussion above applies to a single CPU.. so if you have other CPUs on
> your
> system, you could probably use the system for a little more time.
>
>
> Hmm.. I have uniprocessor.
>
> There is a simple way to check if interrupts are indeed disabled as
> hypothesised: turn on the hard-lockup detector (See
> Documentation/lockup-watchdogs.txt for details on what it is and what
> config options you have to enable). You can even turn on the soft-lockup
> detector and see what you get. Setting the option to panic on hard-lockup/
> soft-lockup/hung tasks would be even better, to debug the issue.
>
Thanks for the pointers. I'll try this out.
Arun
>
> Regards,
> Srivatsa S. Bhat
>
>
* BUG: scheduling while atomic
2012-04-18 6:44 ` BUG: scheduling while atomic Arun KS
2012-04-18 7:31 ` Dave Hylands
@ 2012-04-18 8:58 ` Arun KS
2012-04-18 15:40 ` Dave Hylands
1 sibling, 1 reply; 13+ messages in thread
From: Arun KS @ 2012-04-18 8:58 UTC (permalink / raw)
To: kernelnewbies
On Wed, Apr 18, 2012 at 12:14 PM, Arun KS <getarunks@gmail.com> wrote:
>
> Hello Guys,
>
> System is working normal after this BUG.
> PC is at 0x400b4614, probably a mmaped address.
>
> Just wondering how can this BUG happen when a process is running in user
> space.
>
>
> Can it be something like this
> 1) enter to kernel from userspace through some system call.
> 2) kernel disables the interrupt and return to user space.
> 3) and now it can happen in user space?
>
> Any thoughts?
>
> shell at android: # ls
> device[ 40.603515] BUG: scheduling while atomic: Binder Thread
> #/1355/0x00010003
> [ 40.610290] Modules linked in:
> [ 40.613342]
> [ 40.614837] Pid: 1355, comm: Binder Thread #
> [ 40.619506] CPU: 0 Tainted: G W (3.0.15+ #174)
> [ 40.625061] PC is at 0x400b4614
> [ 40.628173] LR is at 0x408d83c9
> [ 40.631317] pc : [<400b4614>] lr : [<408d83c9>] psr: 40000010
> [ 40.631317] sp : 50551918 ip : 4092d1c0 fp : 505519a4
> [ 40.642730] r10: 50551974 r9 : 5016de28 r8 : 1f600009
> [ 40.647949] r7 : 00000000 r6 : 00000000 r5 : 5055194c r4 : 00f09f90
> [ 40.654418] r3 : 40931c58 r2 : 00000000 r1 : 00f09f90 r0 : 00000006
> [ 40.660919] Flags: nZcv IRQs on FIQs on Mode USER_32 ISA ARM
> Segment user
>
But if you look at the flags here, they show that IRQs are on.
[ 40.668121] Control: 10c53c7d Table: 91184059 DAC: 00000015
>
> shell at android: #
> shell at android: #
> shell at android: #
>
>
>
>
* BUG: scheduling while atomic
2012-04-18 8:58 ` Arun KS
@ 2012-04-18 15:40 ` Dave Hylands
0 siblings, 0 replies; 13+ messages in thread
From: Dave Hylands @ 2012-04-18 15:40 UTC (permalink / raw)
To: kernelnewbies
Hi Arun,
On Wed, Apr 18, 2012 at 1:58 AM, Arun KS <getarunks@gmail.com> wrote:
>
>
> On Wed, Apr 18, 2012 at 12:14 PM, Arun KS <getarunks@gmail.com> wrote:
>>
>>
>> Hello Guys,
>>
>> System is working normal after this BUG.
>> PC is at 0x400b4614, probably a mmaped address.
>>
>> Just wondering how can this BUG happen when a process is running in user
>> space.
>>
>>
>> Can it be something like this
>> 1) enter to kernel from userspace through some system call.
>> 2) kernel disables the interrupt and return to user space.
>> 3) and now it can happen in user space?
>>
>> Any thoughts?
>>
>> shell at android: # ls
>> device[   40.603515] BUG: scheduling while atomic: Binder Thread
>> #/1355/0x00010003
>> [   40.610290] Modules linked in:
>> [   40.613342]
>> [   40.614837] Pid: 1355, comm:      Binder Thread #
>> [   40.619506] CPU: 0    Tainted: G        W    (3.0.15+ #174)
>> [   40.625061] PC is at 0x400b4614
>> [   40.628173] LR is at 0x408d83c9
>> [   40.631317] pc : [<400b4614>]    lr : [<408d83c9>]    psr: 40000010
>> [   40.631317] sp : 50551918  ip : 4092d1c0  fp : 505519a4
>> [   40.642730] r10: 50551974  r9 : 5016de28  r8 : 1f600009
>> [   40.647949] r7 : 00000000  r6 : 00000000  r5 : 5055194c  r4 : 00f09f90
>> [   40.654418] r3 : 40931c58  r2 : 00000000  r1 : 00f09f90  r0 : 00000006
>> [   40.660919] Flags: nZcv  IRQs on  FIQs on  Mode USER_32  ISA ARM
>> Segment user
>
>
> But if you look at the flags here, it is showing that IRQs are on.
>
>> [   40.668121] Control: 10c53c7d  Table: 91184059  DAC: 00000015
So, an atomic context, as far as the kernel is concerned, also
includes disabling preemption, not just disabling interrupts.
--
Dave Hylands
Shuswap, BC, Canada
http://www.davehylands.com
* Bug:scheduling while atomic..
@ 2011-12-05 9:12 sandeep kumar
2011-12-05 19:03 ` Jonathan Neuschäfer
2011-12-05 19:16 ` Jonathan Neuschäfer
0 siblings, 2 replies; 13+ messages in thread
From: sandeep kumar @ 2011-12-05 9:12 UTC (permalink / raw)
To: kernelnewbies
Hi all,
Here is the dump I got in dmesg:
<3>[ 4940.803872] BUG: scheduling while atomic: rild/144/0x00000908
<4>[ 4940.803895] Modules linked in: mwlan_aarp(P) bthid
<4>[ 4940.803918]
<4>[ 4940.803927] Pid: 144, comm: rild
<4>[ 4940.803943] CPU: 0 Tainted: P W (2.6.38.6 #1)
<4>[ 4940.803973] PC is at dpram_write+0x3e4/0x848
<4>[ 4940.803988] LR is at 0xad
<4>[ 4940.804002] pc : [<c03b8b80>] lr : [<000000ad>] psr: 20000013
<4>[ 4940.804010] sp : e8379e50 ip : c09a4578 fp : e8379e8c
<4>[ 4940.804022] r10: e8308450 r9 : e89d5400 r8 : e849ba40
<4>[ 4940.804037] r7 : e89d5400 r6 : 0000000d r5 : 0000000d r4 : c078f324
<4>[ 4940.804050] r3 : fa14c4b0 r2 : 000001ad r1 : 00000008 r0 : 0000007e
<4>[ 4940.804067] Flags: nzCv IRQs on FIQs on Mode SVC_ate -2017760,
reprogram it
<3>[ 4940.915690] msm_timer_enter_idlable: 2853c059 DAC: 00000015
<4>[ 4940.804095]
<4>[ 4940.804098] PC: 0xc03b8b00:
<4>[ 4940.804, reprogram it
<3>[ 4940.915719]fc07c e142c0b1 e2400001 e3110001 e1a000c0 02632000
<4>[ 4940.804148] 8b20 0a00000b ea000003 e551c001 e5512002 e182240c
e0c320b2 e3500000 e2811002
<4>[ 4940.804188] 8b40 e2400001 1afffff7 ea0000b6 e19cc0b1 e0c3c0b2
e3500000 e083c002 e2400001
<4>[ 4940.804-2019424, reprogram it
<3>[ 494025005 e5933000 e1560005 b1a05006 a1a05005 e3550000
<4>[ 4940.804270] 8b80 da00004d e0822001 e1aer_idle: timer late -2019968,
re 0a00000e e3170001
<4>[ 4940.804310] 8ba0 e2450001 015320b1 01d710b0 115310b1 115720b1
06ef2072 01822401 16ef1071
<4>[ 4940.804350] 8bc0 13c220ff 11822001 06ff2072 e2871001 e14320b1
e2833001 e3100001 0a000022
<4>[ 4940.804390] 8be0 e2432001 e241c001 e0822000 e08cc000 e3120001
e20ce001 1a000006 e35e0000
<4>[ 4940.804432]
<4>[ 4940.804435] SP: 0xe8379dd0:
<4>[ 4940.804443] 9dd0 e8379dec e8379de0 c00b1644 c00b14c4 e8379e04
e8379df0 c0033084 c00b1600
<4>[ 4940.804485] 9df0 ffffffff fa000000 e8379e8c e8379e08 c003918c
c003300c 0000007e 00000008
<4>[ 4940.804523] 9e10 000001ad fa14c4b0 c078f324 0000000d 0000000d
e89d5400 e849ba40 e89d5400
<4>[ 4940.804563] 9e30 e8308450 e8379e8c c09a4578 e8379e50 000000ad
c03b8b80 20000013 ffffffff
<4>[ 4940.804603] 9e50 60000013 e8379ec0 e8379e7c 01ad01ad c04dd664
e8308000 0000000d e89d5400
<4>[ 4940.804643] 9e70 e8378000 e849ba40 e89d5400 e8308it
<3>[ 4940.915979] msm_timer_e03b87a8
<4>[ 4940.804683] 9e90 e8379ef4 e8379ea0 c025f3a0 c03b8ff0 c02623e0
c04dd650 e83080d0 e8308158
<4>[ 4940.804723] 9eb0 e8379ef4 00000000 e83746c0 c00a8imer late -2024128,
reprogram it8308000
<4>[ 4940.804765]
<4>[ 4940.804768] IP: 0xc09a44f8:
<4>[ 4940.804777] 44f8 00000000 00000000 c074b1c0 e8b49200 e8b3266024672,
reprogram it
<3>[ 4940.9nter_idle: timer late -2513888, 000000 00000000 00000000
e92b4e60 00000001 e8b4d000 00000001
<4>[ 4940.804855] 4538 00000001 00000001 00000000 00000000 00000000
00000000 00000000 e926de00
<4>[ 4940.804893] 4558 e8b32460 00094] msm_timer_enter_idle: timer8
00000000 e8b7db60 e8b854a8
<4>[ 4940.804933] 4578 fa14c4b0 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.804972] 4598 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.805010] 45b8 00000000 00000000 00000000 00000000 00000005808,
reprogram it
<3>[ 4940.941[ 4940.805048] 45d8 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
<4>[ 4940.805088]
<4>[ 4940.805092e -2027424, reprogram it
<3>[ 4900] 9e0c 00000008 000001ad fa14c4b0 c078f324 0000000d 0000000d
e89d5400 e849ba40
<4>[ 4940.805138] 9e2c e89d5400 e8308450 e8379e8c c09a4578 e8379e50
000000ad c03b8b80 20000013
<4>[ 4940.805178] 9e4c ffffffff 60000013 e8379ec0 e8379e7c 01ad01ad
c04dd664 e8308000 0000000d
<4>[ 4940.805218] 9e6c e89d5400 e8378000 e849ba40 e89d5400 e8308450
e8379e9c program it
<3>[ 4940.916265] msm58] 9e8c c03b87a8 e8379ef4 e8379ea0 c025f3a0 c03b8ff0
c02623e0 c04dd650 e83080d0
<4>[ 4940.805298] 9eac e8308158 e8379ef4 00000000 e83746c0 c00a89ac
e8377bec e8308168 e849ba40
<4>[ 4940.805338] 9ecc e8308000 0000000d e849ba40 e8378000 0000000d
0001e7d0 0000000d e8379f3c
<4>[ 4940.805378] 9eec e8379ef8 c025c280 c025f0d8 00000018 c025f0cc
e836b520 [ 4940.916337] msm_timer_enter_i20]
<4>[ 4940.805423] R3: 0xfa14c430:
<4>[ 4940.805432] c430 0er_enter_idle: timer late -2030700 00000000
00000000 00000000 00000000
<4>[ 4940.805472] c450 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.805512] c470 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.805550] c490 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.805590] c4b0 000100aa 01ad01ad 00127f7e 3a000fit
<3>[ 4940.916439] msm_timer_e0b7f7e
<4>[ 4940.805630] c4d0 3b000816 04031300 0f7f7e00 000c170272, reprogram it
<3>[ 4940.96718000b
<4>[ 4940.805672] c4f0 13003d00 7e000401 1900127f 003e000f 06050113
38013100 7e303031 1a
<3>[ 4940.916480] msm_timer_ent03f0008 00040313 000f7f7e 40000c1b 05031300
00040003 0b7f7e00 00081c00
<4>[ 4940.805755]
<4>[ 4940.805758] R4: 0xc078f2a4:
<4>[ 4940.805767] f2a4 00000000 00000000 00000000 00000000 00000000
00000000 e8b8cf60 c0563518
<4>[_idle: timer late -2034048, repr00000 c078f2cc c078f2cc c078f300
c078f310 00000000 c069ea3b
<4>[ 4940.805845] f2e4 000001b4 c03b3004 00000000 c069ea47 000001b4
c03b3054 00000000 c0643eea
<4>[ 4940.805885] f304 000001b4 c03b2fd0 c03b3a8c c069ea50 000001b442046]
msm_timer_enter_idle: tim 4940.805925] f324 00002000 00002002 00002004
000003fc 00000004 00000006 00000008 000003fc
<4>[ 4940.805963] f344 00080020 00000002 e8308000 00000001 00000001
dead4ead ffffffff ffffffff
<4>[ 4940.806003] f364 00000001 c078f368 c078f368 00002400 00002402
00002404 00005bf8 00000404
<4>[ 4940.806043] f384 00000406 00000408 00001bf8 00040010 00000001
00000000 00000000 00000001
<4>[ 4940.806083]
<4>[ 4940.806087] R7: 0xe89d5380:
<4>[ 4940.806095] 5380 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806132] 53a0 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806170] 53c0 00000000 00000000 00000000 00000000 00000000
00000000 0program it
<3>[ 4940.916725] msm8] 53e0 00000000 00000000 00000037920, reprogram it
<3>[ 4940.90000000 00000000
<4>[ 4940.806247] 5400 30000b7f 00550008 00040113 0000047e 7e30307e
0100007e 007e0100 00000000
<4>[ 4940.806287] 5420 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806325] 5440 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806363] 5460 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806402]
<4>[ 4940.806405] R8: 0xe849b9c0:
<4>[ 4940.806413] b9c0 ffffffff 00000002 00000002 00000003 00000000
00000000 deaf1eed fff9056, reprogram it
<3>[ 4940.942ffffff 00000000 00000000 00000000 00000000 00000000 e8dde5c0
00000000
<4>[ 4940.806492] ba00 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806530] ba20 00000000 00000000 e8e667c0 e849ba2c e849ba2c
e8e668d4 00000000 00000000
<4>[ 4940.806568] ba40 e849bc20 e849bcc0 e9210300 e8de5478 c0545ec8
00000001 dead4ead ffffffff
<4>[ 4940.806608] ba60 ffffffff 00000004 00020802 00000003 00000000
00000000 deaf1eed ffffffff
<4>[ 4940.806648] ba80 ffffffff 00000000 00000000 00000000 00000000
00000000 e8dde5c0 000 timer late -2531264, reprogram 000000 00000000
00000000 00000000 00000000 00000000 ffffffff ffffffff
<4>[ 4940.806727]
<4>[ 4940.806730] R9: 0xe89d5380:
<4>[ 4940.806738] 5380 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806775] 53a0 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806813] 53c0 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806852] 53e0 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000
<4>[ 4940.806890] 5400 30000b7f 00550008 00040113 0000047e 7e30307e e:
timer late -2044576, reprogra4940.806930] 5420 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
<4>[ 4940.806967] 5440 00000000 00000000 00000000 00000000 00000000
er_enter_idle: timer late -253434940.807005] 5460 00000000 00000000
00000000 00000000 00000000 nter_idle: timer late -2045664, 4940.807045]
<4>[ 4940.807048] e: timer late -2534848, reprogra-2045952, reprogram it
<3>[ 4940000 00000000 00000000 00000000 e8383000 00000000
<4>[ 4940.80709r late -2146945184, reprogram it000 00000000 00000001
00000001 dead4ead ffffffff
<4>[ 4940.807133] 8410 ffffffff e8308414 e8308414 e8376000 00000000
e8308400 00000000 00000001
<4>[ 4940.807173] 8430 dead4ead ffffffff fffff3>[ 4940.917215]
msm_timer_enter0000000 e8308428
<4>[ 4940.807213] 8450 00000001 00000001 dead4ead ffffffff ffffffff
e8308464 e7616, reprogram it
<3>[ 4940.917er_idle: timer late -2536800, ree: timer late -9850304,
reprografffffff e830848c
<4>[ 4940.807292] 8490 e830848c e8522000 00000000 e8308478 e89d5400
00000400 00000001 dead4ead
<4>[ 4940.807332] 84b0 ffffffff ffffffff 00000200 e83084bc e83084bc
c025bdcc 0ogram it
<3>[ 4940.942764] msm_t5] [<c003b0e8>] (show_regs+0x0/0
<3>[ 4940.968228] msm_timer_enter_idle: timer late -3026816, re10]
r5:c075dac0 r4:e8379e08
<4>, reprogram it
<3>[ 4940.917330]chedule_bug+0x0/0x60) from [<c04daa48>]
(schedule+0x88/0x5c8)
<4>[ 4940.807453] r5:c075dac0 r4:e83746c0
<4>[ 4940.807478] [<c04da9c0>] (schedule+0x0/0x5c8) from [<c04db1e4>]
(schedule_timeout+0x24/0x1e4)
<4>[ 4940.807502] [<c04db1c0>] (schedule_timeout+0x0/0x1e4) from
[<c04dc9a8>] (__down+0x74/0xa8)
<4>[ 4940.807517] r7:00000002 r6:e83746c0 r5:7fffffff r4:c078f428
<4>[ 4940.807550] [<c04dc934>] (__down+0x0/0xa8) from [<c00ca394>]
(down+0x34/0x44)
<4>[ 4940.807563] r7:e8b9b6ec r6:00000033 r5:60000013 r4:c078f428
<4>[ 4940.807597] [<c00ca360>] (down+0x0/0x44) from [<c03>[ 4940.917447]
msm_timer_enter)
<4>[ 4940.807610] r5:e8b9b6ec r4:c078f370
<4>[ 4940.807635] [<c03b879c>] (dpram_write+0x0/0x848) from [<c03b90b8>]
(vs_write+0xb4/0xf4)
<4>[ 4940.807658] [<c03b9004>] (vs_write+0x0/0xf4) from [<c02cea5c>]
(ppp_async_push+0x10c/0x570)
<4>[ 4940.807680] [<c02ce950>] (ppp_async_push+0x0/0x570) from [<c02cef10>]
(ppp_async_send+0x50/0x58)
<4>[ 4940.809] msm_timer_enter_idle: timer late -2053152, reprogram it
<3>[ ppp_push+0x74/0x5c4)
<4>[ 4940.807722] r5:caf9c0d4 r4:c80abc00
<4>[ 4940.807748] [<c02ca9dc>] (ppp_push+0x0/0x5c4) from [<c02cba3c>]
(ppp_xmit_process+0x430/0x4ec)
<4>[ 4940.807773] [<c02cb60c>] (ppp_xmit_process+0x0/0x4ec) from
[<c02cbc94>] (ppp_start_xm3>[ 4940.968501] msm_timer_enter] [<c02cbaf8>]
(ppp_start_xmit+0x0/0x1c8) from [<c03cc490>] (devimer_enter_idle: timer
late -303>[ 4940.807813] r7:e0bcba5c r6:c80ab800 r5:00002000
r4:e839de20.917619] msm_timer_enter_idle: t(dev_hard_start_xmit+0x0/0x554)
from [<c03dfe7c>] (sch_direct_xmit+0x68/0x1b4)
<4>[ 4940.807877] [<c03dfe14>] (sch_direct_xmit+0x0/0x1b4) from
[<c03cc7d8>] (dev055648, reprogram it
<3>[ 4940.940.807902] [<c03cc5bc>] (dev_queue_xmit+0x0/0x42c) from
[<c040dd3>[ 4940.917677] msm_timer_enter1c)
<4>[ 4940.807925] [<c040da48>] (ip_finish_output+0x0/0x31c) from
[<c040de50>] (ip_output+0xec/0x100)
<4>[ 4940.807948] [<c040dd64>] (ip_output+0x0/0x100) from [<c040cd14>]
(ip_local_out+0x30/0x34)
<4>[ 4940.807962] r9:e8379b6c r8:00000040 r6:e853c660 r5:e92efc20
r4:e839de20
<4>[ 4940.808000] [<c040cce4>] (ip_local_out+0x0/0x34) from [<c040d028>]
(ip_push_pending_frames+0x310/0x3b0)
<4>[ 4940.808015] r5:e92efc20 r4:e839de20
<4>[ 4940.808040] [<c040cd18>] (ip_push_pending_frames+0x0/0x3b0) from
[<c040d2ac>] (ip_send_reply+0x1e4/0x20c)
<4>[ 4940.808065] [<c040d0c8>] (ip_send_reply+0x0/0x20c) from [e -2058400,
reprogram it
<3>[ 49x140/0x16c)
<4>[ 4940.808088] [<c0424890>] (tcp_v4_send_reset+0x0/0x16c) from
[<c0426924>] (tcp_v4_rcv+0x7e0/0x84c)
<4>[ 4940.808103] r6:c800b038 r5:00000000 r4:e839dca0
<4>[ 4940.808133] [<c0426144>] (tcp_v4_rcv+0x0/0x84c) from [<c040899c>]
(ip_local_delimer late -2059488, reprogram it40.808157] [<c040887c>]
(ip_local_deliver_finish+0x0/0x228) from [<c0408b30>]
(ip_local_deliver+0x8c/0x9c)
<4>[ 4940.808172] r9:00000000 r8:00a6518a r7:00000000 r6:00a65748
r5:c800b024
<4>[ 4940.808198] r4:e839dca0
<4>[ 4940.808217] [<c0408aa4>] (ip_local_deliver+0x0/0x9c) from
[<c0408538>] (ip_rcv_finish+0x324/0x344)
<4>[ 4940.808232] r4:e839dca0
<4>[ 4940.808250] [<c0408214>] (ip_rcv_finish+0x0/0x344) from [<c0408838>]
(ip_rcv+0x2e0/0x324)
<4>[ 4940.808263] r9:00000000 r8:00a6518a r7:00000000 r6:00a65743425]
msm_timer_enter_idle: tim r4:e839dca0
<4>[ 4940.808310] [<c0408558>] (ip_rcv+0x0/0x324) from [<c03ca974>]
(__netif_receive_skb+0x34c/0x3a4)
<4>[ 4940.808325] r9:00000008 r8:e8379cfc r7:e839dd48 r6:c0740ff8
r5:c80ab800
<4>[ 4940.808352] r4:e839dca0
<4>[ 4940.808370] [<c03ca628>] (__netif_receive_skb+0x0/0x3a4) from
[<c03caa54>] (process_backlog+0x88/0x138)
<4>[ 4940.808393] [<c03ca9cc>] (process_backlog+0x0/0x138) from
[<c03cadcc>] (net_rx_action+0x70/0x160)
<4>[ 4940.808417] [<c03cad5c>] (net_rx_action+0x0/0x160) from [<c00b1540>]
(__do_softirq+0x88/0x13c)
<4>[ 4940.808440] [<c00b14b8>] (__do_softirq+0x0/0x13c) from [<c00b1644>]
(irq_exit+0x50/0xa4)
<4>[ 4940.808465] [<c00b15f4>] (irq_exit+0x0/0xa4) from [<c0033084>]
(program it
<3>[ 4940.918108] msm_timer_enter_idle: timer late -2+0x0/0xa8) from
[<c003918c>] (__irq_svc+0x4c/0x90)
<4>[ 4940.808503] Exception stack(0xe8379e08 to 0xe8379e50)
<4>[ 4940.808522] 9e00: 0000007e 00000008 000001ad
fa14c4b0 c078f324 0000000d
<4>[ 4940.808543] 9e20: 0000000d e89d5400 e849ba40 e89d5400 e8308450
e8379e8c c09a4578 e8379e50
<4>[ 4940.808560]_timer_enter_idle: timer late -23 ffffffff
<4>[ 4940.808572] r5:fa000000 r4:ffffffff
<4>[ 4940.808598] [<c03b879c>] (dpram_write+0x0/0x848) from [<c03b9000>]
(dpram_tty_write+0x1c/0x20)
<4>[ 4940.808623] [<c03b8fe4>] (dpram_tty_write+0x0/0x20) from [<c025f3a0>]
(n_tty_write+0x2d4/0x3c0)
<4>[ 4940.808652] [<c025f0cc>] (n_tty_write+0x0/0x3c0) from [<c025c280>]
(tty_write+0x190/0x230)
<4>[ 4940.808678] [<c025c0f0>] (tty_write+0x0/0x230) from [<c0123d90>]
(vfs_write+0xb8/0x144)
<4>[ 4940.808703] [<c0123cd8>] (vfs_write+0x0/0x144) from [<c0123ee0>]
(sys_write+0x44/0x70)
<4>[ 4940.808717] r8:0000000d r7:00000000 r6:00000000 r5:0001e7d0
r4:e849ba40
<4>[ 4940.808755] [<c0123e9c>] (sys_write+0x0/0x70) from [<c0039680>]
(ret_fast_syscall+0x0/0x30)
<4>[ 4940.808770] er_idle: timer late -2068640, re1300 r5:0000000d
r4:00000000
I feel something is wrong in the dpram_write function, but I couldn't
work out exactly what happened.
I even tried disassembling the function, and only traced it to a normal jump
instruction.
I know this error comes up when some function sleeps in atomic context.
Can anyone please help me fix this?
Thanks
--
With regards,
Sandeep Kumar Anantapalli,
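The conflict is actually visible in the backtrace above: dpram_write() sleeps on a semaphore via down(), yet one call chain reaches it from softirq context (net_rx_action -> ppp_async_push -> vs_write -> dpram_write), where sleeping is illegal. A sketch of that shape in kernel-style C (illustrative only; dpram_sem and the copy step are placeholders, not the real driver's code, and this does not compile outside a kernel tree):

```c
/* Sketch of the problematic pattern seen in the backtrace above. */
static struct semaphore dpram_sem;	/* placeholder for the real lock */

static int dpram_write_sketch(const unsigned char *buf, int len)
{
	down(&dpram_sem);	/* may sleep: a BUG when the caller is
				 * in softirq context, as in this trace */
	/* ... copy buf into the dual-port RAM window ... */
	up(&dpram_sem);
	return len;
}
```

A common fix is to make the path reachable from atomic context non-blocking: guard the hardware with a spinlock instead, or use down_trylock() there and defer contended writes to a workqueue running in process context.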
* Bug:scheduling while atomic..
2011-12-05 9:12 Bug:scheduling " sandeep kumar
@ 2011-12-05 19:03 ` Jonathan Neuschäfer
2011-12-05 19:16 ` Jonathan Neuschäfer
1 sibling, 0 replies; 13+ messages in thread
From: Jonathan Neuschäfer @ 2011-12-05 19:03 UTC (permalink / raw)
To: kernelnewbies
On Mon, Dec 05, 2011 at 02:42:20PM +0530, sandeep kumar wrote:
> Hi all,
> here is the dump i got in dmesg...
>
[snip]
>
> I felt, something is wrong in dpram_write function, But couldnt exactly
> know what happend.
> Even i tried disassembling the function, i traced to some normal jump
> instruction.
>
> I know this error comes when, some function sleeps in atomic context.
>
> Can anyone pls help me in fixing this..
Is it your code that's failing here?
Otherwise a good bug report on linux-kernel at vger.kernel.org is usually
appreciated.
Thanks,
Jonathan Neuschäfer
* Bug:scheduling while atomic..
2011-12-05 9:12 Bug:scheduling " sandeep kumar
2011-12-05 19:03 ` Jonathan Neuschäfer
@ 2011-12-05 19:16 ` Jonathan Neuschäfer
1 sibling, 0 replies; 13+ messages in thread
From: Jonathan Neuschäfer @ 2011-12-05 19:16 UTC (permalink / raw)
To: kernelnewbies
On Mon, Dec 05, 2011 at 02:42:20PM +0530, sandeep kumar wrote:
> Hi all,
> here is the dump i got in dmesg...
>
> <3>[ 4940.803872] BUG: scheduling while atomic: rild/144/0x00000908
> <4>[ 4940.803895] Modules linked in: mwlan_aarp(P) bthid
[...]
> I felt, something is wrong in dpram_write function, But couldnt exactly
> know what happend.
> Even i tried disassembling the function, i traced to some normal jump
> instruction.
You should consider informing the mwlan_aarp module's vendor/maintainer;
few kernel people like to debug someone else's proprietary code ;-).
HTH,
Jonathan Neuschäfer
* BUG: scheduling while atomic
@ 2011-05-10 5:51 sandeep kumar
2011-05-10 6:08 ` Dave Hylands
0 siblings, 1 reply; 13+ messages in thread
From: sandeep kumar @ 2011-05-10 5:51 UTC (permalink / raw)
To: kernelnewbies
Here are the logs I got when I collected a ramdump from my development
phone after a kernel panic.
The kernel version is 2.6.35.7, and the Android version is Gingerbread.
BUG: scheduling while atomic: pppd/675/0x00000203
<4>[ 85.745849] Modules linked in: dhd hotspot_event_monitoring bthid
cmc7xx_sdio
<4>[ 85.746032] [<c003f7dc>] (unwind_backtrace+0x0/0x168) from
[<c05aa808>] (dump_stack+0x18/0x1c)
<4>[ 85.746154] [<c05aa808>] (dump_stack+0x18/0x1c) from [<c00d86a4>]
(__schedule_bug+0x54/0x68)
<4>[ 85.746246] [<c00d86a4>] (__schedule_bug+0x54/0x68) from [<c05aab3c>]
(schedule+0x78/0x48c)
<4>[ 85.746337] [<c05aab3c>] (schedule+0x78/0x48c) from [<c05ab604>]
(schedule_timeout+0x24/0x23c)
<4>[ 85.746429] [<c05ab604>] (schedule_timeout+0x24/0x23c) from
[<c05aced0>] (__down+0x88/0xc4)
<4>[ 85.746520] [<c05aced0>] (__down+0x88/0xc4) from [<c00fd654>]
(down+0x44/0x84)
<4>[ 85.746643] [<c00fd654>] (down+0x44/0x84) from [<c0440bac>]
(dpram_write+0x64/0x884)
<4>[ 85.746734] [<c0440bac>] (dpram_write+0x64/0x884) from [<c04414f0>]
(vs_write+0x104/0x154)
<4>[ 85.746826] [<c04414f0>] (vs_write+0x104/0x154) from [<c034328c>]
(ppp_async_push+0x110/0x584)
<4>[ 85.746917] [<c034328c>] (ppp_async_push+0x110/0x584) from
[<c0343750>] (ppp_async_send+0x50/0x58)
<4>[ 85.747009] [<c0343750>] (ppp_async_send+0x50/0x58) from [<c0341360>]
(ppp_channel_push+0x60/0x100)
<4>[ 85.747100] [<c0341360>] (ppp_channel_push+0x60/0x100) from
[<c0341500>] (ppp_write+0x100/0x108)
<4>[ 85.747192] [<c0341500>] (ppp_write+0x100/0x108) from [<c015a428>]
(vfs_write+0xb8/0x164)
<4>[ 85.747283] [<c015a428>] (vfs_write+0xb8/0x164) from [<c015a598>]
(sys_write+0x44/0x70)
<4>[ 85.747375] [<c015a598>] (sys_write+0x44/0x70) from [<c00390c0>]
(ret_fast_syscall+0x0/0x30)
<1>[ 85.747436] Unable to handle kernel NULL pointer dereference at
virtual address 00000000
<1>[ 85.747497] pgd = db338000
<1>[ 85.747528] [00000000] *pgd=4a7d9031, *pte=00000000, *ppte=00000000
<0>[ 85.747619]I[ pppd: 675] Internal error: Oops: 817 [#1]
PREEMPT
<0>[ 85.747650]I[ pppd: 675] last sysfs file:
/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
<4>[ 85.747711]I[ pppd: 675] Modules linked in: dhd
hotspot_event_monitoring bthid cmc7xx_sdio
<4>[ 85.747833]I[ pppd: 675] CPU: 0 Tainted: G W
(2.6.35.7-perf #2)
<4>[ 85.747894]I[ pppd: 675] PC is at __schedule_bug+0x58/0x68
<4>[ 85.747955]I[ pppd: 675] LR is at unwind_frame+0xe8/0x628
<4>[ 85.748016]I[ pppd: 675] pc : [<c00d86a8>] lr :
[<c003f29c>] psr: a0000013
<4>[ 85.748077]I[ pppd: 675] sp : dab59d00 ip : dab59c28 fp :
dab59d14
<4>[ 85.748138]I[ pppd: 675] r10: c08546b0 r9 : 0000007d r8 :
00000428
<4>[ 85.748199]I[ pppd: 675] r7 : dcbca8c0 r6 : dab58000 r5 :
c080c748 r4 : 00000000
<4>[ 85.748260]I[ pppd: 675] r3 : 00000000 r2 : dab59fa8 r1 :
00000001 r0 : c00390c0
<4>[ 85.748321]I[ pppd: 675] Flags: NzCv IRQs on FIQs on Mode
SVC_32 ISA ARM Segment user
<4>[ 85.748382]I[ pppd: 675] Control: 10c57c7d Table: 4b338059
DAC: 00000015
[<c00d86a8>] (__schedule_bug+0x58/0x68) from [<c05aab3c>]
(schedule+0x78/0x48c)
<4>[ 85.762084]I[ pppd: 675] [<c05aab3c>] (schedule+0x78/0x48c)
from [<c05ab604>] (schedule_timeout+0x24/0x23c)
<4>[ 85.762176]I[ pppd: 675] [<c05ab604>]
(schedule_timeout+0x24/0x23c) from [<c05aced0>] (__down+0x88/0xc4)
<4>[ 85.762268]I[ pppd: 675] [<c05aced0>] (__down+0x88/0xc4)
from [<c00fd654>] (down+0x44/0x84)
<4>[ 85.762390]I[ pppd: 675] [<c00fd654>] (down+0x44/0x84) from
[<c0440bac>] (dpram_write+0x64/0x884)
<4>[ 85.762481]I[ pppd: 675] [<c0440bac>]
(dpram_write+0x64/0x884) from [<c04414f0>] (vs_write+0x104/0x154)
<4>[ 85.762603]I[ pppd: 675] [<c04414f0>] (vs_write+0x104/0x154)
from [<c034328c>] (ppp_async_push+0x110/0x584)
<4>[ 85.762695]I[ pppd: 675] [<c034328c>]
(ppp_async_push+0x110/0x584) from [<c0343750>] (ppp_async_send+0x50/0x58)
<4>[ 85.762786]I[ pppd: 675] [<c0343750>]
(ppp_async_send+0x50/0x58) from [<c0341360>] (ppp_channel_push+0x60/0x100)
<4>[ 85.762908]I[ pppd: 675] [<c0341360>]
(ppp_channel_push+0x60/0x100) from [<c0341500>] (ppp_write+0x100/0x108)
<4>[ 85.763000]I[ pppd: 675] [<c0341500>]
(ppp_write+0x100/0x108) from [<c015a428>] (vfs_write+0xb8/0x164)
<4>[ 85.763092]I[ pppd: 675] [<c015a428>] (vfs_write+0xb8/0x164)
from [<c015a598>] (sys_write+0x44/0x70)
<4>[ 85.763214]I[ pppd: 675] [<c015a598>] (sys_write+0x44/0x70)
from [<c00390c0>] (ret_fast_syscall+0x0/0x30)
<0>[ 85.763305]I[ pppd: 675] Code: ebfd894a ea000000 eb134852
e3a03000 (e5833000)
<4>[ 85.763458]I[ pppd: 675] ---[ end trace 1b75b31a2719ed20
]---
<0>[ 85.763519]I[ pppd: 675] Kernel panic - not syncing: Fatal
exception in interrupt
<4>[ 85.763641]I[ pppd: 675] [<c003f7dc>]
(unwind_backtrace+0x0/0x168) from [<c05aa808>] (dump_stack+0x18/0x1c)
<4>[ 85.763732]I[ pppd: 675] [<c05aa808>] (dump_stack+0x18/0x1c)
from [<c05aa884>] (panic+0x78/0x16c)
<4>[ 85.763824]I[ pppd: 675] [<c05aa884>] (panic+0x78/0x16c)
from [<c003d5bc>] (die+0x248/0x288)
<4>[ 85.763946]I[ pppd: 675] [<c003d5bc>] (die+0x248/0x288) from
[<c00437b8>] (__do_kernel_fault+0x6c/0x8c)
<4>[ 85.764038]I[ pppd: 675] [<c00437b8>]
(__do_kernel_fault+0x6c/0x8c) from [<c0043a70>] (do_page_fault+0x298/0x2b8)
<4>[ 85.764160]I[ pppd: 675] [<c0043a70>]
(do_page_fault+0x298/0x2b8) from [<c0038408>] (do_DataAbort+0x3c/0xa0)
<4>[ 85.764251]I[ pppd: 675] [<c0038408>]
(do_DataAbort+0x3c/0xa0) from [<c0038bec>] (__dabt_svc+0x4c/0x60)
<4>[ 85.764312]I[ pppd: 675] Exception stack(0xdab59cb8 to
0xdab59d00)
<4>[ 85.764373]I[ pppd: 675]
9ca0: c00390c0
00000001
<4>[ 85.764465]I[ pppd: 675] 9cc0: dab59fa8 00000000 00000000
c080c748 dab58000 dcbca8c0 00000428 0000007d
<4>[ 85.764587]I[ pppd: 675] 9ce0: c08546b0 dab59d14 dab59c28
dab59d00 c003f29c c00d86a8 a0000013 ffffffff
<4>[ 85.764678]I[ pppd: 675] [<c0038bec>] (__dabt_svc+0x4c/0x60)
from [<c00d86a8>] (__schedule_bug+0x58/0x68)
<4>[ 85.764801]I[ pppd: 675] [<c00d86a8>]
(__schedule_bug+0x58/0x68) from [<c05aab3c>] (schedule+0x78/0x48c)
<4>[ 85.764892]I[ pppd: 675] [<c05aab3c>] (schedule+0x78/0x48c)
from [<c05ab604>] (schedule_timeout+0x24/0x23c)
<4>[ 85.765014]I[ pppd: 675] [<c05ab604>]
(schedule_timeout+0x24/0x23c) from [<c05aced0>] (__down+0x88/0xc4)
<4>[ 85.765106]I[ pppd: 675] [<c05aced0>] (__down+0x88/0xc4)
from [<c00fd654>] (down+0x44/0x84)
<4>[ 85.765197]I[ pppd: 675] [<c00fd654>] (down+0x44/0x84) from
[<c0440bac>] (dpram_write+0x64/0x884)
<4>[ 85.765319]I[ pppd: 675] [<c0440bac>]
(dpram_write+0x64/0x884) from [<c04414f0>] (vs_write+0x104/0x154)
<4>[ 85.765411]I[ pppd: 675] [<c04414f0>] (vs_write+0x104/0x154)
from [<c034328c>] (ppp_async_push+0x110/0x584)
<4>[ 85.765502]I[ pppd: 675] [<c034328c>]
(ppp_async_push+0x110/0x584) from [<c0343750>] (ppp_async_send+0x50/0x58)
<4>[ 85.765624]I[ pppd: 675] [<c0343750>]
(ppp_async_send+0x50/0x58) from [<c0341360>] (ppp_channel_push+0x60/0x100)
<4>[ 85.765716]I[ pppd: 675] [<c0341360>]
(ppp_channel_push+0x60/0x100) from [<c0341500>] (ppp_write+0x100/0x108)
<4>[ 85.765838]I[ pppd: 675] [<c0341500>]
(ppp_write+0x100/0x108) from [<c015a428>] (vfs_write+0xb8/0x164)
<4>[ 85.765930]I[ pppd: 675] [<c015a428>] (vfs_write+0xb8/0x164)
from [<c015a598>] (sys_write+0x44/0x70)
<4>[ 85.766021]I[ pppd: 675] [<c015a598>] (sys_write+0x44/0x70)
from [<c00390c0>] (ret_fast_syscall+0x0/0x30)
<0>[ 86.776214]I[ pppd: 675] (kernel_sec_save_final_context)
Final context was saved before the system reset.
<0>[ 86.776306]I[ pppd: 675] (kernel_sec_set_upload_cause) :
upload_cause set c8
<0>[ 86.776367]I[ pppd: 675] (kernel_sec_reset) BUILD_INFO:
HWREV: b Date:May 2 2011 Time:22:27:11
<0>[ 86.776428]I[ pppd: 675] (kernel_sec_reset) Kernel panic.
The system will be reset !!
In these logs, which error message is causing the kernel panic?
BUG: scheduling while atomic: pppd/675/0x00000203
or
Unable to handle kernel NULL pointer dereference at virtual address 00000000
or
Kernel panic - not syncing: Fatal exception in interrupt
Please help me here.
Thanks in advance.
* BUG: scheduling while atomic
2011-05-10 5:51 BUG: scheduling " sandeep kumar
@ 2011-05-10 6:08 ` Dave Hylands
0 siblings, 0 replies; 13+ messages in thread
From: Dave Hylands @ 2011-05-10 6:08 UTC (permalink / raw)
To: kernelnewbies
Hi Sandeep,
Sending to the list this time...
On Mon, May 9, 2011 at 10:51 PM, sandeep kumar
<coolsandyforyou@gmail.com> wrote:
> Here is the following logs i got when i collected ramdump from my
> development mobile after going to kernel panic
> Kernel version is 2.6.35.7, Android version GingerBread.
>
> BUG: scheduling while atomic: pppd/675/0x00000203
> <4>[   85.745849] Modules linked in: dhd hotspot_event_monitoring bthid
> cmc7xx_sdio
> <4>[   85.746032] [<c003f7dc>] (unwind_backtrace+0x0/0x168) from
> [<c05aa808>] (dump_stack+0x18/0x1c)
> <4>[   85.746154] [<c05aa808>] (dump_stack+0x18/0x1c) from [<c00d86a4>]
> (__schedule_bug+0x54/0x68)
> <4>[   85.746246] [<c00d86a4>] (__schedule_bug+0x54/0x68) from [<c05aab3c>]
> (schedule+0x78/0x48c)
> <4>[   85.746337] [<c05aab3c>] (schedule+0x78/0x48c) from [<c05ab604>]
> (schedule_timeout+0x24/0x23c)
> <4>[   85.746429] [<c05ab604>] (schedule_timeout+0x24/0x23c) from
> [<c05aced0>] (__down+0x88/0xc4)
> <4>[   85.746520] [<c05aced0>] (__down+0x88/0xc4) from [<c00fd654>]
> (down+0x44/0x84)
> <4>[   85.746643] [<c00fd654>] (down+0x44/0x84) from [<c0440bac>]
> (dpram_write+0x64/0x884)
> <4>[   85.746734] [<c0440bac>] (dpram_write+0x64/0x884) from [<c04414f0>]
> (vs_write+0x104/0x154)
> <4>[   85.746826] [<c04414f0>] (vs_write+0x104/0x154) from [<c034328c>]
> (ppp_async_push+0x110/0x584)
> <4>[   85.746917] [<c034328c>] (ppp_async_push+0x110/0x584) from
> [<c0343750>] (ppp_async_send+0x50/0x58)
> <4>[   85.747009] [<c0343750>] (ppp_async_send+0x50/0x58) from [<c0341360>]
> (ppp_channel_push+0x60/0x100)
> <4>[   85.747100] [<c0341360>] (ppp_channel_push+0x60/0x100) from
> [<c0341500>] (ppp_write+0x100/0x108)
Looking at the source, ppp_channel_push() calls spin_lock_bh(), which
enters atomic context. dpram_write() then tries to call down() from
within that context, which isn't legal: down() may sleep, and sleeping
is forbidden while a spinlock is held.
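The illegal pattern can be sketched in kernel-style C. All names below
(chan_lock, dpram_sem, the function itself) are illustrative stand-ins
for the real driver code, not quotes from it:

```c
#include <linux/spinlock.h>
#include <linux/semaphore.h>

static DEFINE_SPINLOCK(chan_lock);
static struct semaphore dpram_sem;  /* sema_init(&dpram_sem, 1) at init time */

/* Illustrative sketch of the call chain in the trace above. */
static void push_and_write_sketch(void)
{
	spin_lock_bh(&chan_lock);   /* enters atomic context: preemption and
	                             * bottom halves are disabled           */

	/* ... the push path eventually reaches the write routine ... */
	down(&dpram_sem);           /* BUG: down() may sleep, and sleeping is
	                             * forbidden while a spinlock is held   */
	/* ... copy data into dual-port RAM ... */
	up(&dpram_sem);

	spin_unlock_bh(&chan_lock);
}
```

Typical fixes are to use down_trylock() (which never sleeps) on this
path and fall back gracefully when the semaphore is contended, or to
restructure the code so the semaphore is taken outside the spinlock.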
--
Dave Hylands
Shuswap, BC, Canada
http://www.davehylands.com
end of thread, other threads:[~2012-04-18 15:40 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <CABOM9ZqSazS-NkD980f6sUyy=hk1aLVY+Vjwcxs3mGybvbkgaQ@mail.gmail.com>
2012-04-18 6:44 ` BUG: scheduling while atomic Arun KS
2012-04-18 7:31 ` Dave Hylands
2012-04-18 8:08 ` Arun KS
2012-04-18 8:14 ` Dave Hylands
2012-04-18 8:27 ` Srivatsa S. Bhat
2012-04-18 8:40 ` Arun KS
2012-04-18 8:58 ` Arun KS
2012-04-18 15:40 ` Dave Hylands
2011-12-05 9:12 Bug:scheduling " sandeep kumar
2011-12-05 19:03 ` Jonathan Neuschäfer
2011-12-05 19:16 ` Jonathan Neuschäfer
-- strict thread matches above, loose matches on Subject: below --
2011-05-10 5:51 BUG: scheduling " sandeep kumar
2011-05-10 6:08 ` Dave Hylands