* v2.6.27-rc7: x86: #GP on panic?
From: Vegard Nossum @ 2008-09-24 19:09 UTC (permalink / raw)
To: x86; +Cc: linux-kernel
Hi,
With 2.6.27-rc7 on qemu-x86_64, it seems that panic will trigger a
General Protection Fault. I haven't seen it before.
[ 4.499793] VFS: Cannot open root device "hda1" or unknown-block(2,0)
[ 4.502747] Please append a correct "root=" boot option; here are the available partitions:
[ 4.506641] 0800 2048000 sda driver: sd
[ 4.508987] 0801 1895638 sda1
[ 4.511088] 0802 1 sda2
[ 4.512858] 0810 2048 sdb driver: sd
[ 4.514915] 0b00 1048575 sr0 driver: sr
[ 4.519074] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
[ 4.523477] general protection fault: fff2 [1] SMP
[ 4.523641] CPU 0
[ 4.523641] Modules linked in:
[ 4.523641] Pid: 1, comm: swapper Tainted: G W 2.6.27-rc7 #1
[ 4.523641] RIP: 0010:[<ffffffff81019d27>] [<ffffffff81019d27>] native_smp_send_stop+0x29/0x2d
[ 4.523641] RSP: 0018:ffff880007867d70 EFLAGS: 00000286
[ 4.523641] RAX: 00000000000000ff RBX: 0000000000000286 RCX: 0000000000000000
[ 4.523641] RDX: 0000000000000005 RSI: ffffffff81019ce1 RDI: 0000000000000000
[ 4.523641] RBP: ffff880007867d80 R08: 0000000000000000 R09: 0000000000002800
[ 4.523641] R10: 0000000000002800 R11: ffff880001020a40 R12: ffff88000705b018
[ 4.523641] R13: ffff88000705b000 R14: 0000000000008001 R15: ffffffff8159d550
[ 4.523641] FS: 0000000000000000(0000) GS:ffffffff816fae00(0000) knlGS:0000000000000000
[ 4.523641] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[ 4.523641] CR2: 0000000000000000 CR3: 0000000000201000 CR4: 00000000000006a0
[ 4.523641] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 4.523641] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 0000000000000000
[ 4.523641] Process swapper (pid: 1, threadinfo ffff880007866000, task ffff880007868000)
[ 4.523641] Stack: 0000000000005131 ffffffff8159d52d ffff880007867e70 ffffffff810344a4
[ 4.523641] 0000003000000010 ffff880007867e80 ffff880007867db0 ffff880007867e80
[ 4.523641] ffff880007867dd0 ffff880007867e80 ffff88000789d360 00000000000050d0
[ 4.523641] Call Trace:
[ 4.523641] [<ffffffff810344a4>] panic+0xe8/0x193
[ 4.523641] [<ffffffff8118efaf>] ? kobject_put+0x44/0x49
[ 4.523641] [<ffffffff812177de>] ? put_device+0x15/0x17
[ 4.523641] [<ffffffff8121ad99>] ? class_for_each_device+0xfe/0x10e
[ 4.523641] [<ffffffff81715059>] mount_block_root+0x1ee/0x205
[ 4.523641] [<ffffffff81009417>] ? name_to_dev_t+0x1bb/0xda4
[ 4.523641] [<ffffffff817152cd>] mount_root+0xe5/0xea
[ 4.523641] [<ffffffff81715449>] prepare_namespace+0x177/0x1a4
[ 4.523641] [<ffffffff810aaf2e>] ? putname+0x37/0x39
[ 4.523641] [<ffffffff81714d0f>] kernel_init+0x16a/0x178
[ 4.523641] [<ffffffff8102be33>] ? schedule_tail+0x24/0x5d
[ 4.523641] [<ffffffff8100cf79>] child_rip+0xa/0x11
[ 4.523641] [<ffffffff811b92f4>] ? acpi_ds_init_one_object+0x0/0x88
[ 4.523641] [<ffffffff81714ba5>] ? kernel_init+0x0/0x178
[ 4.523641] [<ffffffff8100cf6f>] ? child_rip+0x0/0x11
[ 4.523641]
[ 4.523641]
[ 4.523641] Code: eb fd 55 48 89 e5 53 51 83 3d 25 e8 78 00 00 75 1a 31 d2 31 f6 48 c7 c7 e1 9c 01 81 e8 f7 a4 03 00 9c 5b fa e8 94 09 00 00 53 9d <5a> 5b c9 c3 55 31 c0 48 89 e5 89 04 25 b0 c0 5f ff 65 83 04 25
[ 4.523641] RIP [<ffffffff81019d27>] native_smp_send_stop+0x29/0x2d
[ 4.523641] RSP <ffff880007867d70>
[ 4.523641] ---[ end trace 4eaa2a86a8e2da22 ]---
[ 4.523641] swapper used greatest stack depth: 3664 bytes left
[ 4.523641] Kernel panic - not syncing: Attempted to kill init!
Maybe this will not wrap. I can at least hope.
ffffffff81019cfe <native_smp_send_stop>:
ffffffff81019cfe:  55                     push   %rbp
ffffffff81019cff:  48 89 e5               mov    %rsp,%rbp
ffffffff81019d02:  53                     push   %rbx
ffffffff81019d03:  51                     push   %rcx
ffffffff81019d04:  83 3d 25 e8 78 00 00   cmpl   $0x0,7923749(%rip)   # ffffffff817a8530 <reboot_force>
ffffffff81019d0b:  75 1a                  jne    ffffffff81019d27 <native_smp_send_stop+0x29>
ffffffff81019d0d:  31 d2                  xor    %edx,%edx
ffffffff81019d0f:  31 f6                  xor    %esi,%esi
ffffffff81019d11:  48 c7 c7 e1 9c 01 81   mov    $0xffffffff81019ce1,%rdi
ffffffff81019d18:  e8 f7 a4 03 00         callq  ffffffff81054214 <smp_call_function>
ffffffff81019d1d:  9c                     pushfq
ffffffff81019d1e:  5b                     pop    %rbx
ffffffff81019d1f:  fa                     cli
ffffffff81019d20:  e8 94 09 00 00         callq  ffffffff8101a6b9 <disable_local_APIC>
ffffffff81019d25:  53                     push   %rbx
ffffffff81019d26:  9d                     popfq
ffffffff81019d27:  5a                     pop    %rdx
ffffffff81019d28:  5b                     pop    %rbx
ffffffff81019d29:  c9                     leaveq
ffffffff81019d2a:  c3                     retq
Vegard
--
"The animistic metaphor of the bug that maliciously sneaked in while
the programmer was not looking is intellectually dishonest as it
disguises that the error is the programmer's own creation."
-- E. W. Dijkstra, EWD1036
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Ingo Molnar @ 2008-09-25 8:04 UTC (permalink / raw)
To: Vegard Nossum; +Cc: x86, linux-kernel, H. Peter Anvin, Thomas Gleixner
* Vegard Nossum <vegard.nossum@gmail.com> wrote:
> Hi,
>
> With 2.6.27-rc7 on qemu-x86_64, it seems that panic will trigger a
> General Protection Fault. I haven't seen it before.
> [ 4.523641] Code: eb fd 55 48 89 e5 53 51 83 3d 25 e8 78 00 00 75
> 1a 31 d2 31 f6 48 c7 c7 e1 9c 01 81 e8 f7 a4 03 00 9c 5b fa e8 94 09
> 00 00 53 9d <5a> 5b c9 c3 55 31 c0 48 89 e5 89 04 25 b0 c0 5f ff 65 83
> 04 25
hm, 0x5a is a simple pop %rdx. A #GP there means the stack segment is
bust?
hm:
> ffffffff8101a6b9 <disable_local_APIC>
> ffffffff81019d25: 53 push %rbx
> ffffffff81019d26: 9d popfq
> ffffffff81019d27: 5a pop %rdx
so it's preceded by a popfq and on the next instruction we #GP.
but the stack and flags state looks good:
[ 4.523641] RSP: 0018:ffff880007867d70 EFLAGS: 00000286
weird.
Ingo
* Re: v2.6.27-rc7: x86: #GP on panic?
From: H. Peter Anvin @ 2008-09-25 8:53 UTC (permalink / raw)
To: Ingo Molnar; +Cc: Vegard Nossum, x86, linux-kernel, Thomas Gleixner
Ingo Molnar wrote:
> * Vegard Nossum <vegard.nossum@gmail.com> wrote:
>
>> Hi,
>>
>> With 2.6.27-rc7 on qemu-x86_64, it seems that panic will trigger a
>> General Protection Fault. I haven't seen it before.
>
>> [ 4.523641] Code: eb fd 55 48 89 e5 53 51 83 3d 25 e8 78 00 00 75
>> 1a 31 d2 31 f6 48 c7 c7 e1 9c 01 81 e8 f7 a4 03 00 9c 5b fa e8 94 09
>> 00 00 53 9d <5a> 5b c9 c3 55 31 c0 48 89 e5 89 04 25 b0 c0 5f ff 65 83
>> 04 25
>
> hm, 0x5a is a simple pop %rdx. A #GP there means the stack segment is
> bust?
>
No, that would be #SS (and segments don't really exist in 64-bit mode
anyway.) In 32-bit mode it could mean a code segment overrun.
*However*...
[ 4.523477] general protection fault: fff2 [1] SMP
There is an error code attached to the #GP, which is supposed to mean
that somehow a segment selector was involved. This doesn't look like a
very valid segment selector at all.
> hm:
>
>> ffffffff8101a6b9 <disable_local_APIC>
>> ffffffff81019d25: 53 push %rbx
>> ffffffff81019d26: 9d popfq
>> ffffffff81019d27: 5a pop %rdx
>
> so it's preceded by a popfq and on the next instruction we #GP.
>
> but the stack and flags state looks good:
>
> [ 4.523641] RSP: 0018:ffff880007867d70 EFLAGS: 00000286
>
My guess is that the popfq enables interrupts, and we try to take an
interrupt through an IDT entry which isn't set up correctly.
-hpa
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Vegard Nossum @ 2008-09-25 14:07 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
On Thu, Sep 25, 2008 at 10:53 AM, H. Peter Anvin <hpa@zytor.com> wrote:
> Ingo Molnar wrote:
> [...]
> My guess is that the popfq enables interrupts, and we try to take an
> interrupt through an IDT entry which isn't set up correctly.
I'm sorry for the false alarm. I discovered that it did not happen on a clean kernel; my kernel had this patch applied:
diff --git a/arch/x86/kernel/cpu/common_64.c b/arch/x86/kernel/cpu/common_64.c
index a11f5d4..abf5bc8 100644
--- a/arch/x86/kernel/cpu/common_64.c
+++ b/arch/x86/kernel/cpu/common_64.c
@@ -261,6 +261,8 @@ void __init early_cpu_init(void)
 		cpu_devs[cvdev->vendor] = cvdev->cpu_dev;
 	early_cpu_support_print();
 	early_identify_cpu(&boot_cpu_data);
+
+	setup_clear_cpu_cap(X86_FEATURE_PSE);
 }
/* Do some early cpuid on the boot CPU to get some parameter that are
:-(
Vegard
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Vegard Nossum @ 2008-09-25 15:20 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
On Thu, Sep 25, 2008 at 4:07 PM, Vegard Nossum <vegard.nossum@gmail.com> wrote:
> On Thu, Sep 25, 2008 at 10:53 AM, H. Peter Anvin <hpa@zytor.com> wrote:
>> Ingo Molnar wrote:
>> [...]
>> My guess is that the popfq enables interrupts, and we try to take an
>> interrupt through an IDT entry which isn't set up correctly.
>
> I'm sorry for the false alarm. I discovered that it did not happen on
> a clean kernel. My kernel was using this patch.
No, I was wrong! It *does* happen for vanilla as well, but it doesn't
happen reliably.
[ 4.043370] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
[ 4.048765] general protection fault: fff2 [1] SMP
[ 4.048765] CPU 0
[ 4.048765] Modules linked in:
[ 4.048765] Pid: 1, comm: swapper Tainted: G W 2.6.27-rc7 #8
[ 4.048765] RIP: 0010:[<ffffffff81019d27>] [<ffffffff81019d27>] native_smp_send_stop+0x29/0x2d
[ 4.048765] RSP: 0018:ffff880007867d70 EFLAGS: 00000286
[ 4.048765] RAX: 00000000000000ff RBX: 0000000000000286 RCX: 0000000000000000
[ 4.048765] RDX: 0000000000000005 RSI: ffffffff81019ce1 RDI: 0000000000000000
[ 4.048765] RBP: ffff880007867d80 R08: 0000000000000000 R09: ffff880087867bff
[ 4.048765] R10: ffff880087867bff R11: 000000000000000a R12: ffff88000707b018
[ 4.048765] R13: ffff88000707b000 R14: 0000000000008001 R15: ffffffff8159d550
[ 4.048765] FS: 0000000000000000(0000) GS:ffffffff816fae00(0000) knlGS:0000000000000000
[ 4.048765] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[ 4.048765] CR2: 0000000000000000 CR3: 0000000000201000 CR4: 00000000000006a0
[ 4.048765] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 4.048765] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 0000000000000000
[ 4.048765] Process swapper (pid: 1, threadinfo ffff880007866000, task ffff880007868000)
[ 4.048765] Stack: 000000000000506f ffffffff8159d52d ffff880007867e70 ffffffff81034454
[ 4.048765] 0000003000000010 ffff880007867e80 ffff880007867db0 ffff880007867e80
[ 4.048765] ffff880007867dd0 ffff880007867e80 ffff880007899360 000000000000500e
[ 4.048765] Call Trace:
[ 4.048765] [<ffffffff81034454>] panic+0xe8/0x193
[ 4.048765] [<ffffffff8118ef5f>] ? kobject_put+0x44/0x49
[ 4.048765] [<ffffffff8121778e>] ? put_device+0x15/0x17
[ 4.048765] [<ffffffff8121ad49>] ? class_for_each_device+0xfe/0x10e
[ 4.048765] [<ffffffff81715059>] mount_block_root+0x1ee/0x205
[ 4.048765] [<ffffffff81009417>] ? name_to_dev_t+0x1bb/0xda4
[ 4.048765] [<ffffffff817152cd>] mount_root+0xe5/0xea
[ 4.048765] [<ffffffff81715449>] prepare_namespace+0x177/0x1a4
[ 4.048765] [<ffffffff810aaede>] ? putname+0x37/0x39
[ 4.048765] [<ffffffff81714d0f>] kernel_init+0x16a/0x178
[ 4.048765] [<ffffffff8102bde3>] ? schedule_tail+0x24/0x5d
[ 4.048765] [<ffffffff8100cf79>] child_rip+0xa/0x11
[ 4.048765] [<ffffffff811b92a4>] ? acpi_ds_init_one_object+0x0/0x88
[ 4.048765] [<ffffffff81714ba5>] ? kernel_init+0x0/0x178
[ 4.048765] [<ffffffff8100cf6f>] ? child_rip+0x0/0x11
[ 4.048765]
[ 4.048765]
[ 4.048765] Code: eb fd 55 48 89 e5 53 51 83 3d 25 e8 78 00 00 75 1a 31 d2 31 f6 48 c7 c7 e1 9c 01 81 e8 a7 a4 03 00 9c 5b fa e8 94 09 00 00 53 9d <5a> 5b c9 c3 55 31 c0 48 89 e5 89 04 25 b0 c0 5f ff 65 83 04 25
[ 4.048765] RIP [<ffffffff81019d27>] native_smp_send_stop+0x29/0x2d
[ 4.048765] RSP <ffff880007867d70>
[ 4.048765] ---[ end trace 4eaa2a86a8e2da22 ]---
This was after 49 successful boots (qemu running the same clean kernel
in a loop over and over).
Could be a qemu thing, though.
Vegard
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Vegard Nossum @ 2008-09-25 20:46 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
On Thu, Sep 25, 2008 at 5:20 PM, Vegard Nossum <vegard.nossum@gmail.com> wrote:
> No, I was wrong! It *does* happen for vanilla as well, but it doesn't
> happen reliably.
>
> [ 4.043370] Kernel panic - not syncing: VFS: Unable to mount root
> fs on unknown-block(2,0)
> [ 4.048765] general protection fault: fff2 [1] SMP
> [...]
> [ 4.048765] RIP [<ffffffff81019d27>] native_smp_send_stop+0x29/0x2d
> [ 4.048765] ---[ end trace 4eaa2a86a8e2da22 ]---
>
> This was after 49 successful boots (qemu running the same clean kernel
> in a loop over and over).
>
> Could be a qemu thing, though.
Keeping it going also found this bootup failure:
[ 0.321423] Freeing SMP alternatives: 39k freed
[ 0.323950] ACPI: Core revision 20080609
[ 0.360390] divide error: 0000 [1] SMP
[ 0.360944] CPU 0
[ 0.360944] Modules linked in:
[ 0.360944] Pid: 1, comm: swapper Tainted: G W 2.6.27-rc7 #9
[ 0.360944] RIP: 0010:[<ffffffff81039193>] [<ffffffff81039193>] __do_softirq+0x49/0xc5
[ 0.360944] RSP: 0018:ffffffff81792f00 EFLAGS: 00000206
[ 0.360944] RAX: ffff880007867fd8 RBX: 0000000000000042 RCX: ffff880007867d90
[ 0.360944] RDX: ffff880007867d90 RSI: 0000000000000086 RDI: ffffffff817ac208
[ 0.360944] RBP: ffffffff81792f20 R08: ffff88000100d0b0 R09: ffff88000100d040
[ 0.360944] R10: ffff88000100d040 R11: ffffffff81646b40 R12: ffffffff816ec080
[ 0.360944] R13: 000000000000000a R14: 0000000000000000 R15: 0000000000000000
[ 0.360944] FS: 0000000000000000(0000) GS:ffffffff816fae00(0000) knlGS:0000000000000000
[ 0.360944] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
[ 0.360944] CR2: 0000000000000000 CR3: 0000000000201000 CR4: 00000000000006a0
[ 0.360944] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 0.360944] DR3: 0000000000000000 DR6: 0000000000000000 DR7: 0000000000000000
[ 0.360944] Process swapper (pid: 1, threadinfo ffff880007866000, task ffff880007868000)
[ 0.360944] Stack: 0000000000000046 0000000000000000 ffffffff817893e0 0000000000000030
[ 0.360944] ffffffff81792f38 ffffffff8100d24c ffffffff81792f38 ffffffff81792f58
[ 0.360944] ffffffff8100eb81 ffff880007867ce8 0000000000000000 ffffffff81792f68
[ 0.360944] Call Trace:
[ 0.360944] <IRQ> [<ffffffff8100d24c>] call_softirq+0x1c/0x28
[ 0.360944] [<ffffffff8100eb81>] do_softirq+0x32/0x89
[ 0.360944] [<ffffffff810392ad>] irq_exit+0x3f/0x82
[ 0.360944] [<ffffffff8100e9b3>] do_IRQ+0x147/0x166
[ 0.360944] [<ffffffff8100c5a1>] ret_from_intr+0x0/0xb
[ 0.360944] <EOI> [<ffffffff8107013f>] ? noop+0x0/0x6
[ 0.360944] [<ffffffff8107121b>] ? default_disable+0x0/0x6
[ 0.360944] [<ffffffff8145ba2c>] ? _spin_unlock_irqrestore+0x8/0xa
[ 0.360944] [<ffffffff8107101f>] ? set_irq_chip+0x79/0x84
[ 0.360944] [<ffffffff810715fa>] ? handle_edge_irq+0x0/0x12f
[ 0.360944] [<ffffffff810717d2>] ? set_irq_chip_and_handler_name+0x19/0x33
[ 0.360944] [<ffffffff8101b8e4>] ? setup_IO_APIC_irq+0x18b/0x1bb
[ 0.360944] [<ffffffff8101ad4b>] ? ioapic_read_entry+0x71/0x84
[ 0.360944] [<ffffffff817237bd>] ? setup_IO_APIC+0x158/0x66b
[ 0.360944] [<ffffffff8101b05b>] ? clear_IO_APIC+0x31/0x41
[ 0.360944] [<ffffffff817235d3>] ? enable_IO_APIC+0x165/0x170
[ 0.360944] [<ffffffff81721171>] ? native_smp_prepare_cpus+0x25a/0x2bb
[ 0.360944] [<ffffffff81714bfe>] ? kernel_init+0x59/0x178
[ 0.360944] [<ffffffff8102bde3>] ? schedule_tail+0x24/0x5d
[ 0.360944] [<ffffffff8100cf79>] ? child_rip+0xa/0x11
[ 0.360944] [<ffffffff811b92a4>] ? acpi_ds_init_one_object+0x0/0x88
[ 0.360944] [<ffffffff81714ba5>] ? kernel_init+0x0/0x178
[ 0.360944] [<ffffffff8100cf6f>] ? child_rip+0x0/0x11
[ 0.360944]
[ 0.360944]
[ 0.360944] Code: 34 00 00 00 81 80 48 e0 ff ff 00 01 00 00 65 44 8b 34 25 24 00 00 00 65 c7 04 25 34 00 00 00 00 00 00 00 fb 49 c7 c4 80 c0 6e 81
[ 0.360944] RIP [<ffffffff81039193>] __do_softirq+0x49/0xc5
[ 0.360944] RSP <ffffffff81792f00>
But I don't see how the divide error could occur here:
ffffffff8103918b: fb sti
ffffffff8103918c: 49 c7 c4 80 c0 6e 81 mov $0xffffffff816ec080,%r12
ffffffff81039193: f6 c3 01 test $0x1,%bl
ffffffff81039196: 74 27 je ffffffff810391bf <__do_so
ffffffff81039198: 4c 89 e7 mov %r12,%rdi
ffffffff8103919b: 41 ff 14 24 callq *(%r12)
Seems like an external interrupt happened and was delivered after the sti?
Hm. I guess it smells like a qemu bug, since it's rather easily reproducible here and it sounds strange that nobody else saw it. This is qemu 0.9.1.
Vegard
* Re: v2.6.27-rc7: x86: #GP on panic?
From: H. Peter Anvin @ 2008-09-25 20:49 UTC (permalink / raw)
To: Vegard Nossum; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
Vegard Nossum wrote:
>
> But I don't see how the divide error could occur here:
>
> ffffffff8103918b: fb sti
> ffffffff8103918c: 49 c7 c4 80 c0 6e 81 mov $0xffffffff816ec080,%r12
> ffffffff81039193: f6 c3 01 test $0x1,%bl
> ffffffff81039196: 74 27 je ffffffff810391bf <__do_so
> ffffffff81039198: 4c 89 e7 mov %r12,%rdi
> ffffffff8103919b: 41 ff 14 24 callq *(%r12)
>
> Seems like an external interrupt happened and was delivered after the sti?
>
> Hm. I guess it smells like a qemu bug since it's rather easily
> reproducible here and sounds strange that nobody else saw it. Is qemu
> 0.9.1.
>
Yes, but there shouldn't be any external interrupts that could turn into
a divide error. It really smells like a Qemu problem -- possibly even
a Qemu miscompile -- to me.
Does it reproduce in KVM?
-hpa
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Vegard Nossum @ 2008-09-25 21:02 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
On Thu, Sep 25, 2008 at 10:49 PM, H. Peter Anvin <hpa@zytor.com> wrote:
>> Seems like an external interrupt happened and was delivered after the sti?
>>
>> Hm. I guess it smells like a qemu bug since it's rather easily
>> reproducible here and sounds strange that nobody else saw it. Is qemu
>> 0.9.1.
>>
>
> Yes, but there shouldn't be any external interrupts that could turn into a
> divide error. It really smells like a Qemu problem -- possibly even a Qemu
> miscompile -- to me.
>
> Does it reproduce in KVM?
I have no computer that can do KVM, sorry :-(
Stack trace contains IO_APIC functions, so it seems that maybe the
emulated IOAPIC is trying to (erroneously) deliver an int 0 (for some
reason)? But I don't know, that's just speculation which can be done
better by others, so I will stop now :-)
Vegard
* Re: v2.6.27-rc7: x86: #GP on panic?
From: H. Peter Anvin @ 2008-09-25 21:53 UTC (permalink / raw)
To: Vegard Nossum; +Cc: Ingo Molnar, x86, linux-kernel, Thomas Gleixner
Vegard Nossum wrote:
> On Thu, Sep 25, 2008 at 10:49 PM, H. Peter Anvin <hpa@zytor.com> wrote:
>>> Seems like an external interrupt happened and was delivered after the sti?
>>>
>>> Hm. I guess it smells like a qemu bug since it's rather easily
>>> reproducible here and sounds strange that nobody else saw it. Is qemu
>>> 0.9.1.
>>>
>> Yes, but there shouldn't be any external interrupts that could turn into a
>> divide error. It really smells like a Qemu problem -- possibly even a Qemu
>> miscompile -- to me.
>>
>> Does it reproduce in KVM?
>
> I have no computer that can do KVM, sorry :-(
>
> Stack trace contains IO_APIC functions, so it seems that maybe the
> emulated IOAPIC is trying to (erroneously) deliver an int 0 (for some
> reason)? But I don't know, that's just speculation which can be done
> better by others, so I will stop now :-)
>
I suspect it's a problem in Qemu's IOAPIC model, but it's hard to know
for sure.
-hpa
* Re: v2.6.27-rc7: x86: #GP on panic?
From: Ingo Molnar @ 2008-09-27 18:43 UTC (permalink / raw)
To: H. Peter Anvin; +Cc: Vegard Nossum, x86, linux-kernel, Thomas Gleixner
* H. Peter Anvin <hpa@zytor.com> wrote:
> Vegard Nossum wrote:
> [...]
> I suspect it's a problem in Qemu's IOAPIC model, but it's hard to know
> for sure.
yes - it smells like it tries to deliver vector 0, after the panic code
has deinitialized the lapic / ioapic.
Ingo