* gdb and kvm status?
@ 2008-09-05 20:28 Tim Pepper
2008-09-05 21:22 ` Mohammed Gamal
2008-09-08 6:48 ` Jan Kiszka
0 siblings, 2 replies; 4+ messages in thread
From: Tim Pepper @ 2008-09-05 20:28 UTC (permalink / raw)
To: kvm
(I'm not on the list, so please cc: replies; I'm also new to kvm and qemu,
so forgive my ignorance of things)
I was recently trying to play with a kvm guest and gdb and am experiencing
bad behavior. I'm curious whether this is expected to work currently, or
whether there's newer code upstream that's better. I see a couple of
mailing list posts around this topic in the last few months, but haven't
yet gone pouring through the code to see how much of the stuff kicked
around actually went into an upstream, and at what versions.
For background, I've got an Ubuntu 8.04.1 guest image and a Fedora 9 host, and the guest
is being booted directly into a 2.6.27-rc4 kernel, e.g.:
qemu-kvm -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
-kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
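(For anyone following along: -s makes qemu listen for a gdb connection on
TCP port 1234 and -S pauses the guest at startup. A typical attach session
against the same vmlinux might look like the following sketch, assuming the
kernel was built with debug info:)

```shell
# Attach gdb to the gdbstub that qemu's -s switch opens (TCP port 1234 by
# default), using the same vmlinux image that was passed to -kernel.
gdb ./vmlinux-2.6.27-rc4 \
    -ex 'target remote localhost:1234' \
    -ex 'hbreak start_kernel' \
    -ex 'continue'
```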
If I don't run the guest UP I miss breakpoints, which it sounds like is
expected. But worse, I'm getting crashes with kvm. Things are fine
(albeit slower) if run without kvm, e.g.:
qemu-system-x86_64 -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
-kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
The host specifics are standard current fedora/livna:
kernel-2.6.25.14-108.fc9.i686
qemu-0.9.1-6.fc9.i386
kmod-kqemu-1.3.0-0.38.lvn9.i686
kmod-kqemu-2.6.25.14-108.fc9.i686-1.3.0-0.38.lvn9.i686
kqemu-1.3.0-0.7.pre11.lvn9.noarch
And the host hardware is Intel core duo.
Output from UP is:
unhandled vm exit: 0x80000021 vcpu_id 0
rax 000000000003b06b rbx 0000000000000000 rcx 0000000000000008 rdx 000000000003b06b
rsi 00000000c12e6f00 rdi 00000000c06d8f00 rsp 00000000c724fbc4 rbp 00000000c724fc04
r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000
r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000
rip 00000000c01c1a6b rflags 00000246
cs 0060 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
ds 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
es 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
ss 0068 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
fs 00d8 (00c0e000/ffffffff p 1 dpl 0 db 0 s 1 type 3 l 0 g 1 avl 0)
gs 0033 (b7d926b0/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 1)
tr 0080 (c12e7400/0000206b p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0)
gdt c11c4000/ff
idt c055c000/7ff
cr0 8005003b cr2 80bc960 cr3 5c2a000 cr4 690 cr8 0 efer 0
Aborted
So...Normal/expected behavior or ?
--
Tim Pepper <lnxninja@linux.vnet.ibm.com>
IBM Linux Technology Center
* Re: gdb and kvm status?
2008-09-05 20:28 gdb and kvm status? Tim Pepper
@ 2008-09-05 21:22 ` Mohammed Gamal
2008-09-08 18:20 ` Tim Pepper
2008-09-08 6:48 ` Jan Kiszka
1 sibling, 1 reply; 4+ messages in thread
From: Mohammed Gamal @ 2008-09-05 21:22 UTC (permalink / raw)
To: Tim Pepper; +Cc: kvm
On Fri, Sep 5, 2008 at 11:28 PM, Tim Pepper <lnxninja@linux.vnet.ibm.com> wrote:
> (not on list, so please cc: replies and I'm new to kvm and qemu so
> forgive my ignorance of things)
>
> I was trying to play recently with a kvm guest and gdb and am experiencing
> bad behavior. I'm curious if this is expected to work currently or if
> there's newer code upstream that's better? I see a couple mailing list
> posts around this topic in the last months, but didn't yet go pouring
> through code to see how much of the stuff kicked around actually went
> into an upstream and at what versions.
>
> For background, I've got an Ubuntu 8.04.1 guest image and a Fedora 9 host, and the guest
> is being booted directly into a 2.6.27-rc4 kernel, e.g.:
>
> qemu-kvm -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
> -kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
>
> If I don't run UP for the guest I miss breakpoints, which it sounds like is
> expected. But worse, I'm getting crashes with kvm. Things are fine
> (albeit slower) if run without kvm, eg:
>
> qemu-system-x86_64 -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
> -kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
>
> The host specifics are standard current fedora/livna:
> kernel-2.6.25.14-108.fc9.i686
> qemu-0.9.1-6.fc9.i386
> kmod-kqemu-1.3.0-0.38.lvn9.i686
> kmod-kqemu-2.6.25.14-108.fc9.i686-1.3.0-0.38.lvn9.i686
> kqemu-1.3.0-0.7.pre11.lvn9.noarch
> And the host hardware is Intel core duo.
>
> Output from UP is:
> unhandled vm exit: 0x80000021 vcpu_id 0
> rax 000000000003b06b rbx 0000000000000000 rcx 0000000000000008 rdx 000000000003b06b
> rsi 00000000c12e6f00 rdi 00000000c06d8f00 rsp 00000000c724fbc4 rbp 00000000c724fc04
> r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000
> r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000
> rip 00000000c01c1a6b rflags 00000246
> cs 0060 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
> ds 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
> es 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
> ss 0068 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> fs 00d8 (00c0e000/ffffffff p 1 dpl 0 db 0 s 1 type 3 l 0 g 1 avl 0)
> gs 0033 (b7d926b0/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 1)
> tr 0080 (c12e7400/0000206b p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
> ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0)
> gdt c11c4000/ff
> idt c055c000/7ff
> cr0 8005003b cr2 80bc960 cr3 5c2a000 cr4 690 cr8 0 efer 0
> Aborted
>
>
> So...Normal/expected behavior or ?
> --
> Tim Pepper <lnxninja@linux.vnet.ibm.com>
> IBM Linux Technology Center
I am not currently running Fedora, so I'd like to know which versions of
kvm (kernel modules and userspace) you are running.
Anyway, the VM exit is caused by the guest being in an invalid state, which
fails the VMX entry checks. I'll look further into the output to see
exactly where the guest fails.
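(For reference, on Intel VMX, bit 31 of such an exit code flags a VM-entry
failure and the low bits hold the basic exit reason; reason 33 is "VM-entry
failure due to invalid guest state". A quick shell sketch of the decoding:)

```shell
# Decode the "unhandled vm exit" code reported above, per the Intel VMX
# exit-reason layout: low bits = basic exit reason, bit 31 = entry failure.
exit_code=$(( 0x80000021 ))
basic_reason=$(( exit_code & 0xffff ))        # 33 = invalid guest state
entry_failure=$(( (exit_code >> 31) & 1 ))    # 1 = VM entry itself failed
echo "basic reason: $basic_reason"
echo "VM-entry failure: $entry_failure"
```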
* Re: gdb and kvm status?
2008-09-05 20:28 gdb and kvm status? Tim Pepper
2008-09-05 21:22 ` Mohammed Gamal
@ 2008-09-08 6:48 ` Jan Kiszka
1 sibling, 0 replies; 4+ messages in thread
From: Jan Kiszka @ 2008-09-08 6:48 UTC (permalink / raw)
To: Tim Pepper; +Cc: kvm
Tim Pepper wrote:
> (not on list, so please cc: replies and I'm new to kvm and qemu so
> forgive my ignorance of things)
>
> I was trying to play recently with a kvm guest and gdb and am experiencing
> bad behavior. I'm curious if this is expected to work currently or if
> there's newer code upstream that's better? I see a couple mailing list
> posts around this topic in the last months, but didn't yet go pouring
> through code to see how much of the stuff kicked around actually went
> into an upstream and at what versions.
>
> For background, I've got an Ubuntu 8.04.1 guest image and a Fedora 9 host, and the guest
> is being booted directly into a 2.6.27-rc4 kernel, e.g.:
>
> qemu-kvm -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
> -kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
>
> If I don't run UP for the guest I miss breakpoints, which it sounds like is
SMP debugging is broken with both vanilla qemu and kvm; you need
additional patches to get a usable environment.
> expected. But worse, I'm getting crashes with kvm. Things are fine
> (albeit slower) if run without kvm, eg:
>
> qemu-system-x86_64 -s -S -redir tcp:2255::22 ./ubuntu8.04.1.img \
> -kernel './vmlinux-2.6.27-rc4' -append 'console=tty0 root=/dev/hda1'
>
> The host specifics are standard current fedora/livna:
> kernel-2.6.25.14-108.fc9.i686
> qemu-0.9.1-6.fc9.i386
> kmod-kqemu-1.3.0-0.38.lvn9.i686
> kmod-kqemu-2.6.25.14-108.fc9.i686-1.3.0-0.38.lvn9.i686
> kqemu-1.3.0-0.7.pre11.lvn9.noarch
> And the host hardware is Intel core duo.
>
> Output from UP is:
> unhandled vm exit: 0x80000021 vcpu_id 0
> rax 000000000003b06b rbx 0000000000000000 rcx 0000000000000008 rdx 000000000003b06b
> rsi 00000000c12e6f00 rdi 00000000c06d8f00 rsp 00000000c724fbc4 rbp 00000000c724fc04
> r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11 0000000000000000
> r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15 0000000000000000
> rip 00000000c01c1a6b rflags 00000246
> cs 0060 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
> ds 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
> es 007b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0)
> ss 0068 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> fs 00d8 (00c0e000/ffffffff p 1 dpl 0 db 0 s 1 type 3 l 0 g 1 avl 0)
> gs 0033 (b7d926b0/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 1)
> tr 0080 (c12e7400/0000206b p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
> ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0)
> gdt c11c4000/ff
> idt c055c000/7ff
> cr0 8005003b cr2 80bc960 cr3 5c2a000 cr4 690 cr8 0 efer 0
> Aborted
>
>
> So...Normal/expected behavior or ?
That the existing code is able to oops was news to me, but I never
seriously tried what comes by default (due to all the missing features
and known limitations).
However, I just rebased my full gdb series for qemu on top of kvm and
also refreshed the kvm guest-debugging patches once again. I will try to
find some time to send them out soon, at least for early adopters. The
first part is still awaiting the honour of being reviewed and finally
merged by the qemu people. And as long as that hasn't happened, the kvm
part will not be accepted here either. Unfortunately, the last feedback I
received from the qemu maintainers involved in the touched components was
that there are "more interesting features" to be addressed first. Well,
that depends on your POV...
Jan
* Re: gdb and kvm status?
2008-09-05 21:22 ` Mohammed Gamal
@ 2008-09-08 18:20 ` Tim Pepper
0 siblings, 0 replies; 4+ messages in thread
From: Tim Pepper @ 2008-09-08 18:20 UTC (permalink / raw)
To: Mohammed Gamal; +Cc: Tim Pepper, kvm
On Sat 06 Sep at 00:22:16 +0300 m.gamal005@gmail.com said:
> On Fri, Sep 5, 2008 at 11:28 PM, Tim Pepper <lnxninja@linux.vnet.ibm.com> wrote:
>
>....snip
>
> >
> > The host specifics are standard current fedora/livna:
> > kernel-2.6.25.14-108.fc9.i686
> > qemu-0.9.1-6.fc9.i386
> > kmod-kqemu-1.3.0-0.38.lvn9.i686
> > kmod-kqemu-2.6.25.14-108.fc9.i686-1.3.0-0.38.lvn9.i686
> > kqemu-1.3.0-0.7.pre11.lvn9.noarch
> > And the host hardware is Intel core duo.
>
>....snip
>
> I am not currently running fedora so I'd like to know what version of
> kvm (kernel modules and userspace) are you running?
>
> Anyways, the vmexit is caused because the guest is in an invalid state
> and fails VMX entry checks. I'll look further into the output to see
> where exactly the guest fails.
Is this what you're looking for?
$ qemu-kvm --help
QEMU PC emulator version 0.9.1 (kvm-65), Copyright (c) 2003-2008 Fabrice Bellard
The kernel is 2.6.25.14, but it's Fedora's -108 build. If desired, I can
grab the src rpm and see if there are any patches that might be
kvm-related. The rpm's changelog doesn't show any since they'd rebased
from 2.6.15.14, but that doesn't necessarily mean anything...
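(One way to check, as a sketch; the exact package and file names below are
assumed from the version list earlier in the thread:)

```shell
# Scan the installed kernel package's changelog for kvm-related entries,
# and list any kvm patches shipped inside the source rpm (names assumed).
rpm -q --changelog kernel-2.6.25.14-108.fc9 | grep -i kvm
rpm -qpl kernel-2.6.25.14-108.fc9.src.rpm | grep -i kvm
```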
--
Tim Pepper <lnxninja@linux.vnet.ibm.com>
IBM Linux Technology Center