* Running KVM inside a chroot
@ 2011-10-12 18:49 Jorge Lucangeli Obes
2011-10-12 19:55 ` Alexander Graf
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-12 18:49 UTC (permalink / raw)
To: kvm
Hi all,
I'm working on Chromium OS development. We have a pretty elaborate
chroot inside of which we carry out all development. We use KVM to
launch Chromium OS builds inside a VM for testing. Turns out that for
some reason, when QEMU is launched from inside the chroot, KVM itself
seems not to be used. The VM is extremely slow.
Is this known/expected? QEMU is installed inside the chroot, the KVM
modules are loaded, the /dev/kvm device is present and accessible. Any
ideas on how to debug this?
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-10-12 18:49 Running KVM inside a chroot Jorge Lucangeli Obes
@ 2011-10-12 19:55 ` Alexander Graf
[not found] ` <CAKYuF5TG4+5yaVZh9KX0wLOjjg2h01Maz-VOsr2u4BVHzE8i7g@mail.gmail.com>
0 siblings, 1 reply; 21+ messages in thread
From: Alexander Graf @ 2011-10-12 19:55 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: kvm
On 12.10.2011, at 20:49, Jorge Lucangeli Obes wrote:
> Hi all,
>
> I'm working on Chromium OS development. We have a pretty elaborate
> chroot inside of which we carry out all development. We use KVM to
> launch Chromium OS builds inside a VM for testing. Turns out that for
> some reason, when QEMU is launched from inside the chroot, KVM itself
> seems not to be used. The VM is extremely slow.
>
> Is this known/expected? QEMU is installed inside the chroot, the KVM
> modules are loaded, the /dev/kvm device is present and accessible. Any
> ideas on how to debug this?
The first obvious idea I'd have here would be to strace the qemu process and check what happens when it opens /dev/kvm :)
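A minimal sketch, assuming the usual qemu binary name (arguments are
illustrative, adjust for your setup):
$ strace -f -o qemu.strace qemu-system-x86_64 <your usual args>  # -f follows the vcpu threads
$ grep /dev/kvm qemu.strace    # did the open() succeed?
$ grep -c KVM_RUN qemu.strace  # is the vcpu actually entering the guest?
Another quick sanity check is "info kvm" in the QEMU monitor, which
should report "kvm support: enabled".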
Alex
* Re: Running KVM inside a chroot
[not found] ` <CAKYuF5TG4+5yaVZh9KX0wLOjjg2h01Maz-VOsr2u4BVHzE8i7g@mail.gmail.com>
@ 2011-10-13 0:51 ` Jorge Lucangeli Obes
2011-10-16 16:23 ` Avi Kivity
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-13 0:51 UTC (permalink / raw)
To: Alexander Graf; +Cc: kvm
[-- Attachment #1: Type: text/plain, Size: 1492 bytes --]
On Wed, Oct 12, 2011 at 3:52 PM, Jorge Lucangeli Obes
<jorgelo@chromium.org> wrote:
> On Wed, Oct 12, 2011 at 12:55 PM, Alexander Graf <agraf@suse.de> wrote:
>>
>> On 12.10.2011, at 20:49, Jorge Lucangeli Obes wrote:
>>
>>> Hi all,
>>>
>>> I'm working on Chromium OS development. We have a pretty elaborate
>>> chroot inside of which we carry out all development. We use KVM to
>>> launch Chromium OS builds inside a VM for testing. Turns out that for
>>> some reason, when QEMU is launched from inside the chroot, KVM itself
>>> seems not to be used. The VM is extremely slow.
>>>
>>> Is this known/expected? QEMU is installed inside the chroot, the KVM
>>> modules are loaded, the /dev/kvm device is present and accessible. Any
>>> ideas on how to debug this?
>>
>> The first obvious idea I'd have here would be to strace the qemu process and check what happens when it opens /dev/kvm :)
Resending since original attachment was too large.
> That's what I thought. I did a test run under strace. I'm attaching
> the list of syscalls from the call to 'open(/dev/kvm)' to the first
> successful 'ioctl(KVM_RUN)'. /dev/kvm seems to be opened correctly, a
> VCPU is created, and then that VCPU is used with KVM_RUN. After the
> first call to 'ioctl(KVM_RUN)', there are long lists of more KVM_RUN
> calls, separated by brief groups of other calls. So, IIUC, KVM seems
> to be used, and seems to be "working", but the VM is one order of
> magnitude slower anyway.
>
> Any ideas?
Thanks,
Jorge
[-- Attachment #2: syscalls --]
[-- Type: application/octet-stream, Size: 12376 bytes --]
29977 open("/dev/kvm", O_RDWR <unfinished ...>
29975 read(3, <unfinished ...>
29977 <... open resumed> ) = 3
29977 ioctl(3, KVM_GET_API_VERSION, 0) = 12
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x19) = 1024
29977 ioctl(3, KVM_CREATE_VM, 0) = 5
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_TSS_ADDR, 0xfeffd000) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x25) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x25) = 1
29977 ioctl(5, KVM_SET_IDENTITY_MAP_ADDR, 0x7fff5ec6df68) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0xb) = 1
29977 ioctl(5, KVM_CREATE_PIT, 0xb) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0xf) = 2
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x3) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0) = 1
29977 ioctl(5, KVM_CREATE_IRQCHIP, 0) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x1a) = 1
29977 uname({sys="Linux", node="tegan", ...}) = 0
29977 ioctl(3, KVM_GET_MSR_INDEX_LIST, 0x7fff5ec6dde0) = -1 E2BIG (Argument list too long)
29977 ioctl(3, KVM_GET_MSR_INDEX_LIST, 0x1eced90) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x27) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x15) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x19) = 1024
29977 ioctl(5, KVM_SET_GSI_ROUTING, 0x1ecf300) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x29) = 0
29977 rt_sigaction(SIGRT_6, {0x423290, [RT_6], SA_RESTORER|SA_RESTART, 0x7f99c97a0180}, {SIG_DFL, [], 0}, 8) = 0
29977 rt_sigaction(SIGBUS, {0x423ed0, [], SA_RESTORER|SA_SIGINFO, 0x7f99caf6e0a0}, NULL, 8) = 0
29977 prctl(0x21 /* PR_??? */, 0x1, 0x1, 0xffffffffffffffff, 0) = -1 EINVAL (Invalid argument)
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x19) = 1024
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x23) = 1
29977 pipe2([6, 7], O_CLOEXEC) = 0
...
29978 ioctl(11, KVM_SET_SIGNAL_MASK, 0x1ee8c00) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x8) = 1
29978 ioctl(3, KVM_CHECK_EXTENSION, 0xc) = 1
29978 ioctl(3, KVM_CHECK_EXTENSION, 0xd) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x7) = 1
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ef0010) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c90) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8cf0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8cf0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8cf0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1eeb010) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x7) = 1
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c00) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c40) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1eeb010) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x7) = 1
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c00) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c40) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1eeb010) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x7) = 1
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c00) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c40) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1eeb520) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x7) = 1
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c00) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8c40) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1ee8ca0) = -1 E2BIG (Argument list too long)
29978 ioctl(3, KVM_GET_SUPPORTED_CPUID, 0x1eeb010) = 0
29978 ioctl(11, KVM_SET_CPUID2, 0x1ee8c00) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x1f) = 32
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x1f) = 32
29978 ioctl(3, KVM_X86_GET_MCE_CAP_SUPPORTED, 0x7f99c8d2ad40) = 0
29978 ioctl(11, KVM_X86_SETUP_MCE, 0x7f99c8d2ad40) = 0
29978 ioctl(3, KVM_CHECK_EXTENSION, 0x6) = 0
29978 ioctl(11, KVM_SET_REGS, 0x7f99c8d2ace0) = 0
29978 ioctl(11, KVM_SET_FPU, 0x7f99c8d2aa00) = 0
29978 ioctl(11, KVM_SET_SREGS, 0x7f99c8d2aba0) = 0
29978 ioctl(11, KVM_SET_MSRS, 0x1ee8c00) = 10
29978 futex(0x859de4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x859de0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
29977 <... futex resumed> ) = 0
29978 futex(0x859e24, FUTEX_WAIT_PRIVATE, 1, NULL <unfinished ...>
29977 mmap(NULL, 1073750016, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9988529000
29977 madvise(0x7f998852a000, 1073741824, 0xc /* MADV_??? */) = 0
29977 mmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f99cb54c000
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x10) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_USER_MEMORY_REGION, 0x7fff5ec6de10) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_USER_MEMORY_REGION, 0x7fff5ec6de10) = 0
...
29977 access("/usr/share/qemu/bios.bin", R_OK) = 0
29977 open("/usr/share/qemu/bios.bin", O_RDONLY) = 12
29977 lseek(12, 0, SEEK_END) = 131072
29977 close(12) = 0
29977 mmap(NULL, 139264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f99cb52a000
29977 madvise(0x7f99cb52b000, 131072, 0xc /* MADV_??? */) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x10) = 1
29977 access("/usr/share/qemu/bios.bin", R_OK) = 0
29977 open("/usr/share/qemu/bios.bin", O_RDONLY) = 12
29977 lseek(12, 0, SEEK_END) = 131072
29977 mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f99cb509000
29977 lseek(12, 0, SEEK_SET) = 0
29977 read(12, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072) = 131072
29977 close(12) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_USER_MEMORY_REGION, 0x7fff5ec6de10) = 0
29977 mmap(NULL, 139264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f99cb4e7000
29977 madvise(0x7f99cb4e8000, 131072, 0xc /* MADV_??? */) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x10) = 1
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_USER_MEMORY_REGION, 0x7fff5ec6de10) = 0
29977 ioctl(3, KVM_CHECK_EXTENSION, 0x4) = 1
29977 ioctl(5, KVM_SET_USER_MEMORY_REGION, 0x7fff5ec6de10) = 0
...
29977 ioctl(11, KVM_SET_LAPIC, 0x7fff5ec6dbb0) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df60) = 0
29977 ioctl(5, KVM_SET_IRQCHIP, 0x7fff5ec6dda0) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df70) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df40) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df60) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df40) = 0
29977 ioctl(5, KVM_IRQ_LINE_STATUS, 0x7fff5ec6df60) = 0
29977 munmap(0x7f99cb509000, 135168) = 0
...
29977 select(17, [6 8 10 14 16], [], [], {0, 0} <unfinished ...>
29978 ioctl(11, KVM_SET_REGS <unfinished ...>
29977 <... select resumed> ) = 2 (in [6 14], left {0, 0})
29978 <... ioctl resumed> , 0x7f99c8d2ac20) = 0
29977 futex(0x859da0, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
29978 ioctl(11, KVM_SET_FPU, 0x7f99c8d2a940) = 0
29978 ioctl(11, KVM_SET_SREGS, 0x7f99c8d2aae0) = 0
29978 ioctl(11, KVM_SET_MSRS, 0x1ed1ea0) = 10
29978 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1) = 1
29977 <... futex resumed> ) = 0
29978 ioctl(11, KVM_RUN <unfinished ...>
29977 read(14, "\1\0\0\0\0\0\0\0", 4096) = 8
29977 read(14, <unfinished ...>
29978 <... ioctl resumed> , 0) = -1 EINTR (Interrupted system call)
29977 <... read resumed> 0x7fff5ec6cd20, 4096) = -1 EAGAIN (Resource temporarily unavailable)
29978 futex(0x859da0, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
29977 read(6, "\0\0", 512) = 2
29977 read(6, 0x7fff5ec6db20, 512) = -1 EAGAIN (Resource temporarily unavailable)
29977 timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
29977 timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0
29977 poll([{fd=12, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=12, revents=POLLOUT}])
29977 writev(12, [{"\22\0\7\0\3\0\0\6'\0\0\0\37\0\0\0\10\1\4\0\4\0\0\0QEMU\22\0\7\0"..., 116}, {NULL, 0}, {"", 0}], 3) = 116
29977 poll([{fd=12, events=POLLIN}], 1, -1) = 1 ([{fd=12, revents=POLLIN}])
29977 read(12, "\34\0u\0\3\0\0\6'\0\0\0\244\24\22\n\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 160
29977 read(12, 0x24453e4, 4096) = -1 EAGAIN (Resource temporarily unavailable)
...
29977 read(12, 0x24453e4, 4096) = -1 EAGAIN (Resource temporarily unavailable)
29977 select(13, [12], NULL, NULL, {0, 0}) = 0 (Timeout)
29977 read(12, 0x24453e4, 4096) = -1 EAGAIN (Resource temporarily unavailable)
29977 select(13, [12], NULL, NULL, {0, 0}) = 0 (Timeout)
29977 timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
29977 timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 20860000}}, NULL) = 0
29977 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1) = 1
29978 <... futex resumed> ) = 0
29977 select(17, [6 8 10 14 16], [], [], {1, 0} <unfinished ...>
29978 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
29977 <... select resumed> ) = 1 (in [16], left {0, 999996})
29978 <... futex resumed> ) = 0
29977 read(16, <unfinished ...>
29978 rt_sigtimedwait([BUS RT_6], <unfinished ...>
29977 <... read resumed> "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 128) = 128
29978 <... rt_sigtimedwait resumed> {si_signo=SIGRT_6, si_code=SI_TKILL, si_pid=29977, si_uid=0, si_value={int=29978, ptr=0x751a}}, {0, 0}, 8) = 38
29977 rt_sigaction(SIGALRM, NULL, <unfinished ...>
29978 futex(0x859da0, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
29977 <... rt_sigaction resumed> {0x40a880, ~[KILL STOP RTMIN RT_1], SA_RESTORER, 0x7f99caf6e0a0}, 8) = 0
29977 write(7, "\0", 1) = 1
29977 write(15, "\1\0\0\0\0\0\0\0", 8) = 8
29977 read(16, 0x7fff5ec6dc90, 128) = -1 EAGAIN (Resource temporarily unavailable)
29977 timer_gettime(0, {it_interval={0, 0}, it_value={0, 20538822}}) = 0
29977 timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 20484000}}, NULL) = 0
29977 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
29978 <... futex resumed> ) = 0
29977 <... futex resumed> ) = 1
29978 rt_sigpending( <unfinished ...>
29977 select(17, [6 8 10 14 16], [], [], {1, 0} <unfinished ...>
29978 <... rt_sigpending resumed> []) = 0
29977 <... select resumed> ) = 2 (in [6 14], left {0, 999997})
29978 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
29977 read(14, <unfinished ...>
29978 <... futex resumed> ) = 0
29977 <... read resumed> "\1\0\0\0\0\0\0\0", 4096) = 8
29978 ioctl(11, KVM_RUN <unfinished ...>
29977 read(14, 0x7fff5ec6cd20, 4096) = -1 EAGAIN (Resource temporarily unavailable)
29978 <... ioctl resumed> , 0) = 0
29977 read(6, <unfinished ...>
29978 futex(0x859da0, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>
29977 <... read resumed> "\0", 512) = 1
29977 read(6, 0x7fff5ec6db20, 512) = -1 EAGAIN (Resource temporarily unavailable)
29977 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
29978 <... futex resumed> ) = 0
29977 <... futex resumed> ) = 1
29978 futex(0x859da0, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
29977 select(17, [6 8 10 14 16], [], [], {1, 0} <unfinished ...>
29978 <... futex resumed> ) = 0
29978 ioctl(11, KVM_RUN, 0) = 0
* Re: Running KVM inside a chroot
2011-10-13 0:51 ` Jorge Lucangeli Obes
@ 2011-10-16 16:23 ` Avi Kivity
2011-10-17 6:10 ` Jorge Lucangeli Obes
0 siblings, 1 reply; 21+ messages in thread
From: Avi Kivity @ 2011-10-16 16:23 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/13/2011 02:51 AM, Jorge Lucangeli Obes wrote:
> On Wed, Oct 12, 2011 at 3:52 PM, Jorge Lucangeli Obes
> <jorgelo@chromium.org> wrote:
> > On Wed, Oct 12, 2011 at 12:55 PM, Alexander Graf <agraf@suse.de> wrote:
> >>
> >> On 12.10.2011, at 20:49, Jorge Lucangeli Obes wrote:
> >>
> >>> Hi all,
> >>>
> >>> I'm working on Chromium OS development. We have a pretty elaborate
> >>> chroot inside of which we carry out all development. We use KVM to
> >>> launch Chromium OS builds inside a VM for testing. Turns out that for
> >>> some reason, when QEMU is launched from inside the chroot, KVM itself
> >>> seems not to be used. The VM is extremely slow.
> >>>
> >>> Is this known/expected? QEMU is installed inside the chroot, the KVM
> >>> modules are loaded, the /dev/kvm device is present and accessible. Any
> >>> ideas on how to debug this?
> >>
> >> The first obvious idea I'd have here would be to strace the qemu process and check what happens when it opens /dev/kvm :)
>
> Resending since original attachment was too large.
>
> > That's what I thought. I did a test run under strace. I'm attaching
> > the list of syscalls from the call to 'open(/dev/kvm)' to the first
> > successful 'ioctl(KVM_RUN)'. /dev/kvm seems to be opened correctly, a
> > VCPU is created, and then that VCPU is used with KVM_RUN. After the
> > first call to 'ioctl(KVM_RUN)', there are long lists of more KVM_RUN
> > calls, separated by brief groups of other calls. So, IIUC, KVM seems
> > to be used, and seems to be "working", but the VM is one order of
> > magnitude slower anyway.
> >
> > Any ideas?
>
What do top/vmstat/kvm_stat say?
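For example, to capture all three for the duration of a run (intervals
are illustrative):
$ top -b -d 3 > top_out &       # batch mode, one snapshot every 3 seconds
$ vmstat 3 > vmstat_out &
$ kvm_stat -l > kvm_stat_out &  # log mode, one line of counters per interval
kvm_stat in particular should show which exit counters are hot.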
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-16 16:23 ` Avi Kivity
@ 2011-10-17 6:10 ` Jorge Lucangeli Obes
2011-10-17 9:56 ` Avi Kivity
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-17 6:10 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
[-- Attachment #1: Type: text/plain, Size: 2294 bytes --]
On Sun, Oct 16, 2011 at 9:23 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/13/2011 02:51 AM, Jorge Lucangeli Obes wrote:
>> On Wed, Oct 12, 2011 at 3:52 PM, Jorge Lucangeli Obes
>> <jorgelo@chromium.org> wrote:
>> > On Wed, Oct 12, 2011 at 12:55 PM, Alexander Graf <agraf@suse.de> wrote:
>> >>
>> >> On 12.10.2011, at 20:49, Jorge Lucangeli Obes wrote:
>> >>
>> >>> Hi all,
>> >>>
>> >>> I'm working on Chromium OS development. We have a pretty elaborate
>> >>> chroot inside of which we carry out all development. We use KVM to
>> >>> launch Chromium OS builds inside a VM for testing. Turns out that for
>> >>> some reason, when QEMU is launched from inside the chroot, KVM itself
>> >>> seems not to be used. The VM is extremely slow.
>> >>>
>> >>> Is this known/expected? QEMU is installed inside the chroot, the KVM
>> >>> modules are loaded, the /dev/kvm device is present and accessible. Any
>> >>> ideas on how to debug this?
>> >>
>> >> The first obvious idea I'd have here would be to strace the qemu process and check what happens when it opens /dev/kvm :)
>>
>> Resending since original attachment was too large.
>>
>> > That's what I thought. I did a test run under strace. I'm attaching
>> > the list of syscalls from the call to 'open(/dev/kvm)' to the first
>> > successful 'ioctl(KVM_RUN)'. /dev/kvm seems to be opened correctly, a
>> > VCPU is created, and then that VCPU is used with KVM_RUN. After the
>> > first call to 'ioctl(KVM_RUN)', there are long lists of more KVM_RUN
>> > calls, separated by brief groups of other calls. So, IIUC, KVM seems
>> > to be used, and seems to be "working", but the VM is one order of
>> > magnitude slower anyway.
>> >
>> > Any ideas?
>>
>
> What do top/vmstat/kvm_stat say?
I'm attaching the output of the three commands during a test run
launched from inside the chroot, which was slow as usual. I didn't see
anything too weird on top/vmstat, though I found it odd that once the
VM had booted Chromium OS, QEMU still ate 100% of one core. vmstat
didn't show anything strange, QEMU taking up the memory it's supposed
to. What I don't quite know how to interpret is the output of
kvm_stat, hence the attachment.
The host is a Xeon X5650 (6 cores, 12 threads). The kernel is 2.6.32; QEMU in the chroot is 0.12.5.
Thanks for the help, Avi.
Cheers,
Jorge
[-- Attachment #2: kvm_stat_out --]
[-- Type: application/octet-stream, Size: 27456 bytes --]
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 213961 61510 0 0 119646 0 73554 0 0 121145 194 157 155 0 5514 110 0 0 0 5000 0 105 0 0 0 5625 0 333 0 1 0
0 65317 3 493 230 18077 0 7472 0 0 27613 570 1173 339 0 4930 248 0 0 0 0 0 295 0 0 0 5724 0 19 0 0 0
0 22020 5937 482 346 15896 0 2203 0 0 14431 574 830 149 0 976 25 0 0 0 0 0 0 0 0 0 3773 0 8 0 0 0
0 16447 1376 450 335 13156 0 1237 0 0 12681 800 806 89 0 2 22 0 0 0 0 0 0 0 0 0 1058 0 4 0 0 0
0 57492 19625 245 205 32743 0 33638 0 0 11996 429 654 33 0 20473 56 0 0 0 4000 0 0 0 0 0 8271 0 0 0 0 0
0 398545 2994 0 0 3208 0 390305 0 0 330 818 1021 6 0 2779 36 0 0 0 0 0 0 0 0 0 6315 0 1905 0 0 0
0 458814 3501 0 0 3527 0 457077 0 0 164 1018 1012 0 0 3266 0 0 0 0 0 0 0 0 0 0 19 0 2824 0 0 0
0 458644 3469 0 0 3469 0 457323 0 0 108 666 1011 1 0 3264 0 0 0 0 0 0 0 0 0 0 30 0 59 0 0 0
0 459064 3478 0 0 3478 0 457238 0 0 112 303 1002 1 0 3267 1 0 0 0 0 0 0 0 0 0 460 0 159 0 0 0
0 366712 356 0 0 4677 0 347915 0 0 2100 685 1080 36 0 2480 21 0 0 0 0 0 0 0 0 0 12988 0 283 0 0 0
0 459958 0 0 0 3484 0 458797 0 0 108 570 1003 0 0 3278 0 0 0 0 0 0 0 0 0 0 1 0 634 0 0 0
0 464918 0 0 0 3527 0 464090 0 0 116 190 1001 0 0 3315 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 467104 0 0 0 3529 0 466298 0 0 112 191 1002 0 0 3330 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 399814 222 0 0 3508 0 395084 0 0 588 363 1029 4 0 2820 4 0 0 0 0 0 0 0 0 0 3039 0 0 0 0 0
0 467605 0 0 0 3548 0 466756 0 0 116 210 1012 0 0 3335 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0
0 466228 0 0 0 3533 0 465413 0 0 112 205 1014 0 0 3325 0 0 0 0 0 0 0 0 0 0 0 0 72 0 0 0
0 462356 0 0 0 3503 0 461526 0 0 112 208 1011 0 0 3294 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 447448 24 0 0 3367 0 443131 0 0 108 280 1013 2 0 3164 6 0 0 0 0 0 0 0 0 0 3375 0 94 0 0 0
0 467994 0 0 0 3551 0 467118 0 0 116 237 1010 0 0 3337 0 0 0 0 0 0 0 0 0 0 1 0 69 0 0 0
0 462627 0 0 0 3499 0 461386 0 0 112 624 1013 0 0 3294 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0
0 432181 153 0 0 3425 0 421448 0 0 320 2711 1046 11 0 3004 8 0 0 0 0 0 0 0 0 0 5671 0 3868 0 0 0
0 349854 116 0 0 2641 0 334715 0 0 162 3553 1147 10 0 2384 9 0 0 0 0 0 0 0 0 0 7330 0 3972 0 0 0
0 456798 43 0 0 3445 0 453551 0 0 108 2547 1045 0 0 3240 0 0 0 0 0 0 0 0 0 0 2 0 3881 0 0 0
0 459596 40 0 0 3476 0 457381 0 0 112 1535 1045 1 0 3264 0 0 0 0 0 0 0 0 0 0 0 0 2244 0 0 0
0 459103 40 0 0 3466 0 455753 0 0 112 2649 1045 2 0 3255 0 0 0 0 0 0 0 0 0 0 0 0 3375 0 0 0
0 406146 156 0 0 3302 0 399632 0 0 354 347 1073 4 0 2850 0 0 0 0 0 0 0 0 0 0 2423 0 0 0 0 0
0 468053 43 0 0 3548 0 467137 0 0 116 194 1044 0 0 3336 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 464569 41 0 0 3518 0 463668 0 0 112 203 1041 0 0 3312 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0
0 464371 41 0 0 3521 0 463439 0 0 112 235 1043 1 0 3310 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0
0 245463 325 0 0 14447 0 222343 0 0 12777 1288 1552 219 0 1576 59 0 0 0 0 0 0 0 0 0 7370 0 140 0 0 0
0 334093 44 0 0 3835 0 331589 0 0 1390 414 1098 46 0 2347 18 0 0 0 0 0 0 0 0 0 179 0 0 0 0 0
0 457901 44 0 0 3546 0 456901 0 0 184 228 1046 7 0 3264 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 460981 42 0 0 4044 0 459435 0 0 671 277 1061 9 0 3277 2 0 0 0 0 0 0 0 0 0 10 0 115 0 0 0
0 362032 200 0 0 3150 0 357645 0 0 501 1027 1069 18 0 2550 8 0 0 0 0 0 0 0 0 0 1333 0 12 0 0 0
0 429677 3256 0 0 3323 0 426709 0 0 181 1379 1059 10 0 3048 5 0 0 0 0 0 0 0 0 0 30 0 259 0 0 0
0 457786 2619 0 0 3697 0 456311 0 0 341 395 1053 9 0 3256 4 0 0 0 0 0 0 0 0 0 54 0 521 0 0 0
0 461697 42 0 0 3562 0 460205 0 0 181 700 1041 4 0 3285 1 0 0 0 0 0 0 0 0 0 7 0 1044 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 433413 51 0 0 3344 0 429162 0 0 181 562 1047 8 0 3063 6 0 0 0 0 0 0 0 0 0 2274 0 723 0 0 0
0 369162 167 0 0 3081 0 366435 0 0 383 734 1142 20 0 2602 6 0 0 0 0 0 0 0 0 0 270 0 623 0 0 0
0 464032 49 0 0 3581 0 462498 0 0 182 708 1050 4 0 3301 2 0 0 0 0 0 0 0 0 0 39 0 680 0 0 0
0 463495 45 0 0 3697 0 462319 0 0 311 265 1048 5 0 3297 2 0 0 0 0 0 0 0 0 0 2 0 120 0 0 0
0 463260 44 0 0 3578 0 462251 0 0 181 216 1045 5 0 3299 2 0 0 0 0 0 0 0 0 0 2 0 58 0 0 0
0 388523 137 0 0 3047 0 385689 0 0 200 380 1047 5 0 2751 4 0 0 0 0 0 0 0 0 0 727 0 0 0 0 0
0 463925 45 0 0 3582 0 462956 0 0 181 184 1044 3 0 3304 2 0 0 0 0 0 0 0 0 0 4 0 3 0 0 0
0 462669 44 0 0 3654 0 461598 0 0 259 215 1050 4 0 3296 2 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0
0 463361 42 0 0 3577 0 462406 0 0 181 186 1043 4 0 3305 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 393640 52 0 0 3074 0 391372 0 0 191 351 1047 11 0 2789 2 0 0 0 0 0 0 0 0 0 300 0 0 0 0 0
0 464659 43 0 0 3587 0 463668 0 0 181 215 1045 3 0 3312 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 465136 42 0 0 3604 0 464148 0 0 207 193 1044 3 0 3313 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 465132 41 0 0 3596 0 464127 0 0 185 218 1044 3 0 3314 0 0 0 0 0 0 0 0 0 0 0 0 25 0 0 0
0 392292 117 0 0 3199 0 390020 0 0 319 421 1052 18 0 2782 2 0 0 0 0 0 0 0 0 0 100 0 38 0 0 0
0 464988 43 0 0 3587 0 464051 0 0 181 167 1045 3 0 3313 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 463376 41 0 0 3965 0 461729 0 0 571 483 1056 7 0 3294 0 0 0 0 0 0 0 0 0 0 0 0 932 0 0 0
0 463453 42 0 0 3570 0 462081 0 0 181 601 1046 3 0 3298 0 0 0 0 0 0 0 0 0 0 0 0 882 0 0 0
0 393323 54 0 0 3039 0 388736 0 0 165 919 1046 8 0 2774 2 0 0 0 0 0 0 0 0 0 1151 0 967 0 0 0
0 464179 42 0 0 3575 0 462630 0 0 181 779 1044 3 0 3301 0 0 0 0 0 0 0 0 0 0 0 0 1451 0 0 0
0 462049 42 0 0 3556 0 459810 0 0 181 1473 1047 3 0 3281 0 0 0 0 0 0 0 0 0 0 0 0 3267 0 0 0
efer_relo exits fpu_reloa halt_exit halt_wake host_stat hypercall insn_emul insn_emul invlpg io_exits irq_exits irq_injec irq_windo largepage mmio_exit mmu_cache mmu_flood mmu_pde_z mmu_pte_u mmu_pte_w mmu_recyc mmu_shado mmu_unsyn nmi_injec nmi_windo pf_fixed pf_guest remote_tl request_i signal_ex tlb_flush
0 464109 41 0 0 3585 0 462869 0 0 181 473 1041 4 0 3306 0 0 0 0 0 0 0 0 0 0 0 0 633 0 0 0
0 393997 54 0 0 3038 0 389531 0 0 165 358 1046 11 0 2778 3 0 0 0 0 0 0 0 0 0 1316 0 39 0 0 0
0 462796 43 0 0 3576 0 461448 0 0 181 570 1045 4 0 3296 0 0 0 0 0 0 0 0 0 0 0 0 938 0 0 0
0 465657 43 0 0 3596 0 464694 0 0 181 210 1043 3 0 3318 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0
0 458793 43 0 0 3542 0 457317 0 0 181 206 1042 4 0 3266 0 0 0 0 0 0 0 0 0 0 156 0 0 0 0 0
0 400069 53 0 0 3102 0 397476 0 0 169 343 1046 8 0 2837 2 0 0 0 0 0 0 0 0 0 481 0 0 0 0 0
0 464838 43 0 0 3589 0 463855 0 0 181 212 1044 3 0 3313 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 464469 41 0 0 3587 0 463524 0 0 181 177 1043 5 0 3309 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 448938 43 0 0 3468 0 447512 0 0 177 238 1043 7 0 3193 1 0 0 0 0 0 0 0 0 0 3 0 12 0 0 0
0 409325 51 0 0 3178 0 407834 0 0 169 321 1045 9 0 2911 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 463572 42 0 0 3584 0 462584 0 0 181 223 1044 3 0 3302 0 0 0 0 0 0 0 0 0 0 0 0 21 0 0 0
0 464191 43 0 0 3588 0 463166 0 0 185 229 1047 3 0 3307 0 0 0 0 0 0 0 0 0 0 1 0 37 0 0 0
0 440468 45 0 0 3405 0 438792 0 0 174 458 1047 7 0 3133 0 0 0 0 0 0 0 0 0 0 21 0 0 0 0 0
0 414687 52 0 0 3216 0 412549 0 0 173 942 1042 7 0 2944 1 0 0 0 0 0 0 0 0 0 6 0 1525 0 0 0
[-- Attachment #3: top_out_snip --]
[-- Type: application/octet-stream, Size: 20123 bytes --]
top - 22:47:38 up 6 days, 6:12, 15 users, load average: 0.42, 0.44, 0.37
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 1.9%us, 1.0%sy, 0.0%ni, 96.8%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9176856k used, 3151764k free, 916812k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 4988980k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 36m 1364 S 23 0.3 0:00.70 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:44.40 ksmd
top - 22:47:41 up 6 days, 6:12, 15 users, load average: 0.54, 0.46, 0.38
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 2.1%us, 1.9%sy, 0.1%ni, 90.4%id, 5.3%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 12328620k total, 9433108k used, 2895512k free, 916816k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5088968k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 189m 1364 S 32 1.6 0:01.67 qemu-system-x86
116 root 25 5 0 0 0 S 5 0.0 124:44.54 ksmd
top - 22:47:44 up 6 days, 6:12, 15 users, load average: 0.54, 0.46, 0.38
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.7%us, 5.6%sy, 0.0%ni, 90.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9476896k used, 2851724k free, 916816k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095088k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 235m 1364 S 95 2.0 0:04.54 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:44.63 ksmd
top - 22:47:47 up 6 days, 6:12, 15 users, load average: 0.58, 0.47, 0.38
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.0%us, 5.1%sy, 0.0%ni, 91.7%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9517072k used, 2811548k free, 916820k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095420k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 278m 1364 S 100 2.3 0:07.56 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:44.72 ksmd
top - 22:47:50 up 6 days, 6:12, 15 users, load average: 0.61, 0.48, 0.38
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.0%us, 6.7%sy, 0.0%ni, 89.9%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9520984k used, 2807636k free, 916820k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095504k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 279m 1364 S 100 2.3 0:10.58 qemu-system-x86
116 root 25 5 0 0 0 S 2 0.0 124:44.79 ksmd
top - 22:47:53 up 6 days, 6:12, 15 users, load average: 0.61, 0.48, 0.38
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 2.8%us, 8.8%sy, 0.0%ni, 88.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9522088k used, 2806532k free, 916820k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095704k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 279m 1364 S 100 2.3 0:13.60 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:44.88 ksmd
top - 22:47:56 up 6 days, 6:12, 15 users, load average: 0.56, 0.47, 0.38
Tasks: 415 total, 2 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.1%us, 7.3%sy, 0.0%ni, 89.5%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9533380k used, 2795240k free, 916824k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095772k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 291m 1364 S 100 2.4 0:16.62 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:44.97 ksmd
top - 22:47:59 up 6 days, 6:12, 15 users, load average: 0.56, 0.47, 0.38
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.1%us, 4.9%sy, 0.0%ni, 91.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9553964k used, 2774656k free, 916824k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5095852k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 320m 1364 S 100 2.7 0:19.64 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:45.09 ksmd
top - 22:48:02 up 6 days, 6:12, 15 users, load average: 0.60, 0.48, 0.38
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.6%us, 7.3%sy, 0.0%ni, 88.6%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9548392k used, 2780228k free, 916824k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5096312k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 320m 1364 S 100 2.7 0:22.65 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:45.22 ksmd
top - 22:48:05 up 6 days, 6:12, 15 users, load average: 0.63, 0.49, 0.39
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 2.9%us, 6.0%sy, 0.0%ni, 91.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9548888k used, 2779732k free, 916824k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5096364k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 320m 1364 S 100 2.7 0:25.67 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:45.33 ksmd
top - 22:48:08 up 6 days, 6:12, 15 users, load average: 0.63, 0.49, 0.39
Tasks: 415 total, 2 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 4.2%us, 6.0%sy, 0.0%ni, 88.2%id, 1.6%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9583012k used, 2745608k free, 916828k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5103576k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 345m 1420 S 101 2.9 0:28.70 qemu-system-x86
116 root 25 5 0 0 0 R 4 0.0 124:45.45 ksmd
top - 22:48:11 up 6 days, 6:12, 15 users, load average: 0.66, 0.50, 0.39
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.5%us, 6.2%sy, 0.0%ni, 90.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9580416k used, 2748204k free, 916828k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104136k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 349m 1420 S 100 2.9 0:31.71 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:45.55 ksmd
top - 22:48:14 up 6 days, 6:12, 15 users, load average: 0.66, 0.50, 0.39
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.1%us, 7.0%sy, 0.0%ni, 89.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9582772k used, 2745848k free, 916828k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104128k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 349m 1420 S 100 2.9 0:34.73 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:45.65 ksmd
top - 22:48:17 up 6 days, 6:13, 15 users, load average: 0.69, 0.50, 0.39
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.4%us, 6.4%sy, 0.0%ni, 90.1%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 12328620k total, 9590424k used, 2738196k free, 916828k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104480k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 357m 1420 S 100 3.0 0:37.74 qemu-system-x86
116 root 25 5 0 0 0 S 2 0.0 124:45.72 ksmd
top - 22:48:20 up 6 days, 6:13, 15 users, load average: 0.87, 0.55, 0.41
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.6%us, 7.7%sy, 0.0%ni, 88.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9593028k used, 2735592k free, 916832k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104704k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 359m 1420 S 100 3.0 0:40.76 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:45.84 ksmd
top - 22:48:23 up 6 days, 6:13, 15 users, load average: 0.87, 0.55, 0.41
Tasks: 415 total, 2 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.1%us, 8.6%sy, 0.0%ni, 88.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9594392k used, 2734228k free, 916832k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104780k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 360m 1420 S 100 3.0 0:43.77 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:45.96 ksmd
top - 22:48:26 up 6 days, 6:13, 15 users, load average: 0.96, 0.57, 0.41
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.4%us, 9.5%sy, 0.0%ni, 87.0%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9594160k used, 2734460k free, 916836k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104848k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 360m 1420 S 100 3.0 0:46.78 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:46.08 ksmd
top - 22:48:29 up 6 days, 6:13, 15 users, load average: 0.96, 0.57, 0.41
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.0%us, 8.3%sy, 0.0%ni, 88.5%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9593648k used, 2734972k free, 916836k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5104928k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 361m 1420 S 100 3.0 0:49.79 qemu-system-x86
116 root 25 5 0 0 0 S 3 0.0 124:46.17 ksmd
top - 22:48:32 up 6 days, 6:13, 15 users, load average: 0.96, 0.58, 0.42
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.6%us, 8.6%sy, 0.0%ni, 87.6%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9592648k used, 2735972k free, 916836k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5105008k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 362m 1420 S 100 3.0 0:52.80 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:46.30 ksmd
top - 22:48:35 up 6 days, 6:13, 15 users, load average: 0.97, 0.58, 0.42
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.3%us, 5.6%sy, 0.0%ni, 91.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9591036k used, 2737584k free, 916836k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5105076k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 362m 1420 S 100 3.0 0:55.81 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:46.43 ksmd
top - 22:48:38 up 6 days, 6:13, 15 users, load average: 0.97, 0.58, 0.42
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 3.5%us, 6.2%sy, 0.0%ni, 90.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9589300k used, 2739320k free, 916840k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5105144k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 362m 1424 S 100 3.0 0:58.82 qemu-system-x86
116 root 25 5 0 0 0 S 5 0.0 124:46.57 ksmd
top - 22:48:41 up 6 days, 6:13, 15 users, load average: 0.97, 0.59, 0.42
Tasks: 415 total, 1 running, 410 sleeping, 0 stopped, 4 zombie
Cpu(s): 2.6%us, 5.3%sy, 0.0%ni, 92.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12328620k total, 9591912k used, 2736708k free, 916840k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5105220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 363m 1424 S 100 3.0 1:01.84 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:46.70 ksmd
top - 22:48:44 up 6 days, 6:13, 15 users, load average: 0.97, 0.59, 0.42
Tasks: 414 total, 1 running, 409 sleeping, 0 stopped, 4 zombie
Cpu(s): 2.8%us, 5.5%sy, 0.0%ni, 91.5%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 12328620k total, 9591472k used, 2737148k free, 916844k buffers
Swap: 36110328k total, 105984k used, 36004344k free, 5105288k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28185 root 20 0 1109m 363m 1424 S 100 3.0 1:04.84 qemu-system-x86
116 root 25 5 0 0 0 S 4 0.0 124:46.81 ksmd
[-- Attachment #4: vmstat_out --]
[-- Type: application/octet-stream, Size: 6947 bytes --]
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 103 3117 895 4870 0 0 8 34 6 4 2 1 97 0
0 0 103 3117 895 4870 0 0 0 0 912 2723 1 0 99 0
1 0 103 3104 895 4870 0 0 96 104 1192 3972 1 1 98 0
2 0 103 3076 895 4873 0 0 2212 16 2969 6077 5 2 91 2
1 0 103 3033 895 4888 0 0 16008 100 3065 7694 2 1 93 4
0 1 103 2912 895 4940 0 0 52520 0 3798 8854 1 3 91 5
1 1 103 2816 895 4974 0 0 34800 0 4838 9889 3 3 87 7
1 0 103 2779 895 4975 0 0 1420 0 2192 3577 5 4 91 0
1 0 103 2785 895 4975 0 0 0 32 11019 3286 3 6 90 0
1 0 103 2784 895 4975 0 0 0 0 2881 3354 3 6 90 0
1 0 103 2787 895 4975 0 0 0 12 4704 3156 2 5 93 0
1 0 103 2746 895 4976 0 0 260 0 2799 3655 4 5 91 0
1 0 103 2745 895 4975 0 0 0 0 3581 3117 3 6 90 0
1 0 103 2743 895 4975 0 0 0 0 2149 3192 3 6 91 0
2 0 103 2742 895 4976 0 0 80 0 2315 3702 3 6 91 0
1 0 103 2741 895 4976 0 0 24 164 2270 3509 4 9 87 1
1 0 103 2740 895 4976 0 0 64 0 2172 3183 2 8 89 0
1 0 103 2740 895 4976 0 0 0 0 2257 3008 3 9 88 0
1 0 103 2740 895 4976 0 0 0 108 2185 3181 3 9 88 0
1 0 103 2728 895 4976 0 0 0 0 2265 3234 3 9 88 0
1 0 103 2728 895 4976 0 0 0 0 2276 3321 3 7 90 0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 103 2730 895 4976 0 0 0 16 3387 3269 4 6 90 0
1 0 103 2732 895 4976 0 0 0 0 3469 3040 2 4 94 0
2 0 103 2715 895 4976 0 0 0 52 15570 3165 4 5 91 0
1 0 103 2709 895 4976 0 0 0 0 7452 3158 3 6 90 0
1 0 103 2717 895 4976 0 0 0 0 11550 3332 3 7 90 0
4 0 103 2720 895 4976 0 0 0 32 9101 3270 4 8 86 1
1 0 103 2715 895 4976 0 0 356 72 2412 3726 4 6 90 0
1 0 103 2714 895 4976 0 0 0 0 2144 3243 3 6 91 0
1 0 103 2714 895 4976 0 0 0 0 2177 3340 3 7 90 0
4 0 103 2714 895 4976 0 0 0 0 2203 3366 4 5 91 0
1 0 103 2689 895 4981 0 0 4328 0 2969 4532 5 4 89 2
2 2 103 2681 895 4984 0 0 2060 292 2577 4326 5 6 88 2
1 0 103 2681 895 4983 0 0 0 0 2178 2967 3 8 88 1
1 0 103 2681 895 4983 0 0 0 0 2621 3128 3 8 90 0
1 0 103 2681 895 4984 0 0 0 0 2160 2995 4 5 91 0
1 0 103 2684 895 4984 0 0 296 0 6176 3188 4 5 90 0
1 0 103 2682 895 4984 0 0 0 0 2360 3010 3 8 89 0
1 0 103 2680 895 4984 0 0 0 28 3996 3842 3 6 91 0
1 0 103 2681 895 4984 0 0 0 0 4611 3144 3 8 89 0
2 0 103 2672 895 4984 0 0 352 0 3185 3474 4 7 90 0
1 0 103 2673 895 4984 0 0 0 0 3827 3751 4 8 89 0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 103 2674 895 4984 0 0 0 0 3228 3244 3 5 92 0
1 0 103 2674 895 4984 0 0 0 40 2467 3368 3 6 90 1
1 0 103 2671 895 4984 0 0 76 0 2221 3323 4 8 89 0
1 0 103 2671 895 4985 0 0 0 0 2166 2998 4 10 87 0
1 0 103 2671 895 4985 0 0 0 0 2206 3094 3 8 89 0
2 0 103 2671 895 4985 0 0 0 120 2203 2950 3 10 87 0
1 0 103 2670 895 4985 0 0 0 0 2209 3023 4 8 88 0
1 0 103 2670 895 4985 0 0 0 0 2132 3017 3 10 87 0
2 0 103 2670 895 4985 0 0 0 32 2197 3011 4 10 86 0
1 0 103 2670 895 4985 0 0 0 0 2243 3150 3 9 87 0
2 0 103 2670 895 4985 0 0 0 424 2290 3387 3 6 91 1
1 0 103 2670 895 4985 0 0 0 0 2179 3046 2 10 87 0
1 0 103 2670 895 4985 0 0 0 0 2846 3043 3 9 87 0
1 0 103 2671 895 4985 0 0 0 0 4519 3372 4 8 89 0
1 0 103 2669 895 4985 0 0 0 24 4322 3188 4 9 87 0
2 0 103 2671 895 4985 0 0 0 0 4431 3112 4 10 87 0
1 0 103 2677 895 4985 0 0 0 0 10070 3146 3 6 92 0
2 0 103 2678 895 4985 0 0 0 0 3703 3152 3 6 91 0
1 0 103 2673 895 4985 0 0 0 0 2193 3057 4 5 91 0
1 0 103 2674 895 4985 0 0 0 0 3140 3269 4 6 90 0
1 0 103 2674 895 4985 0 0 0 108 3195 3290 4 6 90 0
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 103 2675 895 4985 0 0 0 0 2130 3109 3 6 91 0
1 0 103 2672 895 4985 0 0 0 0 2120 3183 3 3 94 0
1 0 103 2672 895 4985 0 0 0 0 2145 3170 3 6 91 0
1 0 103 2672 895 4985 0 0 0 0 2143 3291 2 7 91 0
1 0 103 2672 895 4985 0 0 0 0 2164 3201 3 7 91 0
2 0 103 2672 895 4985 0 0 0 12 2190 3247 4 5 90 0
1 0 103 2672 895 4985 0 0 0 0 2192 3285 3 7 90 0
1 0 103 2673 895 4985 0 0 0 0 2310 3706 2 4 94 0
1 0 103 2674 895 4985 0 0 0 0 3044 3842 2 8 90 0
1 0 103 2674 895 4985 0 0 0 0 3408 3153 3 6 91 0
1 0 103 2677 895 4985 0 0 0 0 5022 3002 3 7 90 0
1 0 103 2682 895 4985 0 0 0 0 4326 3294 3 7 90 0
1 0 103 2683 895 4985 0 0 0 0 2806 3076 3 7 90 0
1 0 103 2681 895 4985 0 0 0 0 10086 2976 3 5 92 0
2 0 103 2681 895 4985 0 0 184 104 2292 3115 3 7 90 0
1 0 103 2681 895 4985 0 0 0 0 2082 2935 3 7 90 0
1 0 103 2681 895 4985 0 0 0 24 2147 3012 3 7 89 0
1 0 103 2679 895 4985 0 0 0 32 2252 3082 5 8 87 0
* Re: Running KVM inside a chroot
2011-10-17 6:10 ` Jorge Lucangeli Obes
@ 2011-10-17 9:56 ` Avi Kivity
2011-10-17 16:37 ` Jorge Lucangeli Obes
0 siblings, 1 reply; 21+ messages in thread
From: Avi Kivity @ 2011-10-17 9:56 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/17/2011 08:10 AM, Jorge Lucangeli Obes wrote:
> > What do top/vmstat/kvm_stat say?
>
> I'm attaching the output of the three commands during a test run
> launched from inside the chroot, which was slow as usual. I didn't see
> anything too weird on top/vmstat, though I found it odd that once the
> VM had booted Chromium OS, QEMU still ate 100% of one core. vmstat
> didn't show anything strange, QEMU taking up the memory it's supposed
> to. What I don't quite know how to interpret is the output of
> kvm_stat, hence the attachment.
>
>
kvm_stat shows insane instruction emulation counts. Please post a trace
as described in http://www.linux-kvm.org/page/Tracing.
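Roughly, per that page (run on the host, outside the chroot; the
10-second window is just an example):
$ trace-cmd record -e kvm sleep 10   # record all kvm:* events while 'sleep' runs
$ trace-cmd report > kvm-trace.txt   # decodes trace.dat; needs the kvm plugin installed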
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-17 9:56 ` Avi Kivity
@ 2011-10-17 16:37 ` Jorge Lucangeli Obes
2011-10-18 10:29 ` Avi Kivity
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-17 16:37 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
On Mon, Oct 17, 2011 at 2:56 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/17/2011 08:10 AM, Jorge Lucangeli Obes wrote:
>> > What do top/vmstat/kvm_stat say?
>>
>> I'm attaching the output of the three commands during a test run
>> launched from inside the chroot, which was slow as usual. I didn't see
>> anything too weird on top/vmstat, though I found it odd that once the
>> VM had booted Chromium OS, QEMU still ate 100% of one core. vmstat
>> didn't show anything strange, QEMU taking up the memory it's supposed
>> to. What I don't quite know how to interpret is the output of
>> kvm_stat, hence the attachment.
>>
>>
>
> kvm_stat shows insane instruction emulation counts. Please post a trace
> as described in http://www.linux-kvm.org/page/Tracing.
I've uploaded a bzip'd trace here:
https://docs.google.com/leaf?id=0B78o7gMWkuFeMzM3MWM5NmUtYTY3My00ZDkxLTljZmUtYjRhMWRhOWVjZTZh&hl=en_US
Almost all the exits show "[FAILED TO PARSE]"; would this be a problem?
Thanks!
Jorge
* Re: Running KVM inside a chroot
2011-10-17 16:37 ` Jorge Lucangeli Obes
@ 2011-10-18 10:29 ` Avi Kivity
2011-10-18 16:43 ` Jorge Lucangeli Obes
2011-10-19 15:38 ` David Ahern
0 siblings, 2 replies; 21+ messages in thread
From: Avi Kivity @ 2011-10-18 10:29 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/17/2011 06:37 PM, Jorge Lucangeli Obes wrote:
> On Mon, Oct 17, 2011 at 2:56 AM, Avi Kivity <avi@redhat.com> wrote:
> > On 10/17/2011 08:10 AM, Jorge Lucangeli Obes wrote:
> >> > What do top/vmstat/kvm_stat say?
> >>
> >> I'm attaching the output of the three commands during a test run
> >> launched from inside the chroot, which was slow as usual. I didn't see
> >> anything too weird on top/vmstat, though I found it odd that once the
> >> VM had booted Chromium OS, QEMU still ate 100% of one core. vmstat
> >> didn't show anything strange, QEMU taking up the memory it's supposed
> >> to. What I don't quite know how to interpret is the output of
> >> kvm_stat, hence the attachment.
> >>
> >>
> >
> > kvm_stat shows insane instruction emulation counts. Please post a trace
> > as described in http://www.linux-kvm.org/page/Tracing.
>
> I've uploaded a bzip'd trace here:
>
> https://docs.google.com/leaf?id=0B78o7gMWkuFeMzM3MWM5NmUtYTY3My00ZDkxLTljZmUtYjRhMWRhOWVjZTZh&hl=en_US
>
> Almost all the exits show "[FAILED TO PARSE]"; would this be a problem?
>
Did you 'make install' trace-cmd? Run it from outside the chroot.
You're showing me a trace of the boot process; please start the trace
after the guest is idle (but still consuming lots of CPU).
What host kernel version are you running?
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: Running KVM inside a chroot
2011-10-18 10:29 ` Avi Kivity
@ 2011-10-18 16:43 ` Jorge Lucangeli Obes
2011-10-18 17:28 ` Avi Kivity
2011-10-19 15:38 ` David Ahern
1 sibling, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-18 16:43 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
On Tue, Oct 18, 2011 at 3:29 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/17/2011 06:37 PM, Jorge Lucangeli Obes wrote:
>> On Mon, Oct 17, 2011 at 2:56 AM, Avi Kivity <avi@redhat.com> wrote:
>> > On 10/17/2011 08:10 AM, Jorge Lucangeli Obes wrote:
>> >> > What do top/vmstat/kvm_stat say?
>> >>
>> >> I'm attaching the output of the three commands during a test run
>> >> launched from inside the chroot, which was slow as usual. I didn't see
>> >> anything too weird on top/vmstat, though I found it odd that once the
>> >> VM had booted Chromium OS, QEMU still ate 100% of one core. vmstat
>> >> didn't show anything strange, QEMU taking up the memory it's supposed
>> >> to. What I don't quite know how to interpret is the output of
>> >> kvm_stat, hence the attachment.
>> >>
>> >>
>> >
>> > kvm_stat shows insane instruction emulation counts. Please post a trace
>> > as described in http://www.linux-kvm.org/page/Tracing.
>>
>> I've uploaded a bzip'd trace here:
>>
>> https://docs.google.com/leaf?id=0B78o7gMWkuFeMzM3MWM5NmUtYTY3My00ZDkxLTljZmUtYjRhMWRhOWVjZTZh&hl=en_US
>>
>> Almost all the exits show "[FAILED TO PARSE]"; would this be a problem?
>>
>
> Did you 'make install' trace-cmd? Run it from outside the chroot.
Yes:
$ which trace-cmd
/usr/local/bin/trace-cmd
I ran it from outside the chroot.
> You're showing me a trace of the boot process; please start the trace
> after the guest is idle (but still consuming lots of CPU).
Done. Trace uploaded to:
https://docs.google.com/leaf?id=0B78o7gMWkuFeYzUzODViMjUtNzliNy00ODc5LWIwY2YtOGYyMTI3MzMxNjI5&hl=en_US
However, running trace-cmd gives:
$ trace-cmd report trace_report
error reading header for trace_report
jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
/usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
ud_translate_att
trace-cmd: No such file or directory
function ftrace_print_symbols_seq not defined
failed to read event print fmt for kvm_nested_vmexit_inject
function ftrace_print_symbols_seq not defined
failed to read event print fmt for kvm_nested_vmexit
function ftrace_print_symbols_seq not defined
failed to read event print fmt for kvm_exit
bad op token {
failed to read event print fmt for kvm_emulate_insn
I recompiled both udis86 and trace-cmd, and the error persisted.
> What host kernel version are you running?
$ uname -a
Linux tegan 2.6.38.8 #2 SMP Wed Sep 14 13:43:05 PDT 2011 x86_64 GNU/Linux
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-10-18 16:43 ` Jorge Lucangeli Obes
@ 2011-10-18 17:28 ` Avi Kivity
2011-10-18 17:39 ` Jorge Lucangeli Obes
0 siblings, 1 reply; 21+ messages in thread
From: Avi Kivity @ 2011-10-18 17:28 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/18/2011 06:43 PM, Jorge Lucangeli Obes wrote:
> https://docs.google.com/leaf?id=0B78o7gMWkuFeYzUzODViMjUtNzliNy00ODc5LWIwY2YtOGYyMTI3MzMxNjI5&hl=en_US
Dumping a lot of junk to the display. What's the guest doing?
> However, running trace-cmd gives:
>
> $ trace-cmd report trace_report
> error reading header for trace_report
> jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
> cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
> /usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
> ud_translate_att
>
Looks like udis86 isn't loaded correctly; is it installed in /usr/lib[64]?
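Something like this should tell (paths are illustrative):
$ ldconfig -p | grep udis86   # is a shared libudis86 known to the loader?
$ ls /usr/lib*/libudis86.so* /usr/local/lib/libudis86.so* 2>/dev/null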
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-18 17:28 ` Avi Kivity
@ 2011-10-18 17:39 ` Jorge Lucangeli Obes
2011-10-18 17:46 ` Avi Kivity
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-18 17:39 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
On Tue, Oct 18, 2011 at 10:28 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/18/2011 06:43 PM, Jorge Lucangeli Obes wrote:
>> https://docs.google.com/leaf?id=0B78o7gMWkuFeYzUzODViMjUtNzliNy00ODc5LWIwY2YtOGYyMTI3MzMxNjI5&hl=en_US
>
> Dumping a lot of junk to the display. What's the guest doing?
This trace is taken while Chromium OS presents the login screen. The
mouse pointer is almost completely unresponsive (i.e. it's impossible
to click on the text areas for username and password).
The really weird thing is that the same screen is completely fine
outside of the chroot.
>> However, running trace-cmd gives:
>>
>> $ trace-cmd report trace_report
>> error reading header for trace_report
>> jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
>> cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
>> /usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
>> ud_translate_att
>>
>
> Looks like udis86 isn't loaded correctly; is it installed in /usr/lib[64]?
$ ls /usr/local/lib/libudis*
/usr/local/lib/libudis86.a /usr/local/lib/libudis86.la
Does "/usr/local/lib" vs "/usr/lib" make a difference?
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-10-18 17:39 ` Jorge Lucangeli Obes
@ 2011-10-18 17:46 ` Avi Kivity
2011-10-20 3:30 ` Jorge Lucangeli Obes
0 siblings, 1 reply; 21+ messages in thread
From: Avi Kivity @ 2011-10-18 17:46 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/18/2011 07:39 PM, Jorge Lucangeli Obes wrote:
> On Tue, Oct 18, 2011 at 10:28 AM, Avi Kivity <avi@redhat.com> wrote:
> > On 10/18/2011 06:43 PM, Jorge Lucangeli Obes wrote:
> >> https://docs.google.com/leaf?id=0B78o7gMWkuFeYzUzODViMjUtNzliNy00ODc5LWIwY2YtOGYyMTI3MzMxNjI5&hl=en_US
> >
> > Dumping a lot of junk to the display. What's the guest doing?
>
> This trace is taken while Chromium OS presents the login screen. The
> mouse pointer is almost completely unresponsive (i.e. it's impossible
> to click on the text areas for username and password).
>
> The really weird thing is that the same screen is completely fine
> outside of the chroot.
Perhaps you're missing the VGA BIOS? Or maybe you have different versions?
Please check /usr/local/share/qemu or the equivalent for your system.
>
> >> However, running trace-cmd gives:
> >>
> >> $ trace-cmd report trace_report
> >> error reading header for trace_report
> >> jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
> >> cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
> >> /usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
> >> ud_translate_att
> >>
> >
> > Looks like udis86 isn't loaded correctly; is it installed in /usr/lib[64]?
>
> $ ls /usr/local/lib/libudis*
> /usr/local/lib/libudis86.a /usr/local/lib/libudis86.la
>
> Does "/usr/local/lib" vs "/usr/lib" make a difference?
Shouldn't. I have a .so instead of a .a; maybe trace-cmd's Makefile isn't
prepared for static libraries?
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-18 10:29 ` Avi Kivity
2011-10-18 16:43 ` Jorge Lucangeli Obes
@ 2011-10-19 15:38 ` David Ahern
1 sibling, 0 replies; 21+ messages in thread
From: David Ahern @ 2011-10-19 15:38 UTC (permalink / raw)
To: Avi Kivity, Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/18/2011 04:29 AM, Avi Kivity wrote:
>>
>> Almost all the exits show "[FAILED TO PARSE]", would this be a problem?
>>
>
> Did you 'make install' trace-cmd? Run it from outside the chroot.
You need to install the plugins: make install_plugins
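Roughly, from the trace-cmd source tree (the install prefix may differ
on your system):

$ make
$ sudo make install
$ sudo make install_plugins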
David
* Re: Running KVM inside a chroot
2011-10-18 17:46 ` Avi Kivity
@ 2011-10-20 3:30 ` Jorge Lucangeli Obes
2011-10-30 14:36 ` Avi Kivity
2011-10-30 14:41 ` Avi Kivity
0 siblings, 2 replies; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-10-20 3:30 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
On Tue, Oct 18, 2011 at 10:46 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/18/2011 07:39 PM, Jorge Lucangeli Obes wrote:
>> On Tue, Oct 18, 2011 at 10:28 AM, Avi Kivity <avi@redhat.com> wrote:
>> > On 10/18/2011 06:43 PM, Jorge Lucangeli Obes wrote:
>> >> https://docs.google.com/leaf?id=0B78o7gMWkuFeYzUzODViMjUtNzliNy00ODc5LWIwY2YtOGYyMTI3MzMxNjI5&hl=en_US
>> >
>> > Dumping a lot of junk to the display. What's the guest doing?
>>
>> This trace is taken while Chromium OS presents the login screen. The
>> mouse pointer is almost completely unresponsive (i.e. it's impossible
>> to click on the text areas for username and password).
>>
>> The really weird thing is that the same screen is completely fine
>> outside of the chroot.
>
> Perhaps you're missing the vga bios? Or maybe you have different versions?
>
> Please check /usr/local/share/qemu or the equivalent for your system.
This is inside the chroot:
$ ls /usr/share/qemu/
bamboo.dtb keymaps openbios-ppc petalogix-s3adsp1800.dtb
pxe-i82559er.bin pxe-rtl8139.bin vgabios.bin
bios.bin linuxboot.bin openbios-sparc32 ppc_rom.bin
pxe-ne2k_pci.bin pxe-virtio.bin vgabios-cirrus.bin
extboot.bin multiboot.bin openbios-sparc64 pxe-e1000.bin
pxe-pcnet.bin vapic.bin video.x
The VGA bios seems to be there.
>>
>> >> However, running trace-cmd gives:
>> >>
>> >> $ trace-cmd report trace_report
>> >> error reading header for trace_report
>> >> jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
>> >> cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
>> >> /usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
>> >> ud_translate_att
>> >>
>> >
>> > Looks like udis86 isn't loaded correctly, is it installed in /usr/lib[64]?
>>
>> $ ls /usr/local/lib/libudis*
>> /usr/local/lib/libudis86.a /usr/local/lib/libudis86.la
>>
>> Does "/usr/local/lib" vs "/usr/lib" make a difference?
>
> Shouldn't. I have a .so instead of .a; maybe trace-cmd's Makefile isn't
> prepared for static libraries?
'configure' doesn't seem to enable building shared libs by default. I
had to pass '--enable-shared'. Once the shared libs were installed, I
ran 'ldconfig' and the problem was solved.
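For reference, the sequence was roughly this (the source directory name
is from my setup):

$ cd udis86
$ ./configure --enable-shared
$ make && sudo make install   # installs libudis86.so under /usr/local/lib
$ sudo ldconfig               # refresh the runtime linker cache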
I've uploaded a new version of the trace report, now generated without
any errors. I see lots of:

kvm_exit: reason EPT_VIOLATION

But maybe that's expected.
https://docs.google.com/leaf?id=0B78o7gMWkuFeNmJlYzNiNWYtMGQ4MS00NzhiLWIyNTMtY2NlNWEzNzQ5NGYx&hl=en_US
I'll get a trace of the VM outside the chroot to see if there are
noticeable differences.
Could ksmd have anything to do with this? The daemon is running on my host.
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-10-20 3:30 ` Jorge Lucangeli Obes
@ 2011-10-30 14:36 ` Avi Kivity
2011-10-30 14:41 ` Avi Kivity
1 sibling, 0 replies; 21+ messages in thread
From: Avi Kivity @ 2011-10-30 14:36 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/20/2011 05:30 AM, Jorge Lucangeli Obes wrote:
> >>
> >> >> However, running trace-cmd gives:
> >> >>
> >> >> $ trace-cmd report trace_report
> >> >> error reading header for trace_report
> >> >> jorgelo@tegan:~/local$ trace-cmd report trace.dat > trace_report2
> >> >> cound not load plugin '/usr/local/share/trace-cmd/plugins/plugin_kvm.so'
> >> >> /usr/local/share/trace-cmd/plugins/plugin_kvm.so: undefined symbol:
> >> >> ud_translate_att
> >> >>
> >> >
> >> > Looks like udis86 isn't loaded correctly, is it installed in /usr/lib[64]?
> >>
> >> $ ls /usr/local/lib/libudis*
> >> /usr/local/lib/libudis86.a /usr/local/lib/libudis86.la
> >>
> >> Does "/usr/local/lib" vs "/usr/lib" make a difference?
> >
> > Shouldn't. I have a .so instead of .a; maybe trace-cmd's Makefile isn't
> > prepared for static libraries?
>
> 'configure' doesn't seem to enable building shared libs by default. I
> had to pass '--enable-shared'. Once the shared libs were installed, I
> ran 'ldconfig' and the problem was solved.
Ok, great.
> I've uploaded a new version
> of the trace report, now generated without any errors. I see lots of:
>
> kvm_exit: reason EPT_VIOLATION
>
> But maybe that's expected.
Yes, completely normal.
>
> https://docs.google.com/leaf?id=0B78o7gMWkuFeNmJlYzNiNWYtMGQ4MS00NzhiLWIyNTMtY2NlNWEzNzQ5NGYx&hl=en_US
>
> I'll get a trace of the VM outside the chroot to see if there are
> noticeable differences.
>
> Could ksmd have anything to do with this? The daemon is running on my host.
>
It shouldn't, but worth trying to disable it.
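On the host, something like:

# echo 0 > /sys/kernel/mm/ksm/run

should stop it.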
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-20 3:30 ` Jorge Lucangeli Obes
2011-10-30 14:36 ` Avi Kivity
@ 2011-10-30 14:41 ` Avi Kivity
2011-11-01 16:06 ` Jorge Lucangeli Obes
1 sibling, 1 reply; 21+ messages in thread
From: Avi Kivity @ 2011-10-30 14:41 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Alexander Graf, kvm
On 10/20/2011 05:30 AM, Jorge Lucangeli Obes wrote:
> https://docs.google.com/leaf?id=0B78o7gMWkuFeNmJlYzNiNWYtMGQ4MS00NzhiLWIyNTMtY2NlNWEzNzQ5NGYx&hl=en_US
>
>
The logs show lots of accesses to the vga region at 0xa0000 instead of
the linear framebuffer. Please diff the Xorg logs in the guest for the
working and non-working cases (and post them, too).
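Something along these lines should do it (the exact timestamp format is
an assumption):

$ sed 's/^\[ *[0-9.]*\] //' Xorg.0.log.good > good.log
$ sed 's/^\[ *[0-9.]*\] //' Xorg.0.log.bad > bad.log
$ diff -u good.log bad.log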
--
error compiling committee.c: too many arguments to function
* Re: Running KVM inside a chroot
2011-10-30 14:41 ` Avi Kivity
@ 2011-11-01 16:06 ` Jorge Lucangeli Obes
2011-11-01 16:29 ` Gerd Hoffmann
0 siblings, 1 reply; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-11-01 16:06 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm
On Sun, Oct 30, 2011 at 7:41 AM, Avi Kivity <avi@redhat.com> wrote:
> On 10/20/2011 05:30 AM, Jorge Lucangeli Obes wrote:
>> https://docs.google.com/leaf?id=0B78o7gMWkuFeNmJlYzNiNWYtMGQ4MS00NzhiLWIyNTMtY2NlNWEzNzQ5NGYx&hl=en_US
>>
>>
>
> The logs show lots of accesses to the vga region at 0xa0000 instead of
> the linear framebuffer. Please diff the Xorg logs in the guest for the
> working and non-working cases (and post them, too).
I've uploaded the logs here:
https://docs.google.com/open?id=0B78o7gMWkuFeY2FkNjllNjgtNDQyZC00NDU5LWFjODctNzMzNzU3NjNjODkz
'Xorg.0.log.outside' is the result of running KVM outside the chroot,
which is fast as normal. 'Xorg.0.log.inside' is the result of running
KVM inside the chroot, which is slow. 'diff' is the diff obtained by
first removing the timestamps from the log files.
I think you're right about the linear framebuffer, Avi. The last part
of the diff shows ('<' is the fast run outside the chroot, '>' the slow
run inside):
1377c2113
< (II) VESA(0): VESA VBE Total Mem: 8192 kB
---
> (II) VESA(0): VESA VBE Total Mem: 16384 kB
1383,1385c2119,2123
< (II) VESA(0): virtual address = 0x75f11000,
< physical address = 0xe0000000, size = 8388608
< (II) VESA(0): Setting up VESA Mode 0x145 (1280x1024)
---
> (II) VESA(0): virtual address = (nil), <========== HERE
> physical address = 0xe0000000, size = 16777216
> (II) VESA(0): virtual address = 0x76e6a000,
> physical address = 0xa0000, size = 65536
> (II) VESA(0): Setting up VESA Mode 0x186 (1680x1050)
1409c2147
FWIW:
(outside-OK) $ qemu-system-x86_64 -version
QEMU emulator version 0.15.0 (qemu-kvm-0.15.0)
(inside-slow) $ qemu-system-x86_64 -version
QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
However, running outside the chroot was also OK with 0.12.3.
What could be causing this?
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-11-01 16:06 ` Jorge Lucangeli Obes
@ 2011-11-01 16:29 ` Gerd Hoffmann
2011-11-01 18:10 ` Jorge Lucangeli Obes
0 siblings, 1 reply; 21+ messages in thread
From: Gerd Hoffmann @ 2011-11-01 16:29 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Avi Kivity, Alexander Graf, kvm
Hi,
> (outside-OK) $ qemu-system-x86_64 -version
> QEMU emulator version 0.15.0 (qemu-kvm-0.15.0)
>
> (inside-slow) $ qemu-system-x86_64 -version
> QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
>
> However, running outside the chroot was also OK with 0.12.3.
>
> What could be causing this?
The vgabios version, most likely. The latest vgabios as shipped with
0.15 doesn't work with 0.12 IIRC; make sure you have the old one as
shipped by qemu 0.12.
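You can point qemu at a specific ROM directory to test, e.g. (path is
just an example):

$ qemu-system-x86_64 -L /path/to/qemu-0.12/pc-bios <rest of your command line>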
cheers,
Gerd
* Re: Running KVM inside a chroot
2011-11-01 16:29 ` Gerd Hoffmann
@ 2011-11-01 18:10 ` Jorge Lucangeli Obes
2011-11-02 0:12 ` Jorge Lucangeli Obes
2011-11-02 8:36 ` Gerd Hoffmann
0 siblings, 2 replies; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-11-01 18:10 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Avi Kivity, Alexander Graf, kvm
On Tue, Nov 1, 2011 at 9:29 AM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> Hi,
>
>> (outside-OK) $ qemu-system-x86_64 -version
>> QEMU emulator version 0.15.0 (qemu-kvm-0.15.0)
>>
>> (inside-slow) $ qemu-system-x86_64 -version
>> QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
>>
>> However, running outside the chroot was also OK with 0.12.3.
>>
>> What could be causing this?
>
> The vgabios version, most likely. The latest vgabios as shipped with
> 0.15 doesn't work with 0.12 IIRC; make sure you have the old one as
> shipped by qemu 0.12.
OK, I'll check that, thanks. However, KVM inside the chroot was
already slow when the version outside the chroot was 0.12.3 and the
version inside was 0.12.5.
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-11-01 18:10 ` Jorge Lucangeli Obes
@ 2011-11-02 0:12 ` Jorge Lucangeli Obes
2011-11-02 8:36 ` Gerd Hoffmann
1 sibling, 0 replies; 21+ messages in thread
From: Jorge Lucangeli Obes @ 2011-11-02 0:12 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Avi Kivity, Alexander Graf, kvm
On Tue, Nov 1, 2011 at 11:10 AM, Jorge Lucangeli Obes
<jorgelo@chromium.org> wrote:
> On Tue, Nov 1, 2011 at 9:29 AM, Gerd Hoffmann <kraxel@redhat.com> wrote:
>> Hi,
>>
>>> (outside-OK) $ qemu-system-x86_64 -version
>>> QEMU emulator version 0.15.0 (qemu-kvm-0.15.0)
>>>
>>> (inside-slow) $ qemu-system-x86_64 -version
>>> QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
>>>
>>> However, running outside the chroot was also OK with 0.12.3.
>>>
>>> What could be causing this?
>>
>> The vgabios version, most likely. The latest vgabios as shipped with
>> 0.15 doesn't work with 0.12 IIRC; make sure you have the old one as
>> shipped by qemu 0.12.
The version of vgabios seems to be correct. I downloaded 0.12.5 from
Sourceforge, and diff'ed vgabios.bin against the version inside the
chroot. Moreover, I started the VM with "-L <path>" to make sure that
the correct version was being used. The Xorg logs still showed "(II)
VESA(0): virtual address = (nil)".
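Concretely, the check was roughly this (paths from my setup, so treat
them as placeholders):

$ diff qemu-kvm-0.12.5/pc-bios/vgabios.bin /usr/share/qemu/vgabios.bin   # no output, i.e. identical
$ qemu-system-x86_64 -L qemu-kvm-0.12.5/pc-bios <usual options>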
Thanks,
Jorge
* Re: Running KVM inside a chroot
2011-11-01 18:10 ` Jorge Lucangeli Obes
2011-11-02 0:12 ` Jorge Lucangeli Obes
@ 2011-11-02 8:36 ` Gerd Hoffmann
1 sibling, 0 replies; 21+ messages in thread
From: Gerd Hoffmann @ 2011-11-02 8:36 UTC (permalink / raw)
To: Jorge Lucangeli Obes; +Cc: Avi Kivity, Alexander Graf, kvm
Hi,
> OK, I'll check that, thanks. However, KVM inside the chroot was
> already slow when the version outside the chroot was 0.12.3 and the
> version inside was 0.12.5.
That is strange. There are no code changes in the vga emulation between
0.12.3 and 0.12.5 ...
Which vga are you using? The default (cirrus)? Or another one?
Does vesafb (vga=0x317) work inside/outside the chroot?
What do the vesafb boot messages look like (dmesg | grep vesafb)?
What does /proc/iomem look like?
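I.e., append vga=0x317 to the guest kernel command line (in the guest's
bootloader config, or via -append if you start qemu with -kernel), then
inside the guest something like:

$ dmesg | grep vesafb
$ cat /proc/iomem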
cheers,
Gerd
Thread overview: 21+ messages
2011-10-12 18:49 Running KVM inside a chroot Jorge Lucangeli Obes
2011-10-12 19:55 ` Alexander Graf
[not found] ` <CAKYuF5TG4+5yaVZh9KX0wLOjjg2h01Maz-VOsr2u4BVHzE8i7g@mail.gmail.com>
2011-10-13 0:51 ` Jorge Lucangeli Obes
2011-10-16 16:23 ` Avi Kivity
2011-10-17 6:10 ` Jorge Lucangeli Obes
2011-10-17 9:56 ` Avi Kivity
2011-10-17 16:37 ` Jorge Lucangeli Obes
2011-10-18 10:29 ` Avi Kivity
2011-10-18 16:43 ` Jorge Lucangeli Obes
2011-10-18 17:28 ` Avi Kivity
2011-10-18 17:39 ` Jorge Lucangeli Obes
2011-10-18 17:46 ` Avi Kivity
2011-10-20 3:30 ` Jorge Lucangeli Obes
2011-10-30 14:36 ` Avi Kivity
2011-10-30 14:41 ` Avi Kivity
2011-11-01 16:06 ` Jorge Lucangeli Obes
2011-11-01 16:29 ` Gerd Hoffmann
2011-11-01 18:10 ` Jorge Lucangeli Obes
2011-11-02 0:12 ` Jorge Lucangeli Obes
2011-11-02 8:36 ` Gerd Hoffmann
2011-10-19 15:38 ` David Ahern