From: Chris Friesen <chris.friesen@windriver.com>
To: Amit Shah <amit.shah@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, armbru@redhat.com
Subject: Re: [Qemu-devel] virtio-serial-pci very expensive during live migration
Date: Thu, 8 May 2014 18:53:38 -0600
Message-ID: <536C2712.3030803@windriver.com>
In-Reply-To: <536BA579.10604@windriver.com>
On 05/08/2014 09:40 AM, Chris Friesen wrote:
> On 05/08/2014 07:47 AM, Amit Shah wrote:
>
>> Chris, I just tried a simple test this way:
>>
>> ./x86_64-softmmu/qemu-system-x86_64 -device virtio-serial-pci -device
>> virtserialport -S -monitor stdio -nographic
>>
>> and it didn't crash for me. This was with qemu.git. Perhaps you can
>> try in a similar way.
>
> I just tried it with the "stable-1.4" branch from upstream with my first
> patch added on.
> Anyway, it seems to boot up okay, which is better than what I was
> getting before.
Turns out I spoke too soon. With the patch applied, it boots, but if I
try to do a live migration, both the source and destination crash. This
happens on both the master branch and the stable-1.4 branch.
If I back out the patch, migration works fine. If I leave the patch in
and disable KVM acceleration, it also works fine.
I'm running the source as

  /tmp/qemu-system-x86_64-upstream -machine accel=kvm -m 1000 \
      test-cgcs-guest.img -device virtio-serial \
      -chardev socket,path=/tmp/foo,server,nowait,id=foo \
      -device virtserialport,chardev=foo,name=myfoo -monitor stdio

and the dest as

  /tmp/qemu-system-x86_64-upstream -machine accel=kvm -m 1000 \
      test-cgcs-guest.img -device virtio-serial \
      -chardev socket,path=/tmp/foo,server,nowait,id=foo \
      -device virtserialport,chardev=foo,name=myfoo -monitor stdio \
      -incoming tcp:0:4444

and I'm triggering the migration with

  migrate -d tcp:localhost:4444
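(Since -d makes the migrate command return immediately, the usual way to
poll it from the source monitor is

  (qemu) info migrate

which prints the current status and transfer stats -- though here the
source monitor disappears when the process crashes, so it doesn't get far.)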
On the dest side monitor I get:
(qemu) qemu: warning: error while loading state section id 3
load of migration failed
I managed to get gdb working, but it's not very helpful. With gdb
attached to the destination process I just get:
[Thread 0x7f760f6dc700 (LWP 15328) exited]
[Inferior 1 (process 15326) exited normally]
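One thing that might work better than attaching after the fact (just a
sketch -- it assumes the "error while loading state section" message is
printed from qemu_loadvm_state() in savevm.c, which is where I'd expect
it) is to start the destination under gdb and stop it before it exits:

  gdb --args /tmp/qemu-system-x86_64-upstream -machine accel=kvm -m 1000 \
      test-cgcs-guest.img -device virtio-serial \
      -chardev socket,path=/tmp/foo,server,nowait,id=foo \
      -device virtserialport,chardev=foo,name=myfoo \
      -incoming tcp:0:4444
  (gdb) break qemu_loadvm_state
  (gdb) break exit
  (gdb) run

(I dropped -monitor stdio there so the monitor doesn't fight with gdb over
the terminal.) Hitting the exit breakpoint after "load of migration failed"
should at least keep the process around long enough to get a backtrace
with "bt".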