qemu-devel.nongnu.org archive mirror
* [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
@ 2013-06-13  5:01 chandrashekar shastri
  2013-06-13  8:44 ` Stefan Hajnoczi
  0 siblings, 1 reply; 7+ messages in thread
From: chandrashekar shastri @ 2013-06-13  5:01 UTC (permalink / raw)
  To: qemu-devel, libvir-list, virt-tools-list

[-- Attachment #1: Type: text/plain, Size: 21844 bytes --]

Hi All,

We are testing upstream KVM with the kernel, QEMU, libvirt, and
virt-manager all built from source (git):

kernel version : 3.9.0+
qemu version : QEMU emulator version 1.5.0
libvirt version : 1.0.5
virt-install : 0.600.3

I have followed the steps below to test the "live migration w/o shared
storage" feature (the command sequence is consolidated right after the
list):

1. Created the destination image with "qemu-img create -f qcow2
vm.qcow2 12G" on the destination host, at the same path as on the
source.
2. Started the guest on the source.
3. Opened a VNC display to monitor the guest.
4. Initiated the migration with "virsh migrate --live rhel64-64
qemu+ssh://9.126.89.202/system --verbose --copy-storage-all".
5. It started copying the storage from source to destination (I
monitored the destination image continuously and it kept growing).
6. The guest on the destination stayed paused while it kept running on
the source.
7. At some point the VM on the source shut down, the VNC display showed
the error "Viewport:    write: Broken pipe (32)", and the VM on the
destination was left undefined.
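
For reference, a rough consolidation of the commands from steps 1 and 4
(a sketch of what I ran; it assumes the guest in step 2 was started
with a plain "virsh start"):

  # on the destination host: pre-create the target image
  qemu-img create -f qcow2 vm.qcow2 12G

  # on the source host: start the guest, then migrate with storage copy
  virsh start rhel64-64
  virsh migrate --live rhel64-64 qemu+ssh://9.126.89.202/system \
      --verbose --copy-storage-all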

Below is the libvirt debug log; please let me know your comments.

Debug log:
--------------

When the copy operation started:

2013-06-12 14:49:43.640+0000: 1696: info : libvirt version: 1.0.5
2013-06-12 14:49:43.640+0000: 1696: debug : virGlobalInit:439 : register drivers
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5cd6a0 name=Test
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering Test as driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNetworkDriver:616 : registering Test as network driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterInterfaceDriver:643 : registering Test as interface driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterStorageDriver:670 : registering Test as storage driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNodeDeviceDriver:697 : registering Test as device driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterSecretDriver:724 : registering Test as secret driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNWFilterDriver:751 : registering Test as network filter driver 0
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5ced60 name=OPENVZ
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering OPENVZ as driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5cf340 name=VMWARE
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering VMWARE as driver 2
2013-06-12 14:49:43.640+0000: 1696: debug : vboxRegister:131 : VBoxCGlueInit failed, using dummy driver
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5cf920 name=VBOX
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering VBOX as driver 3
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNetworkDriver:616 : registering VBOX as network driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterStorageDriver:670 : registering VBOX as storage driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5d3080 name=ESX
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering ESX as driver 4
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterInterfaceDriver:643 : registering ESX as interface driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNetworkDriver:616 : registering ESX as network driver 2
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterStorageDriver:670 : registering ESX as storage driver 2
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNodeDeviceDriver:697 : registering ESX as device driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterSecretDriver:724 : registering ESX as secret driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNWFilterDriver:751 : registering ESX as network filter driver 1
2013-06-12 14:49:43.640+0000: 1696: debug : parallelsRegister:2448 : Can't find prlctl command in the PATH env
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:769 : driver=0x7f2a6a5ce0a0 name=remote
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterDriver:781 : registering remote as driver 5
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterNetworkDriver:616 : registering remote as network driver 3
2013-06-12 14:49:43.640+0000: 1696: debug : virRegisterInterfaceDriver:643 : registering remote as interface driver 2
2013-06-12 14:49:43.641+0000: 1696: debug : virRegisterStorageDriver:670 : registering remote as storage driver 3
2013-06-12 14:49:43.641+0000: 1696: debug : virRegisterNodeDeviceDriver:697 : registering remote as device driver 2
2013-06-12 14:49:43.641+0000: 1696: debug : virRegisterSecretDriver:724 : registering remote as secret driver 2
2013-06-12 14:49:43.641+0000: 1696: debug : virRegisterNWFilterDriver:751 : registering remote as network filter driver 2
2013-06-12 14:49:43.641+0000: 1696: debug : virEventRegisterDefaultImpl:230 : registering default event implementation
2013-06-12 14:49:43.641+0000: 1696: debug : virEventPollAddHandle:111 : Used 0 handle slots, adding at least 10 more
2013-06-12 14:49:43.641+0000: 1696: debug : virEventPollInterruptLocked:712 : Skip interrupt, 0 0
2013-06-12 14:49:43.641+0000: 1696: debug : virEventPollAddHandle:136 : EVENT_POLL_ADD_HANDLE: watch=1 fd=5 events=1 cb=0x7f2a6a0c3070 opaque=(nil) ff=(nil)
2013-06-12 14:49:43.641+0000: 1696: debug : virEventRegisterImpl:203 : addHandle=0x7f2a6a0c3e50 updateHandle=0x7f2a6a0c3cd0 removeHandle=0x7f2a6a0c3550 addTimeout=0x7f2a6a0c36e0 updateTimeout=0x7f2a6a0c38f0 removeTimeout=0x7f2a6a0c3ac0
2013-06-12 14:49:43.641+0000: 1696: debug : virConnectOpenAuth:1452 : name=(null), auth=0x7f2a6a5cd600, flags=0
2013-06-12 14:49:43.641+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b625450 classname=virConnect
2013-06-12 14:49:43.641+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b625130 classname=virConnectCloseCallbackData
2013-06-12 14:49:43.641+0000: 1697: debug : virEventRunDefaultImpl:270 : running default event implementation
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:49:43.641+0000: 1696: debug : virConnectGetConfigFile:993 : Loading config file '/usr/local/etc/libvirt/libvirt.conf'
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:49:43.641+0000: 1696: debug : virConfReadFile:767 : filename=/usr/local/etc/libvirt/libvirt.conf
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 1
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollMakePollFDs:393 : Prepare n=0 w=1, f=5 e=1 d=0
2013-06-12 14:49:43.641+0000: 1696: debug : virFileClose:73 : Closed fd 7
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollCalculateTimeout:332 : Calculate expiry of 0 timers
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollCalculateTimeout:361 : Timeout at 0 due in -1 ms
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1171 : no name, allowing driver auto-select
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 0 (Test) ...
2013-06-12 14:49:43.641+0000: 1697: debug : virEventPollRunOnce:629 : EVENT_POLL_RUN: nhandles=1 timeout=-1
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1219 : driver 0 Test returned DECLINED
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 1 (OPENVZ) ...
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1219 : driver 1 OPENVZ returned DECLINED
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 2 (VMWARE) ...
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1219 : driver 2 VMWARE returned DECLINED
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 3 (VBOX) ...
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1219 : driver 3 VBOX returned DECLINED
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 4 (ESX) ...
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1219 : driver 4 ESX returned DECLINED
2013-06-12 14:49:43.641+0000: 1696: debug : do_open:1213 : trying driver 5 (remote) ...
2013-06-12 14:49:43.641+0000: 1696: debug : remoteConnectOpen:985 : Auto-probe remote URI
2013-06-12 14:49:43.641+0000: 1696: debug : doRemoteOpen:594 : proceeding with name =
2013-06-12 14:49:43.641+0000: 1696: debug : doRemoteOpen:603 : Connecting with transport 1
2013-06-12 14:49:43.641+0000: 1696: debug : doRemoteOpen:688 : Proceeding with sockname /usr/local/var/run/libvirt/libvirt-sock
2013-06-12 14:49:43.642+0000: 1696: debug : virNetSocketNew:155 : localAddr=0x7fffcba10510 remoteAddr=0x7fffcba105a0 fd=7 errfd=-1 pid=0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b6282e0 classname=virNetSocket
2013-06-12 14:49:43.642+0000: 1696: debug : virNetSocketNew:205 : RPC_SOCKET_NEW: sock=0x7f2a6b6282e0 fd=7 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b628590 classname=virNetClient
2013-06-12 14:49:43.642+0000: 1696: debug : virNetClientNew:326 : RPC_CLIENT_NEW: client=0x7f2a6b628590 sock=0x7f2a6b6282e0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b628590
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b6282e0
2013-06-12 14:49:43.642+0000: 1696: debug : virEventPollInterruptLocked:716 : Interrupting
2013-06-12 14:49:43.642+0000: 1696: debug : virEventPollAddHandle:136 : EVENT_POLL_ADD_HANDLE: watch=2 fd=7 events=1 cb=0x7f2a6a1db2e0 opaque=0x7f2a6b6282e0 ff=0x7f2a6a1db3a0
2013-06-12 14:49:43.642+0000: 1696: debug : virKeepAliveNew:196 : client=0x7f2a6b628590, interval=-1, count=0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b628880 classname=virKeepAlive
2013-06-12 14:49:43.642+0000: 1696: debug : virKeepAliveNew:215 : RPC_KEEPALIVE_NEW: ka=0x7f2a6b628880 client=0x7f2a6b628590
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b628590
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b625130
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b6287a0 classname=virNetClientProgram
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b6284b0 classname=virNetClientProgram
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectNew:201 : OBJECT_NEW: obj=0x7f2a6b6280b0 classname=virNetClientProgram
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b6287a0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b6284b0
2013-06-12 14:49:43.642+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b6280b0
2013-06-12 14:49:43.642+0000: 1696: debug : doRemoteOpen:798 : Trying authentication
2013-06-12 14:49:43.642+0000: 1696: debug : virNetMessageNew:45 : msg=0x7f2a6b628db0 tracked=0
2013-06-12 14:49:43.642+0000: 1696: debug : virNetMessageEncodePayload:382 : Encode length as 28
2013-06-12 14:49:43.642+0000: 1696: debug : virNetClientSendInternal:1961 : RPC_CLIENT_MSG_TX_QUEUE: client=0x7f2a6b628590 len=28 prog=536903814 vers=1 proc=66 type=0 status=0 serial=0
2013-06-12 14:49:43.642+0000: 1696: debug : virNetClientCallNew:1914 : New call 0x7f2a6b624f30: msg=0x7f2a6b628db0, expectReply=1, nonBlock=0
2013-06-12 14:49:43.642+0000: 1696: debug : virNetClientIO:1721 : Outgoing message prog=536903814 version=1 serial=0 proc=66 type=0 length=28 dispatch=(nil)
2013-06-12 14:49:43.642+0000: 1696: debug : virNetClientIO:1780 : We have the buck head=0x7f2a6b624f30 call=0x7f2a6b624f30
2013-06-12 14:49:43.642+0000: 1696: debug : virEventPollUpdateHandle:147 : EVENT_POLL_UPDATE_HANDLE: watch=2 events=0
2013-06-12 14:49:43.642+0000: 1696: debug : virEventPollInterruptLocked:716 : Interrupting
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollRunOnce:640 : Poll got 1 event(s)
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchTimeouts:425 : Dispatch 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:470 : Dispatch 1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:484 : i=0 w=1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:498 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 2
2013-06-12 14:49:43.642+0000: 1697: debug : virEventRunDefaultImpl:270 : running default event implementation
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 2
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollMakePollFDs:393 : Prepare n=0 w=1, f=5 e=1 d=0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollMakePollFDs:393 : Prepare n=1 w=2, f=7 e=0 d=0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCalculateTimeout:332 : Calculate expiry of 0 timers
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCalculateTimeout:361 : Timeout at 0 due in -1 ms
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollRunOnce:629 : EVENT_POLL_RUN: nhandles=1 timeout=-1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollRunOnce:640 : Poll got 1 event(s)
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchTimeouts:425 : Dispatch 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:470 : Dispatch 1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:484 : i=0 w=1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollDispatchHandles:498 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 2
2013-06-12 14:49:43.642+0000: 1697: debug : virEventRunDefaultImpl:270 : running default event implementation
2013-06-12 14:49:43.642+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0

When the copy operation was stopped:


2013-06-12 14:59:33.443+0000: 1696: debug : virNetClientMarkClose:634 : client=0x7f2a6b628590, reason=3
2013-06-12 14:59:33.443+0000: 1696: debug : virEventPollRemoveHandle:180 : EVENT_POLL_REMOVE_HANDLE: watch=2
2013-06-12 14:59:33.443+0000: 1696: debug : virEventPollRemoveHandle:193 : mark delete 1 7
2013-06-12 14:59:33.443+0000: 1696: debug : virEventPollInterruptLocked:716 : Interrupting
2013-06-12 14:59:33.443+0000: 1696: debug : virNetClientIOEventLoopPassTheBuck:1427 : Giving up the buck (nil)
2013-06-12 14:59:33.443+0000: 1696: debug : virNetClientIOEventLoopPassTheBuck:1441 : No thread to pass the buck to
2013-06-12 14:59:33.443+0000: 1696: debug : virNetClientCloseLocked:647 : client=0x7f2a6b628590, sock=0x7f2a6b6282e0, reason=3
2013-06-12 14:59:33.443+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6282e0
2013-06-12 14:59:33.443+0000: 1696: debug : virObjectRef:295 : OBJECT_REF: obj=0x7f2a6b628590
2013-06-12 14:59:33.443+0000: 1696: debug : virKeepAliveStop:307 : RPC_KEEPALIVE_STOP: ka=0x7f2a6b628880 client=0x7f2a6b628590
2013-06-12 14:59:33.443+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b628880
2013-06-12 14:59:33.443+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b628880
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollRunOnce:640 : Poll got 1 event(s)
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollDispatchTimeouts:425 : Dispatch 0
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollDispatchHandles:470 : Dispatch 1
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollDispatchHandles:484 : i=0 w=1
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollDispatchHandles:498 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 2
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollCleanupHandles:577 : EVENT_POLL_PURGE_HANDLE: watch=2
2013-06-12 14:59:33.443+0000: 1697: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b628590
2013-06-12 14:59:33.443+0000: 1697: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6282e0
2013-06-12 14:59:33.443+0000: 1697: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b6282e0
2013-06-12 14:59:33.443+0000: 1697: debug : virNetSocketDispose:1004 : RPC_SOCKET_DISPOSE: sock=0x7f2a6b6282e0
2013-06-12 14:59:33.443+0000: 1697: debug : virEventPollRemoveHandle:180 : EVENT_POLL_REMOVE_HANDLE: watch=2
2013-06-12 14:59:33.443+0000: 1697: debug : virFileClose:73 : Closed fd 7
2013-06-12 14:59:33.459+0000: 1696: debug : virKeepAliveDispose:227 : RPC_KEEPALIVE_DISPOSE: ka=0x7f2a6b628880
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b628590
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b628590
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b628590
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b628590
2013-06-12 14:59:33.459+0000: 1696: debug : virNetClientDispose:600 : RPC_CLIENT_DISPOSE: client=0x7f2a6b628590
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b625130
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6287a0
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6284b0
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6280b0
2013-06-12 14:59:33.459+0000: 1696: debug : virFileClose:73 : Closed fd 9
2013-06-12 14:59:33.459+0000: 1697: debug : virEventRunDefaultImpl:270 : running default event implementation
2013-06-12 14:59:33.459+0000: 1696: debug : virFileClose:73 : Closed fd 8
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:59:33.459+0000: 1696: debug : virNetMessageClear:56 : msg=0x7f2a6b6285f8 nfds=0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6287a0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 1
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b6287a0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollMakePollFDs:393 : Prepare n=0 w=1, f=5 e=1 d=0
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6284b0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCalculateTimeout:332 : Calculate expiry of 0 timers
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b6284b0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCalculateTimeout:361 : Timeout at 0 due in -1 ms
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b6280b0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollRunOnce:629 : EVENT_POLL_RUN: nhandles=1 timeout=-1
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b6280b0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollRunOnce:640 : Poll got 1 event(s)
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollDispatchTimeouts:425 : Dispatch 0
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:258 : OBJECT_UNREF: obj=0x7f2a6b625130
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollDispatchHandles:470 : Dispatch 1
2013-06-12 14:59:33.459+0000: 1696: debug : virObjectUnref:260 : OBJECT_DISPOSE: obj=0x7f2a6b625130
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollDispatchHandles:484 : i=0 w=1
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollDispatchHandles:498 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupTimeouts:516 : Cleanup 0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupTimeouts:552 : Found 0 out of 0 timeout slots used, releasing 0
2013-06-12 14:59:33.459+0000: 1697: debug : virEventPollCleanupHandles:564 : Cleanup 1
2013-06-12 14:59:33.459+0000: 1696: debug : virEventPollAddTimeout:225 : Used 0 timeout slots, adding at least 10 more
2013-06-12 14:59:33.563+0000: 1696: debug : virEventPollInterruptLocked:712 : Skip interrupt, 0 139819907258112
2013-06-12 14:59:33.563+0000: 1696: debug : virEventPollAddTimeout:248 : EVENT_POLL_ADD_TIMEOUT: timer=1 frequency=0 cb=0x7f2a6a8198f0 opaque=(nil) ff=(nil)
2013-06-12 14:59:33.563+0000: 1696: debug : virEventPollRemoveTimeout:300 : EVENT_POLL_REMOVE_TIMEOUT: timer=1
2013-06-12 14:59:33.563+0000: 1696: debug : virEventPollInterruptLocked:712 : Skip interrupt, 0 139819907258112


Thanks,
Chandrashekar



* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13  5:01 [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running chandrashekar shastri
@ 2013-06-13  8:44 ` Stefan Hajnoczi
  2013-06-13 17:26   ` chandrashekar shastri
  2013-07-05  6:23   ` chandrashekar shastri
  0 siblings, 2 replies; 7+ messages in thread
From: Stefan Hajnoczi @ 2013-06-13  8:44 UTC (permalink / raw)
  To: chandrashekar shastri; +Cc: libvir-list, qemu-devel, virt-tools-list

On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
> We are testing the upstream KVM with :
> 
> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
> 
> kernel version : 3.9.0+
> qemu version : QEMU emulator version 1.5.0
> libvirt version : 1.0.5
> virt-install : 0.600.3
> 
> I have followed the below steps to test the "Live migration w/o
> shared storage" feature :
> 
> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
> destination host which is same as the source.
> 2. Started the guest on the source
> 3. Started the vncdisplay to monitor the guest
> 4. Initiated the migration "virsh migrate --live rhel64-64
> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"
> 5. It started the copying the storage from souce to destination
> (conitinously monitored it was growing)
> 6. Guest on the destination was paused and was running on the source
> 7. At some point the VM on the source shutdown and got an error on
> the vnc display as " Viewport:    write: Broken pipe (32)" and the
> VM on the destination was undefined.
> 
> Below is the libvirt debug log, please let me with your comments.
> 
> Debug log:
> --------------

What about /var/log/libvirt/qemu/rhel64-64.log?  That is the QEMU
command-line and stderr log.

Also, can you try without --copy-storage-all, just to see whether the
migration completes successfully?  The guest will act weird once it
migrates, since the destination disk is all zeroes, but it will isolate
the failure to --copy-storage-all.
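
For example, something along these lines (a rough sketch; log path as
above, domain name and destination URI taken from your report):

  # on the source host: QEMU command line and stderr for the domain
  tail -n 100 /var/log/libvirt/qemu/rhel64-64.log

  # plain live migration, without the storage copy
  virsh migrate --live rhel64-64 qemu+ssh://9.126.89.202/system --verbose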

Stefan


* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13  8:44 ` Stefan Hajnoczi
@ 2013-06-13 17:26   ` chandrashekar shastri
  2013-06-13 21:45     ` Paolo Bonzini
  2013-07-05  6:23   ` chandrashekar shastri
  1 sibling, 1 reply; 7+ messages in thread
From: chandrashekar shastri @ 2013-06-13 17:26 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: libvir-list, qemu-devel, virt-tools-list

On 06/13/2013 02:14 PM, Stefan Hajnoczi wrote:
> On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
>> We are testing the upstream KVM with :
>>
>> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
>>
>> kernel version : 3.9.0+
>> qemu version : QEMU emulator version 1.5.0
>> libvirt version : 1.0.5
>> virt-install : 0.600.3
>>
>> I have followed the below steps to test the "Live migration w/o
>> shared storage" feature :
>>
>> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
>> destination host which is same as the source.
>> 2. Started the guest on the source
>> 3. Started the vncdisplay to monitor the guest
>> 4. Initiated the migration "virsh migrate --live rhel64-64
>> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"
>> 5. It started the copying the storage from souce to destination
>> (conitinously monitored it was growing)
>> 6. Guest on the destination was paused and was running on the source
>> 7. At some point the VM on the source shutdown and got an error on
>> the vnc display as " Viewport:    write: Broken pipe (32)" and the
>> VM on the destination was undefined.
>>
>> Below is the libvirt debug log, please let me with your comments.
>>
>> Debug log:
>> --------------
> What about /var/log/libvirt/qemu/rhel64-64.log?  That is the QEMU
> command-line and stderr log.
>
> Also can you try without copy-storage-all just to see if migration
> completes successfully?  The guest will act weird once it migrates since
> the disk is zeroed but it will isolate the failure to
> --copy-storage-all.
I have scheduled a live migration with shared storage (NFS), and it
looks like that is not working properly either.  With --verbose enabled
the progress goes to 99%, sometimes even reaches 100%, and then drops
back to 96%; it is very inconsistent.  I will update with the result by
tomorrow.
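
For what it is worth, a rough sketch of how the progress numbers can be
watched on the source while the migration runs (assuming "virsh
domjobinfo" is available in this build):

  # poll the migration job statistics every 2 seconds
  watch -n 2 virsh domjobinfo rhel64-64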

Thanks,
Chandrashekar
>
> Stefan
>


* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13 17:26   ` chandrashekar shastri
@ 2013-06-13 21:45     ` Paolo Bonzini
  2013-06-18 11:00       ` chandrashekar shastri
  2013-06-19 11:21       ` chandrashekar shastri
  0 siblings, 2 replies; 7+ messages in thread
From: Paolo Bonzini @ 2013-06-13 21:45 UTC (permalink / raw)
  To: chandrashekar shastri
  Cc: libvir-list, Stefan Hajnoczi, qemu-devel, virt-tools-list

On 13/06/2013 13:26, chandrashekar shastri wrote:
> On 06/13/2013 02:14 PM, Stefan Hajnoczi wrote:
>> On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
>>> We are testing the upstream KVM with :
>>>
>>> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
>>>
>>> kernel version : 3.9.0+
>>> qemu version : QEMU emulator version 1.5.0
>>> libvirt version : 1.0.5

Please try with libvirt 1.0.5.2.

>>> virt-install : 0.600.3
>>>
>>> I have followed the below steps to test the "Live migration w/o
>>> shared storage" feature :
>>>
>>> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
>>> destination host which is same as the source.
>>> 2. Started the guest on the source
>>> 3. Started the vncdisplay to monitor the guest
>>> 4. Initiated the migration "virsh migrate --live rhel64-64
>>> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"

I recently found a bug here related to IPv4/IPv6.  I need to understand
if it is in QEMU or libvirt.

Paolo


* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13 21:45     ` Paolo Bonzini
@ 2013-06-18 11:00       ` chandrashekar shastri
  2013-06-19 11:21       ` chandrashekar shastri
  1 sibling, 0 replies; 7+ messages in thread
From: chandrashekar shastri @ 2013-06-18 11:00 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: libvir-list, Stefan Hajnoczi, qemu-devel, virt-tools-list

On 06/14/2013 03:15 AM, Paolo Bonzini wrote:
> Il 13/06/2013 13:26, chandrashekar shastri ha scritto:
>> On 06/13/2013 02:14 PM, Stefan Hajnoczi wrote:
>>> On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
>>>> We are testing the upstream KVM with :
>>>>
>>>> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
>>>>
>>>> kernel version : 3.9.0+
>>>> qemu version : QEMU emulator version 1.5.0
>>>> libvirt version : 1.0.5
> Please try with libvirt 1.0.5.2.
I pulled the latest libvirt from git to test this and some other issues
suggested by the community, but I am stuck and unable to make any
progress because the libvirt compilation is failing with the error
below.

Apologies to Laszlo, Stefan, Martin, Paolo and others for not yet
following up on what they suggested.

This is the error I get when I try to compile libvirt:

###############################################################
You may need to use the following Makefile variables when linking.
Use them in <program>_LDADD when linking a program, or
in <library>_a_LDFLAGS or <library>_la_LDFLAGS when linking a library.
   $(GETADDRINFO_LIB)
   $(GETHOSTNAME_LIB)
   $(HOSTENT_LIB)
   $(INET_NTOP_LIB)
   $(INET_PTON_LIB)
   $(LDEXP_LIBM)
   $(LIBSOCKET)
   $(LIB_CLOCK_GETTIME)
   $(LIB_EXECINFO)
   $(LIB_FDATASYNC)
   $(LIB_POLL)
   $(LIB_PTHREAD)
   $(LIB_PTHREAD_SIGMASK)
   $(LIB_SELECT)
   $(LTLIBINTL) when linking with libtool, $(LIBINTL) otherwise
   $(LTLIBTHREAD) when linking with libtool, $(LIBTHREAD) otherwise
   $(PTY_LIB)
   $(SERVENT_LIB)

Don't forget to
   - "include gnulib.mk" from within "gnulib/lib/Makefile.am",
   - "include gnulib.mk" from within "gnulib/tests/Makefile.am",
   - mention "-I gnulib/m4" in ACLOCAL_AMFLAGS in Makefile.am,
   - mention "gnulib/m4/gnulib-cache.m4" in EXTRA_DIST in Makefile.am,
   - invoke gl_EARLY in ./configure.ac, right after AC_PROG_CC,
   - invoke gl_INIT in ./configure.ac.
running: AUTOPOINT=true LIBTOOLIZE=true autoreconf --verbose --install 
--force -I gnulib/m4  --no-recursive
autoreconf: Entering directory `.'
autoreconf: running: true --force
autoreconf: running: aclocal -I m4 -I gnulib/m4 --force -I m4 -I gnulib/m4
autoreconf: configure.ac: tracing
autoreconf: running: true --copy --force
autoreconf: running: /usr/bin/autoconf --include=gnulib/m4 --force
autoreconf: running: /usr/bin/autoheader --include=gnulib/m4 --force
autoreconf: running: automake --add-missing --copy --force-missing
configure.ac:2424: error: required file 'gnulib/lib/Makefile.in' not found
configure.ac:2424: error: required file 'gnulib/tests/Makefile.in' not found
autoreconf: automake failed with exit status: 1

###########################################################################


I didn't face any issue with libvirt 1.0.5. Please let me know if I am
missing anything here, or whether it is really a bug in "gnulib".
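
For reference, the build steps I am running are roughly the following
(a sketch from memory, so the clone URL may not be exact; ./autogen.sh
is libvirt's bootstrap script, which produces the autoreconf output
shown above):

  git clone git://libvirt.org/libvirt.git
  cd libvirt
  ./autogen.sh        # bootstraps gnulib and runs autoreconf + configure
  make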

Thanks,
Chandrashekar
>
>>>> virt-install : 0.600.3
>>>>
>>>> I have followed the below steps to test the "Live migration w/o
>>>> shared storage" feature :
>>>>
>>>> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
>>>> destination host which is same as the source.
>>>> 2. Started the guest on the source
>>>> 3. Started the vncdisplay to monitor the guest
>>>> 4. Initiated the migration "virsh migrate --live rhel64-64
>>>> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"
> I recently found a bug here related to IPv4/IPv6.  I need to understand
> if it is in QEMU or libvirt.
>
> Paolo
>


* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13 21:45     ` Paolo Bonzini
  2013-06-18 11:00       ` chandrashekar shastri
@ 2013-06-19 11:21       ` chandrashekar shastri
  1 sibling, 0 replies; 7+ messages in thread
From: chandrashekar shastri @ 2013-06-19 11:21 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: libvir-list, Stefan Hajnoczi, qemu-devel, virt-tools-list

On 06/14/2013 03:15 AM, Paolo Bonzini wrote:
> Il 13/06/2013 13:26, chandrashekar shastri ha scritto:
>> On 06/13/2013 02:14 PM, Stefan Hajnoczi wrote:
>>> On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
>>>> We are testing the upstream KVM with :
>>>>
>>>> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
>>>>
>>>> kernel version : 3.9.0+
>>>> qemu version : QEMU emulator version 1.5.0
>>>> libvirt version : 1.0.5
> Please try with libvirt 1.0.5.2.
I tried with libvirt 1.0.6 and it is still failing, hence I reported
the bug on Launchpad:
Bug #1192499: virsh migration with copy-storage-all fails with "Unable
to read from monitor: Connection reset by peer"
>>>> virt-install : 0.600.3
>>>>
>>>> I have followed the below steps to test the "Live migration w/o
>>>> shared storage" feature :
>>>>
>>>> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
>>>> destination host which is same as the source.
>>>> 2. Started the guest on the source
>>>> 3. Started the vncdisplay to monitor the guest
>>>> 4. Initiated the migration "virsh migrate --live rhel64-64
>>>> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"
> I recently found a bug here related to IPv4/IPv6.  I need to understand
> if it is in QEMU or libvirt.
>
> Paolo
>
Chandrashekar


* Re: [Qemu-devel] virsh live migration w/o shared storage fails with error as vm is not running
  2013-06-13  8:44 ` Stefan Hajnoczi
  2013-06-13 17:26   ` chandrashekar shastri
@ 2013-07-05  6:23   ` chandrashekar shastri
  1 sibling, 0 replies; 7+ messages in thread
From: chandrashekar shastri @ 2013-07-05  6:23 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: libvir-list, Paolo Bonzini, qemu-devel

On 06/13/2013 02:14 PM, Stefan Hajnoczi wrote:
> On Thu, Jun 13, 2013 at 10:31:04AM +0530, chandrashekar shastri wrote:
>> We are testing the upstream KVM with :
>>
>> Kernel, Qemu, Libvirt, Virt-Manager is built from the source (git).
>>
>> kernel version : 3.9.0+
>> qemu version : QEMU emulator version 1.5.0
>> libvirt version : 1.0.5
>> virt-install : 0.600.3
>>
>> I have followed the below steps to test the "Live migration w/o
>> shared storage" feature :
>>
>> 1. Created the qemu-img create -f qcow2 vm.qcow2 12G on the
>> destination host which is same as the source.
>> 2. Started the guest on the source
>> 3. Started the vncdisplay to monitor the guest
>> 4. Initiated the migration "virsh migrate --live rhel64-64
>> qemu+ssh://9.126.89.202/system --verbose --copy-storage-all"
>> 5. It started the copying the storage from souce to destination
>> (conitinously monitored it was growing)
>> 6. Guest on the destination was paused and was running on the source
>> 7. At some point the VM on the source shutdown and got an error on
>> the vnc display as " Viewport:    write: Broken pipe (32)" and the
>> VM on the destination was undefined.
>>
>> Below is the libvirt debug log, please let me with your comments.
>>
>> Debug log:
>> --------------
> What about /var/log/libvirt/qemu/rhel64-64.log?  That is the QEMU
> command-line and stderr log.
I have attached all source and destination logs, including sosreports
from both hosts, to the bug:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1192499
>
> Also can you try without copy-storage-all just to see if migration
> completes successfully?  The guest will act weird once it migrates since
> the disk is zeroed but it will isolate the failure to
> --copy-storage-all.
Without copy-storage-all (i.e. with NFS shared storage), the migration
works fine.
> Stefan
>
Please let me know if you need more info.

Thanks,
Shastri

