From: Jonathan Ludlam
Subject: Re: XCP: sr driver question wrt vm-migrate
Date: Wed, 16 Jun 2010 13:06:28 +0100
References: <20100608071147.8D4DB719F7@kuma.localdomain> <20100616061920.4AE78718FD@kuma.localdomain>
In-Reply-To: <20100616061920.4AE78718FD@kuma.localdomain>
To: YAMAMOTO Takashi
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

This is usually the result of a failure earlier on. Could you grep through the logs to get the whole trace of what went on? The best thing to do is to grep for VM.pool_migrate, then find the task reference (the hex string beginning with 'R:' immediately after the 'VM.pool_migrate') and grep for this string in the logs on both the source and destination machines.

Have a look through these, and if it's still not obvious what went wrong, post them to the list and we can have a look.

Cheers,

Jon

On 16 Jun 2010, at 07:19, YAMAMOTO Takashi wrote:

> hi,
> 
> after making my sr driver defer the attach operation as you suggested,
> i got migration working. thanks!
> 
> however, when repeating live migration between two hosts for testing,
> i got the following error. it doesn't seem very reproducible.
> do you have any idea?
> 
> YAMAMOTO Takashi
> 
> + xe vm-migrate live=true uuid=23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 host=67b8b07b-8c50-4677-a511-beb196ea766f
> An error occurred during the migration process.
> vm: 23ecfa58-aa30-ea6a-f9fe-7cb2a5487592 (CentOS53x64-1)
> source: eea41bdd-d2ce-4a9a-bc51-1ca286320296 (s6)
> destination: 67b8b07b-8c50-4677-a511-beb196ea766f (s1)
> msg: Caught exception INTERNAL_ERROR: [ Xapi_vm_migrate.Remote_failed("unmarshalling result code from remote") ] at last minute during migration
> 
>> hi,
>> 
>> i'll try deferring the attach operation to vdi_activate.
>> thanks!
>> 
>> YAMAMOTO Takashi
>> 
>>> Yup, vdi activate is the way forward.
>>> 
>>> If you advertise VDI_ACTIVATE and VDI_DEACTIVATE in the 'get_driver_info' response, xapi will call the following during the start-migrate-shutdown lifecycle:
>>> 
>>> VM start:
>>> 
>>> host A: VDI.attach
>>> host A: VDI.activate
>>> 
>>> VM migrate:
>>> 
>>> host B: VDI.attach
>>> 
>>> (VM pauses on host A)
>>> 
>>> host A: VDI.deactivate
>>> host B: VDI.activate
>>> 
>>> (VM unpauses on host B)
>>> 
>>> host A: VDI.detach
>>> 
>>> VM shutdown:
>>> 
>>> host B: VDI.deactivate
>>> host B: VDI.detach
>>> 
>>> So the disk is never activated on both hosts at once, but it does still go through a period when it is attached to both hosts at once. You could, for example, check that the disk *could* be attached on the vdi_attach SMAPI call, and actually attach it properly on the vdi_activate call.
>>> 
>>> Hope this helps,
>>> 
>>> Jon
>>> 
>>> 
>>> On 7 Jun 2010, at 09:26, YAMAMOTO Takashi wrote:
>>> 
>>>> hi,
>>>> 
>>>> on vm-migrate, xapi attaches a vdi on the migrate-to host
>>>> before detaching it on the migrate-from host.
>>>> unfortunately it doesn't work for our product, which doesn't
>>>> provide a way to attach a volume to multiple hosts at the same time.
>>>> is VDI_ACTIVATE something that i can use as a workaround?
>>>> or any other suggestions?
>>>> 
>>>> YAMAMOTO Takashi
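
For anyone implementing the deferred-attach approach described in the thread above, the sketch below shows the general shape it can take in a Python SR driver: VDI_ACTIVATE and VDI_DEACTIVATE are advertised in the driver's capability list, vdi_attach only verifies that the volume could be attached, and the exclusive mapping happens in vdi_activate. This is a minimal illustration, not the real XCP SM framework: the class name, the DRIVER_INFO contents beyond the capability names, the return value of attach, and the backend helpers (_volume_exists, _map_volume, _unmap_volume) and device paths are all hypothetical, and a real driver would subclass the SM framework's SR/VDI base classes instead.

# Schematic sketch only -- it mirrors the shape of an XCP storage manager (SM)
# driver but does not import the real SR/VDI base classes; the class and the
# backend helpers below are hypothetical.

CAPABILITIES = [
    "VDI_CREATE", "VDI_DELETE",
    "VDI_ATTACH", "VDI_DETACH",
    # Advertising these two makes xapi call activate/deactivate in the
    # order shown in the lifecycle above, so activation is never concurrent.
    "VDI_ACTIVATE", "VDI_DEACTIVATE",
]

DRIVER_INFO = {
    "name": "examplesr",
    "description": "Example SR that defers the real attach to activate",
    "capabilities": CAPABILITIES,        # reported via get_driver_info
}


class ExampleVDI(object):
    """A virtual disk backed by a volume only one host may hold at a time."""

    def __init__(self, volume_id):
        self.volume_id = volume_id
        self.path = None                 # block device path, set on activate

    def attach(self, sr_uuid, vdi_uuid):
        # Runs on the destination host while the source still has the volume.
        # Only check that the attach *could* succeed; do not claim the volume.
        if not self._volume_exists():
            raise Exception("VDI %s: backing volume is missing" % vdi_uuid)
        return {"params": "/dev/examplesr/%s" % self.volume_id}

    def activate(self, sr_uuid, vdi_uuid):
        # Runs only after the peer host has deactivated, so taking exclusive
        # ownership of the volume is safe here.
        self.path = self._map_volume()

    def deactivate(self, sr_uuid, vdi_uuid):
        # Release exclusive ownership; the peer host's activate follows.
        self._unmap_volume()
        self.path = None

    def detach(self, sr_uuid, vdi_uuid):
        # Nothing real to undo, because attach only performed checks.
        pass

    # Hypothetical backend calls; a real driver would talk to its storage here.
    def _volume_exists(self):
        return True

    def _map_volume(self):
        return "/dev/examplesr/%s" % self.volume_id

    def _unmap_volume(self):
        pass

The property that matters is the one Jon describes: attach may overlap across hosts during migration, but activate never does, so any operation that needs exclusive access to the backing volume belongs in activate/deactivate rather than attach/detach.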