From: Paolo Bonzini <pbonzini@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: "Gonglei (Arei)", quintela@redhat.com, qemu-devel@nongnu.org,
 yanghongyang, Huangzhichao
Subject: Re: [Qemu-devel] [Bug?] BQL about live migration
Date: Fri, 3 Mar 2017 14:33:05 +0100
Message-ID: <4b0de5de-aec7-0388-0a68-cf9e02b48b1d@redhat.com>
In-Reply-To: <20170303132653.GD2439@work-vm>
References: <33183CC9F5247A488A2544077AF19020DA1D01C2@DGGEMA505-MBX.china.huawei.com>
 <20170303120054.GB2439@work-vm>
 <10d66d73-a269-acb4-bc3d-2250793cba8e@redhat.com>
 <20170303131150.GC2439@work-vm>
 <20170303132653.GD2439@work-vm>

On 03/03/2017 14:26, Dr. David Alan Gilbert wrote:
> * Paolo Bonzini (pbonzini@redhat.com) wrote:
>> On 03/03/2017 14:11, Dr. David Alan Gilbert wrote:
>>> * Paolo Bonzini (pbonzini@redhat.com) wrote:
>>>> On 03/03/2017 13:00, Dr. David Alan Gilbert wrote:
>>>>> Ouch, that's pretty nasty; I remember Paolo explaining to me a while ago
>>>>> that there were times when run_on_cpu would have to drop the BQL, and I
>>>>> worried about it, but this is the first time I've seen an error due to it.
>>>>>
>>>>> Do you know what the migration state was at that point?  Was it
>>>>> MIGRATION_STATUS_CANCELLING?  I'm thinking perhaps we should stop 'cont'
>>>>> from continuing while migration is in MIGRATION_STATUS_CANCELLING.  Do we
>>>>> send an event when we hit CANCELLED, so that perhaps libvirt could avoid
>>>>> sending the 'cont' until then?
>>>>
>>>> No, there's no event, though I thought libvirt would poll until
>>>> "query-migrate" returns the cancelled state.  Of course that is a small
>>>> consolation, because a segfault is unacceptable.
>>>
>>> I think you might get an event if you set the new migrate capability
>>> called 'events' on!
>>>
>>> void migrate_set_state(int *state, int old_state, int new_state)
>>> {
>>>     if (atomic_cmpxchg(state, old_state, new_state) == old_state) {
>>>         trace_migrate_set_state(new_state);
>>>         migrate_generate_event(new_state);
>>>     }
>>> }
>>>
>>> static void migrate_generate_event(int new_state)
>>> {
>>>     if (migrate_use_events()) {
>>>         qapi_event_send_migration(new_state, &error_abort);
>>>     }
>>> }
>>>
>>> That event feature went in sometime after 2.3.0.
>>>
>>>> One possibility is to suspend the monitor in qmp_migrate_cancel and
>>>> resume it (with add_migration_state_change_notifier) when we hit the
>>>> CANCELLED state.  I'm not sure what the latency would be between the end
>>>> of migrate_fd_cancel and finally reaching CANCELLED.
>>>
>>> I don't like suspending monitors; it can potentially take quite a
>>> significant time to do a cancel.
>>> How about making 'cont' fail if we're in CANCELLING?
>>
>> Actually I thought that would be the case already (in fact CANCELLING is
>> internal only; the outside world sees it as "active" in query-migrate).
>>
>> Lei, what is the runstate?  (That is, why did cont succeed at all?)
>
> I suspect it's RUN_STATE_FINISH_MIGRATE - we set that before we do the
> device save, and that's what we get at the end of a migrate, and it's
> legal to restart from there.

Yeah, but I think we get there at the end of a failed migrate only.  So
perhaps we can introduce a new state RUN_STATE_FAILED_MIGRATE and forbid
"cont" from finish-migrate (only allow it from failed-migrate)?

Paolo

>> Paolo
>>
>>> I'd really love to see 'run_on_cpu' being more careful about the BQL;
>>> we really need all of the rest of the devices to stay quiesced at times.
>>
>> That's not really possible, because of how condition variables work. :(
>
> *Really* we need to find a solution to that - there's probably lots of
> other things that can spring up in that small window other than the
> 'cont'.
>
> Dave
>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
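
To make the RUN_STATE_FAILED_MIGRATE idea concrete, here is a minimal,
self-contained sketch in plain C (not QEMU source) of the rule proposed
above: "cont" is rejected while the run state is finish-migrate and only
accepted once the failure/cancel path has moved the VM to the new
failed-migrate state.  The enum values, the try_cont() helper and the
global below are illustrative stand-ins, not actual QEMU APIs.

/*
 * Hedged sketch: models the proposed run-state rule; it does not touch
 * the real QEMU run-state machinery.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    RUN_STATE_RUNNING,
    RUN_STATE_FINISH_MIGRATE,    /* device save in progress / cancelling */
    RUN_STATE_FAILED_MIGRATE,    /* proposed: migration failed or was cancelled */
    RUN_STATE_PAUSED,
} RunState;

static RunState current_run_state = RUN_STATE_FINISH_MIGRATE;

/* The check a qmp_cont()-style handler could perform. */
static bool try_cont(void)
{
    if (current_run_state == RUN_STATE_FINISH_MIGRATE) {
        fprintf(stderr, "cont rejected: migration still owns the device state\n");
        return false;
    }
    current_run_state = RUN_STATE_RUNNING;
    return true;
}

int main(void)
{
    try_cont();                                    /* rejected: finish-migrate */
    current_run_state = RUN_STATE_FAILED_MIGRATE;  /* set by the failure path */
    if (try_cont()) {
        printf("cont accepted from failed-migrate\n");
    }
    return 0;
}

In this model, a "cont" sent after a cancelled migration only succeeds
once the migration code has explicitly declared failure, which is the
behaviour the thread is asking for.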