From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [RFC Design Doc] Speed up live migration by skipping free pages
Date: Fri, 25 Mar 2016 00:16:22 +0200
Message-ID: <20160324202738-mutt-send-email-mst@redhat.com>
References: <20160324090004.GA2230@work-vm> <20160324102354.GB2230@work-vm>
 <20160324165530-mutt-send-email-mst@redhat.com>
 <20160324175503-mutt-send-email-mst@redhat.com>
 <20160324181031-mutt-send-email-mst@redhat.com>
 <20160324174933.GA11662@work-vm>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Cc: "Li, Liang Z", Wei Yang, "qemu-devel@nongnu.org", "kvm@vger.kernel.org",
 "linux-kernel@vger.kernel.org", "pbonzini@redhat.com", "rth@twiddle.net",
 "ehabkost@redhat.com", "amit.shah@redhat.com", "quintela@redhat.com",
 "mohan_parthasarathy@hpe.com", "jitendra.kolhe@hpe.com", "simhan@hpe.com",
 "rkagan@virtuozzo.com", "riel@redhat.com"
To: "Dr. David Alan Gilbert"
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:57679 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750772AbcCXWQ3
 (ORCPT ); Thu, 24 Mar 2016 18:16:29 -0400
Content-Disposition: inline
In-Reply-To: <20160324174933.GA11662@work-vm>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Mar 24, 2016 at 05:49:33PM +0000, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Thu, Mar 24, 2016 at 04:05:16PM +0000, Li, Liang Z wrote:
> > >
> > > Liang
> > >
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin [mailto:mst@redhat.com]
> > > > Sent: Thursday, March 24, 2016 11:57 PM
> > > > To: Li, Liang Z
> > > > Cc: Dr.
David Alan Gilbert; Wei Yang; qemu-devel@nongnu.org;
> > > > kvm@vger.kernel.org; linux-kernel@vger.kernel.org; pbonzini@redhat.com;
> > > > rth@twiddle.net; ehabkost@redhat.com; amit.shah@redhat.com;
> > > > quintela@redhat.com; mohan_parthasarathy@hpe.com;
> > > > jitendra.kolhe@hpe.com; simhan@hpe.com; rkagan@virtuozzo.com;
> > > > riel@redhat.com
> > > > Subject: Re: [RFC Design Doc] Speed up live migration by skipping free pages
> > > >
> > > > On Thu, Mar 24, 2016 at 03:53:25PM +0000, Li, Liang Z wrote:
> > > > > > > > > Not very complex, we can implement like this:
> > > > > > > > >
> > > > > > > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > > > > > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > > > > > > 3. Send the get_free_page_bitmap request
> > > > > > > > > 4. Start to send pages to destination and check if the free_page_bitmap is ready
> > > > > > > > >    if (is_ready) {
> > > > > > > > >        filter out the free pages from migration_bitmap_rcu->bmap;
> > > > > > > > >        migration_bitmap_sync();
> > > > > > > > >    }
> > > > > > > > >    continue until live migration complete.
> > > > > > > > >
> > > > > > > > > Is that right?
> > > > > > > >
> > > > > > > > The order I'm trying to understand is something like:
> > > > > > > >
> > > > > > > > a) Send the get_free_page_bitmap request
> > > > > > > > b) Start sending pages
> > > > > > > > c) Reach the end of memory
> > > > > > > >    [ is_ready is false - guest hasn't made free map yet ]
> > > > > > > > d) normal migration_bitmap_sync() at end of first pass
> > > > > > > > e) Carry on sending dirty pages
> > > > > > > > f) is_ready is true
> > > > > > > >    f.1) filter out free pages?
> > > > > > > >    f.2) migration_bitmap_sync()
> > > > > > > >
> > > > > > > > It's f.1 I'm worried about.
If the guest started generating the
> > > > > > > > free bitmap before (d), then a page marked as 'free' in f.1
> > > > > > > > might have become dirty before (d), and so (f.2) doesn't set the
> > > > > > > > dirty bit again, and so we can't filter out pages in f.1.
> > > > > > >
> > > > > > > As you described, the order is incorrect.
> > > > > > >
> > > > > > > Liang
> > > > > >
> > > > > > So to make it safe, what is required is to make sure no free list is
> > > > > > outstanding before calling migration_bitmap_sync.
> > > > > >
> > > > > > If one is outstanding, filter out pages before calling
> > > > > > migration_bitmap_sync.
> > > > > >
> > > > > > Of course, if we just do it like we normally do with migration, then
> > > > > > by the time we call migration_bitmap_sync the dirty bitmap is completely
> > > > > > empty, so there won't be anything to filter out.
> > > > > >
> > > > > > One way to address this is to call migration_bitmap_sync in the IO
> > > > > > handler, while the VCPU is stopped, then make sure to filter out pages
> > > > > > before the next migration_bitmap_sync.
> > > > > >
> > > > > > Another is to start filtering out pages in the IO handler, but make
> > > > > > sure to flush the queue before calling migration_bitmap_sync.
> > > > >
> > > > > It's really complex; maybe we should switch to a simple start: just
> > > > > skip the free pages in the ram bulk stage and make it asynchronous?
> > > > >
> > > > > Liang
> > > >
> > > > You mean like your patches do? No, blocking bulk migration until the guest
> > > > responds is basically a non-starter.
> > > >
> > >
> > > No, don't wait anymore. Like below (copied from the previous thread):
> > > --------------------------------------------------------------
> > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > 2. Clear all the bits in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > 3.
Send the get_free_page_bitmap request
> > > 4. Start to send pages to destination and check if the free_page_bitmap is ready
> > >    if (is_ready) {
> > >        filter out the free pages from migration_bitmap_rcu->bmap;
> > >        migration_bitmap_sync();
> > >    }
> > >    continue until live migration complete.
> > > ---------------------------------------------------------------
> > > Can this work?
> > >
> > > Liang
> >
> > Not if you get the ready bit asynchronously like you wrote here,
> > since is_ready can get set while you call migration_bitmap_sync.
> >
> > As I said previously,
> > to make this work you need to filter out synchronously while the VCPU is
> > stopped and while free pages from the list are not being used.
> >
> > Alternatively, prevent getting the free page list from the guest,
> > and filtering it out,
> > from racing with migration_bitmap_sync.
> >
> > For example, flush the VQ after migration_bitmap_sync.
> > So:
> >
> > lock
> > migration_bitmap_sync();
> > while (elem = virtqueue_pop) {
> >     virtqueue_push(elem)
> >     g_free(elem)
> > }
> > unlock
> >
> > while in handle_output:
> >
> > lock
> > while (elem = virtqueue_pop) {
> >     list = get_free_list(elem)
> >     filter_out_free(list)
> >     virtqueue_push(elem)
> >     g_free(elem)
> > }
> > unlock
> >
> > The lock prevents migration_bitmap_sync from racing
> > against handle_output.
>
> I think the easier way is just to ignore the guest's
> free list response if it comes back after the first
> pass.
>
> Dave

That's a subset of course - after the first pass == after
migration_bitmap_sync.

But it's really nasty - for example, how do you know it's the
response from this migration round and not a previous one?

It is really better to just keep things orthogonal and not
introduce arbitrary limitations.

For example, with post-copy there's no first pass, and it can still
benefit from this optimization.
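The lock/flush scheme quoted above can be modelled as a toy C program. Everything below (the array-based "virtqueue", the 64-page bitmap, the function bodies) is a simplified stand-in for illustration, not the real QEMU/virtio code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One mutex serializes migration_bitmap_sync() against handle_output(),
 * so a queued free-page report can never be applied after a sync that
 * might have re-dirtied those pages: sync drains (and discards) any
 * report still pending in the queue. */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t migration_bitmap = ~0ULL; /* 1 = page still to send     */
static uint64_t guest_dirty;              /* simulated KVM dirty log    */

static uint64_t pending_free[4];          /* toy "virtqueue" of reports */
static int vq_head, vq_tail;

static bool virtqueue_pop(uint64_t *out)  /* stand-in for the VQ pop */
{
    if (vq_head == vq_tail) {
        return false;
    }
    *out = pending_free[vq_head++ % 4];
    return true;
}

static void virtqueue_push_free(uint64_t free_bm) /* guest posts a report */
{
    pending_free[vq_tail++ % 4] = free_bm;
}

/* Host IO handler: filter queued free pages out of the migration bitmap. */
static void handle_output(void)
{
    uint64_t free_bm;

    pthread_mutex_lock(&lock);
    while (virtqueue_pop(&free_bm)) {
        migration_bitmap &= ~free_bm;     /* "filter_out_free" step */
    }
    pthread_mutex_unlock(&lock);
}

/* Migration thread: take the same lock, sync, and flush stale reports. */
static void migration_bitmap_sync(void)
{
    uint64_t stale;

    pthread_mutex_lock(&lock);
    migration_bitmap |= guest_dirty;      /* pull in newly dirtied pages */
    guest_dirty = 0;
    while (virtqueue_pop(&stale)) {
        /* A report still queued here may predate the dirtying we just
         * synced, so it is dropped unapplied. */
    }
    pthread_mutex_unlock(&lock);
}
```

With this ordering, a free-page report that races with a sync is simply dropped; only reports consumed by handle_output before the sync can clear bits, which is exactly why the sync never loses a re-dirtied page.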
> >
> > This way you can actually use ioeventfd
> > for this VQ so the VCPU won't be blocked.
> >
> > I do not think this is so complex, and
> > this way you can add requests for the guest
> > free bitmap at an arbitrary interval,
> > either in the host or in the guest.
> >
> > For example, add a value that says how often
> > the guest should update the bitmap; set it to 0
> > to disable updates after migration is done.
> >
> > Or, make the guest resubmit a new one when we consume
> > the old one, and run handle_output through
> > a periodic timer on the host.
> >
> > > > --
> > > > MST
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
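The host-controlled update interval quoted above ("a value that says how often the guest should update the bitmap; set it to 0 to disable") could look roughly like this. The field name and its placement are assumptions for illustration, not the actual virtio-balloon interface:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: the host publishes an update interval in a
 * config field; the guest republishes its free-page bitmap whenever
 * the interval has elapsed, and stops once the host writes 0
 * (e.g. after migration completes). */

static uint32_t update_interval_ms;  /* host-writable config field */

static bool guest_should_republish(uint64_t now_ms, uint64_t last_ms)
{
    if (update_interval_ms == 0) {
        return false;                /* host disabled updates */
    }
    return now_ms - last_ms >= update_interval_ms;
}
```

Writing 0 after migration finishes gives the host a clean shutdown path without needing any extra "stop" command on the VQ.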