Message-ID: <51C844E5.9010705@hp.com>
Date: Mon, 24 Jun 2013 06:08:53 -0700
From: Chegu Vinod
To: Paolo Bonzini
Cc: owasserm@redhat.com, qemu-devel@nongnu.org, anthony@codemonkey.ws, quintela@redhat.com
References: <1372018280-133901-1-git-send-email-chegu_vinod@hp.com> <51C84332.5020603@redhat.com>
In-Reply-To: <51C84332.5020603@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v7 3/3] Force auto-convergence of live migration

On 6/24/2013 6:01 AM, Paolo Bonzini wrote:
> One nit and one question:
>
> On 23/06/2013 22:11, Chegu Vinod wrote:
>> @@ -404,6 +413,23 @@ static void migration_bitmap_sync(void)
>>
>>      /* more than 1 second = 1000 milliseconds */
>>      if (end_time > start_time + 1000) {
>> +        if (migrate_auto_converge()) {
>> +            /* The following detection logic can be refined later. For now:
>> +               check whether the bytes dirtied exceed 50% of the bytes that
>> +               got transferred since the last time we were in this routine.
>> +               If that happens more than N times (for now N == 4), turn on
>> +               the throttle-down logic. */
>> +            bytes_xfer_now = ram_bytes_transferred();
>> +            if (s->dirty_pages_rate &&
>> +                (num_dirty_pages_period * TARGET_PAGE_SIZE >
>> +                 (bytes_xfer_now - bytes_xfer_prev) / 2) &&
>> +                (dirty_rate_high_cnt++ > 4)) {
>> +                trace_migration_throttle();
>> +                mig_throttle_on = true;
>> +                dirty_rate_high_cnt = 0;
>> +            }
>> +            bytes_xfer_prev = bytes_xfer_now;
>> +        }
>
> Missing:
>
>     else {
>         mig_throttle_on = false;
>     }

Ok.

>> +/* Stub function that gets run on the vcpu when it is brought out of the
>> + * VM to run inside qemu via async_run_on_cpu(). */
>> +static void mig_sleep_cpu(void *opq)
>> +{
>> +    qemu_mutex_unlock_iothread();
>> +    g_usleep(30 * 1000);
>> +    qemu_mutex_lock_iothread();
>> +}
>> +
>> +    /* If it has been more than 40 ms since the last time the guest
>> +     * was throttled, do it again. */
>> +    if (40 < (t1 - t0) / 1000000) {
>
> You're stealing 75% of the CPU time, isn't that a lot?

It depends on the dirty rate vs. the transfer rate... I had tried 50% too, and the migration took much longer to converge.

Vinod

>> +        mig_throttle_guest_down();
>> +        t0 = t1;
>> +    }
>> +}
>
> Paolo
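
To make the detection heuristic above concrete, here is a small standalone
sketch of just the counting logic, with invented per-period numbers (800 MB
transferred and 500 MB dirtied in each sync period, held constant across
periods; the real code reads these from the migration state and updates
bytes_xfer_prev each call). It shows the counter needing more than four
high-dirty periods before the throttle engages:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096        /* assumption: 4 KiB target pages */

    int main(void)
    {
        /* Invented numbers for illustration only. */
        uint64_t bytes_xfer_prev = 0;
        uint64_t bytes_xfer_now = 800ULL << 20;          /* 800 MB sent */
        uint64_t dirty_bytes = 500ULL << 20;             /* 500 MB dirtied */
        uint64_t num_dirty_pages_period = dirty_bytes / TARGET_PAGE_SIZE;
        int dirty_rate_high_cnt = 0;
        bool mig_throttle_on = false;

        for (int period = 1; period <= 6; period++) {
            /* Same shape as the patch's condition: dirtied bytes exceed
             * half the bytes transferred, and the counter has passed 4. */
            if (num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - bytes_xfer_prev) / 2 &&
                dirty_rate_high_cnt++ > 4) {
                mig_throttle_on = true;
                dirty_rate_high_cnt = 0;
            }
            printf("period %d: cnt=%d throttle=%s\n",
                   period, dirty_rate_high_cnt, mig_throttle_on ? "on" : "off");
        }
        return 0;
    }

With these numbers, 500 MB dirtied exceeds half of the 800 MB transferred
(400 MB), so the counter climbs every period and the throttle turns on in
the sixth period.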
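
On the 75% question: each time the 40 ms check fires, every vcpu is queued
to run mig_sleep_cpu() and sleeps for 30 ms, so roughly 30 ms out of every
40 ms window is spent outside the guest (30/40 = 75%). Below is a minimal
sketch of how the throttle-down path can fan the sleep stub out to all
vcpus. It assumes the 2013-era QEMU helpers qemu_for_each_cpu() and
async_run_on_cpu(); mig_throttle_guest_down() is named in the hunk above,
but this body is an illustration rather than the exact patch code:

    /* Queue the 30 ms sleep stub onto one vcpu; the stub then runs in
     * that vcpu's thread the next time it takes an exit. */
    static void mig_throttle_cpu_down(CPUState *cpu, void *data)
    {
        async_run_on_cpu(cpu, mig_sleep_cpu, NULL);
    }

    /* Fan the sleep stub out to every vcpu, under the iothread lock. */
    static void mig_throttle_guest_down(void)
    {
        qemu_mutex_lock_iothread();
        qemu_for_each_cpu(mig_throttle_cpu_down, NULL);
        qemu_mutex_unlock_iothread();
    }

Because mig_sleep_cpu() drops the iothread lock around g_usleep(), the
sleeping vcpus stall the guest without blocking the migration thread,
which keeps transferring pages during the 30 ms windows.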