From: Chegu Vinod <chegu_vinod@hp.com>
Date: Thu, 09 May 2013 16:00:38 -0700
To: Igor Mammedov
Cc: quintela@redhat.com, qemu-devel@nongnu.org, owasserm@redhat.com, anthony@codemonkey.ws, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH v5 3/3] Force auto-convergence of live migration

On 5/9/2013 1:24 PM, Igor Mammedov wrote:
> On Thu, 9 May 2013 12:43:20 -0700
> Chegu Vinod wrote:
>
>> If a user chooses to turn on the auto-converge migration capability
>> these changes detect the lack of convergence and throttle down the
>> guest. i.e. force the VCPUs out of the guest for some duration
>> and let the migration thread catchup and help converge.
>>
> [...]
>> +
>> +static void mig_delay_vcpu(void)
>> +{
>> +    qemu_mutex_unlock_iothread();
>> +    g_usleep(50*1000);
>> +    qemu_mutex_lock_iothread();
>> +}
>> +
>> +/* Stub used for getting the vcpu out of VM and into qemu via
>> +   run_on_cpu() */
>> +static void mig_kick_cpu(void *opq)
>> +{
>> +    mig_delay_vcpu();
>> +    return;
>> +}
>> +
>> +/* To reduce the dirty rate explicitly disallow the VCPUs from spending
>> +   much time in the VM. The migration thread will try to catchup.
>> +   Workload will experience a performance drop.
>> +*/
>> +void migration_throttle_down(void)
>> +{
>> +    if (throttling_needed()) {
>> +        CPUArchState *penv = first_cpu;
>> +        while (penv) {
>> +            qemu_mutex_lock_iothread();
>
> Locking it here and then unlocking it inside of queued work doesn't look nice.

Yes... but see below.

> What exactly are you protecting with this lock?

It was my understanding that the BQL is supposed to be held when the vcpu
threads enter and execute in the qemu context (since qemu is not MP safe).
Is that still true?

In this specific use case I was concerned about the fraction of the time
when a given vcpu thread is in the qemu context but not yet executing the
callback routine, and was therefore holding the BQL. Holding the BQL while
g_usleep'ing is not only bad in itself but would also slow down the
migration thread, hence the "doesn't look nice" issue :(

If the BQL is not really required for this use case, please do let me know.

Also, please refer to version 3 of my patch: there I did the g_usleep() in
kvm_cpu_exec() and did not touch the BQL much, but that was deemed not a
good approach either.

Thanks
Vinod

>
>> +            async_run_on_cpu(ENV_GET_CPU(penv), mig_kick_cpu, NULL);
>> +            qemu_mutex_unlock_iothread();
>> +            penv = penv->next_cpu;
>> +        }
>> +    }
>> +}
>
>
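
[Editor's note: for readers skimming the thread, below is a minimal sketch of
the throttling callback under discussion, restated outside the diff with the
BQL reasoning from the reply above as comments. It mirrors the patch rather
than any final upstream code; qemu_mutex_lock_iothread()/
qemu_mutex_unlock_iothread() take and drop the Big QEMU Lock (BQL), and
g_usleep() is the glib sleep used in the patch.]

    /* Sketch of the per-vcpu throttling callback queued via run_on_cpu(). */
    static void mig_delay_vcpu_sketch(void)
    {
        /* Sleeping while holding the BQL would block every other thread
         * that needs the lock -- including the migration thread this
         * throttling is meant to help -- so release it first. */
        qemu_mutex_unlock_iothread();

        /* Keep this vcpu thread out of the guest for 50 ms, reducing the
         * rate at which guest memory is dirtied. */
        g_usleep(50 * 1000);

        /* Retake the BQL before returning, since qemu code running in the
         * vcpu thread expects to execute under the lock. */
        qemu_mutex_lock_iothread();
    }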