From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 9 Mar 2016 10:29:53 +0530
From: Bharata B Rao <bharata@linux.vnet.ibm.com>
Message-ID: <20160309045952.GC29692@in.ibm.com>
References: <1457074461-14285-1-git-send-email-bharata@linux.vnet.ibm.com>
 <1457074461-14285-4-git-send-email-bharata@linux.vnet.ibm.com>
 <56DDD116.7000401@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <56DDD116.7000401@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH v1 03/10] cpu: Reclaim vCPU objects
Reply-To: bharata@linux.vnet.ibm.com
To: Thomas Huth
Cc: mjrosato@linux.vnet.ibm.com, agraf@suse.de, Zhu Guihua,
 pkrempa@redhat.com, ehabkost@redhat.com, aik@ozlabs.ru,
 qemu-devel@nongnu.org, armbru@redhat.com, borntraeger@de.ibm.com,
 qemu-ppc@nongnu.org, Chen Fan, pbonzini@redhat.com, Gu Zheng,
 imammedo@redhat.com, mdroth@linux.vnet.ibm.com, afaerber@suse.de,
 david@gibson.dropbear.id.au

On Mon, Mar 07, 2016 at 08:05:58PM +0100, Thomas Huth wrote:
> On 04.03.2016 07:54, Bharata B Rao wrote:
> > From: Gu Zheng
> >
> > In order to deal well with KVM vCPUs (which cannot be removed without
> > any protection), we do not close the KVM vCPU fd; instead we record it
> > and mark it as stopped in a list, so that we can reuse it for a
> > subsequent CPU hot-add request if possible.
> > It is also the approach that the KVM developers suggested:
> > https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
> >
> > Signed-off-by: Chen Fan
> > Signed-off-by: Gu Zheng
> > Signed-off-by: Zhu Guihua
> > Signed-off-by: Bharata B Rao
> >                [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
> >                   isn't needed as it is done from cpu_exec_exit()
> >                 - Use iothread mutex instead of global mutex during
> >                   destroy
> >                 - Don't clean up the vCPU object from vCPU thread context
> >                   but leave it to the callers (device_add/device_del)]
> > Reviewed-by: David Gibson
> > ---
> >  cpus.c               | 38 +++++++++++++++++++++++++++++++++++
> >  include/qom/cpu.h    | 10 +++++++++
> >  include/sysemu/kvm.h |  1 +
> >  kvm-all.c            | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++-
> >  kvm-stub.c           |  5 +++++
> >  5 files changed, 110 insertions(+), 1 deletion(-)
> >
> > diff --git a/cpus.c b/cpus.c
> > index 9592163..07cc054 100644
> > --- a/cpus.c
> > +++ b/cpus.c
> > @@ -953,6 +953,18 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> >      qemu_cpu_kick(cpu);
> >  }
> >
> > +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
> > +{
> > +    if (kvm_destroy_vcpu(cpu) < 0) {
> > +        error_report("kvm_destroy_vcpu failed");
> > +        exit(EXIT_FAILURE);
> > +    }
> > +}
> > +
> > +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
> > +{
> > +}
> > +
> >  static void flush_queued_work(CPUState *cpu)
> >  {
> >      struct qemu_work_item *wi;
> > @@ -1053,6 +1065,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
> >              }
> >          }
> >          qemu_kvm_wait_io_event(cpu);
> > +        if (cpu->exit && !cpu_can_run(cpu)) {
> > +            qemu_kvm_destroy_vcpu(cpu);
> > +            qemu_mutex_unlock_iothread();
> > +            return NULL;
> > +        }
>
> My comment from last time still applies:
>
> You could increase readability of the code by changing the condition of
> the loop instead - currently it is a "while (1)" ... you could turn that
> into a "do { ... } while (!cpu->exit || cpu_can_run(cpu))" and then
> destroy the cpu after the loop.

Sorry for missing this, will take care of this and the other comment in
this thread in the next version.

Regards,
Bharata.
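
PS: Just to confirm I have read your suggestion correctly, the thread
function would then end up looking roughly like this (an untested sketch
against this patch, not the final code):

    do {
        if (cpu_can_run(cpu)) {
            r = kvm_cpu_exec(cpu);
            if (r == EXCP_DEBUG) {
                cpu_handle_guest_debug(cpu);
            }
        }
        qemu_kvm_wait_io_event(cpu);
        /* Keep looping while the CPU is not exiting or can still run */
    } while (!cpu->exit || cpu_can_run(cpu));

    /* Reached only when cpu->exit is set and the CPU cannot run, i.e.
     * exactly the condition that broke out of the old while (1) loop. */
    qemu_kvm_destroy_vcpu(cpu);
    qemu_mutex_unlock_iothread();
    return NULL;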