From: David Gibson <david@gibson.dropbear.id.au>
To: "Cédric Le Goater" <clg@kaod.org>
Cc: linuxppc-dev@lists.ozlabs.org, Paul Mackerras <paulus@samba.org>,
	kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Subject: Re: [PATCH v3 17/17] KVM: PPC: Book3S HV: XIVE: clear the vCPU interrupt presenters
Date: Tue, 19 Mar 2019 16:37:36 +1100
Message-ID: <20190319053736.GE31018@umbus.fritz.box>
In-Reply-To: <20190315120609.25910-18-clg@kaod.org>

On Fri, Mar 15, 2019 at 01:06:09PM +0100, Cédric Le Goater wrote:
> When the VM boots, the CAS negotiation process determines which
> interrupt mode to use and invokes a machine reset. At that time, the
> previous KVM interrupt device is 'destroyed' before the chosen one is
> created. Upon destruction, the vCPU interrupt presenters using the KVM
> device should be cleared first; the machine will reconnect them to
> the new device once it is created.
> 
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
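
For anyone mapping the commit message onto the VMM side, the reset flow
would look roughly like the sketch below.  This is only my reading of
the series, not tested code: KVM_DESTROY_DEVICE is the ioctl proposed
in patch 16/17, its exact argument form is my assumption, and I'm
assuming the KVM_CAP_PPC_IRQ_XIVE arguments from patch 03/17 mirror
KVM_CAP_IRQ_XICS (device fd, then server number).

#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: tear down the old interrupt device, create the one
 * selected at CAS, then reconnect every vCPU presenter to it.
 */
static int reset_irq_device(int vm_fd, int *dev_fd, const int *vcpu_fds,
			    int nr_vcpus, __u32 new_type)
{
	struct kvm_create_device cd = { .type = new_type };
	int i, ret;

	/* Destroying the device clears the vCPU presenters (this patch).
	 * KVM_DESTROY_DEVICE comes from patch 16/17; the argument form
	 * below is assumed. */
	ret = ioctl(vm_fd, KVM_DESTROY_DEVICE, *dev_fd);
	if (ret < 0)
		return ret;

	/* Create the device for the interrupt mode negotiated at CAS. */
	ret = ioctl(vm_fd, KVM_CREATE_DEVICE, &cd);
	if (ret < 0)
		return ret;
	*dev_fd = cd.fd;

	for (i = 0; i < nr_vcpus; i++) {
		/* Assumed to mirror KVM_CAP_IRQ_XICS: args[0] = device
		 * fd, args[1] = server number (simplified to i here). */
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_PPC_IRQ_XIVE,
			.args = { (__u64)*dev_fd, i },
		};

		ret = ioctl(vcpu_fds[i], KVM_ENABLE_CAP, &cap);
		if (ret < 0)
			return ret;
	}
	return 0;
}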

> ---
> 
>  Changes since v2 :
> 
>  - removed comments on possible race in kvmppc_native_connect_vcpu()
>    for the XIVE KVM device. This is still an issue in the
>    XICS-over-XIVE device.
> 
>  arch/powerpc/kvm/book3s_xics.c        | 19 +++++++++++++
>  arch/powerpc/kvm/book3s_xive.c        | 39 +++++++++++++++++++++++++--
>  arch/powerpc/kvm/book3s_xive_native.c | 12 +++++++++
>  3 files changed, 68 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c
> index f27ee57ab46e..81cdabf4295f 100644
> --- a/arch/powerpc/kvm/book3s_xics.c
> +++ b/arch/powerpc/kvm/book3s_xics.c
> @@ -1342,6 +1342,25 @@ static void kvmppc_xics_free(struct kvm_device *dev)
>  	struct kvmppc_xics *xics = dev->private;
>  	int i;
>  	struct kvm *kvm = xics->kvm;
> +	struct kvm_vcpu *vcpu;
> +
> +	/*
> +	 * When destroying the VM, the vCPUs are destroyed first and
> +	 * the vCPU list should be empty. If this is not the case,
> +	 * then we are simply destroying the device and we should
> +	 * clean up the vCPU interrupt presenters first.
> +	 */
> +	if (atomic_read(&kvm->online_vcpus) != 0) {
> +		/*
> +		 * call kick_all_cpus_sync() to ensure that all CPUs
> +		 * have executed any pending interrupts
> +		 */
> +		if (is_kvmppc_hv_enabled(kvm))
> +			kick_all_cpus_sync();
> +
> +		kvm_for_each_vcpu(i, vcpu, kvm)
> +			kvmppc_xics_free_icp(vcpu);
> +	}
>  
>  	debugfs_remove(xics->dentry);
>  
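
For context, the per-vCPU helper called here already puts the presenter
back into its disconnected state, which is why the XICS side needs no
further per-vCPU changes.  Paraphrased from book3s_xics.c (from memory,
not copied verbatim):

	/* Roughly what the existing kvmppc_xics_free_icp() does. */
	void kvmppc_xics_free_icp(struct kvm_vcpu *vcpu)
	{
		if (!vcpu->arch.icp)
			return;
		kfree(vcpu->arch.icp);
		vcpu->arch.icp = NULL;
		vcpu->arch.irq_type = KVMPPC_IRQ_DEFAULT;
	}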
> diff --git a/arch/powerpc/kvm/book3s_xive.c b/arch/powerpc/kvm/book3s_xive.c
> index 480a3fc6b9fd..cf6a4c6c5a28 100644
> --- a/arch/powerpc/kvm/book3s_xive.c
> +++ b/arch/powerpc/kvm/book3s_xive.c
> @@ -1100,11 +1100,19 @@ void kvmppc_xive_disable_vcpu_interrupts(struct kvm_vcpu *vcpu)
>  void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
>  {
>  	struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
> -	struct kvmppc_xive *xive = xc->xive;
> +	struct kvmppc_xive *xive;
>  	int i;
>  
> +	if (!kvmppc_xics_enabled(vcpu))
> +		return;
> +
> +	if (!xc)
> +		return;
> +
>  	pr_devel("cleanup_vcpu(cpu=%d)\n", xc->server_num);
>  
> +	xive = xc->xive;
> +
>  	/* Ensure no interrupt is still routed to that VP */
>  	xc->valid = false;
>  	kvmppc_xive_disable_vcpu_interrupts(vcpu);
> @@ -1141,6 +1149,10 @@ void kvmppc_xive_cleanup_vcpu(struct kvm_vcpu *vcpu)
>  	}
>  	/* Free the VP */
>  	kfree(xc);
> +
> +	/* Cleanup the vcpu */
> +	vcpu->arch.irq_type = KVMPPC_IRQ_DEFAULT;
> +	vcpu->arch.xive_vcpu = NULL;
>  }
>  
>  int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
> @@ -1158,7 +1170,7 @@ int kvmppc_xive_connect_vcpu(struct kvm_device *dev,
>  	}
>  	if (xive->kvm != vcpu->kvm)
>  		return -EPERM;
> -	if (vcpu->arch.irq_type)
> +	if (vcpu->arch.irq_type != KVMPPC_IRQ_DEFAULT)
>  		return -EBUSY;
>  	if (kvmppc_xive_find_server(vcpu->kvm, cpu)) {
>  		pr_devel("Duplicate !\n");
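
This stricter check pairs nicely with the cleanup hunk above: once the
old device has been torn down, irq_type is back at KVMPPC_IRQ_DEFAULT,
so a later connect to the newly created device is accepted rather than
hitting -EBUSY.  The life cycle across a CAS reset, as I read it (my
illustration only, using the functions touched by this patch):

	/* Illustration, not kernel code; error handling omitted. */
	static void example_cas_reset(struct kvm_device *new_dev,
				      struct kvm_vcpu *vcpu, u32 server)
	{
		/* Old device teardown frees the presenter and resets
		 * vcpu->arch.irq_type to KVMPPC_IRQ_DEFAULT ... */
		kvmppc_xive_cleanup_vcpu(vcpu);

		/* ... so reconnecting to the new device passes the
		 * irq_type check above. */
		kvmppc_xive_connect_vcpu(new_dev, vcpu, server);
	}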
> @@ -1828,8 +1840,31 @@ static void kvmppc_xive_free(struct kvm_device *dev)
>  {
>  	struct kvmppc_xive *xive = dev->private;
>  	struct kvm *kvm = xive->kvm;
> +	struct kvm_vcpu *vcpu;
>  	int i;
>  
> +	/*
> +	 * When destroying the VM, the vCPUs are destroyed first and
> +	 * the vCPU list should be empty. If this is not the case,
> +	 * then we are simply destroying the device and we should
> +	 * clean up the vCPU interrupt presenters first.
> +	 */
> +	if (atomic_read(&kvm->online_vcpus) != 0) {
> +		/*
> +		 * call kick_all_cpus_sync() to ensure that all CPUs
> +		 * have executed any pending interrupts
> +		 */
> +		if (is_kvmppc_hv_enabled(kvm))
> +			kick_all_cpus_sync();
> +
> +		/*
> +		 * TODO: There is still a race window with the early
> +		 * checks in kvmppc_native_connect_vcpu()
> +		 */
> +		kvm_for_each_vcpu(i, vcpu, kvm)
> +			kvmppc_xive_cleanup_vcpu(vcpu);
> +	}
> +
>  	debugfs_remove(xive->dentry);
>  
>  	if (kvm)
> diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
> index 67a1bb26a4cc..8f7be5e23177 100644
> --- a/arch/powerpc/kvm/book3s_xive_native.c
> +++ b/arch/powerpc/kvm/book3s_xive_native.c
> @@ -956,8 +956,20 @@ static void kvmppc_xive_native_free(struct kvm_device *dev)
>  {
>  	struct kvmppc_xive *xive = dev->private;
>  	struct kvm *kvm = xive->kvm;
> +	struct kvm_vcpu *vcpu;
>  	int i;
>  
> +	/*
> +	 * When destroying the VM, the vCPUs are destroyed first and
> +	 * the vCPU list should be empty. If this is not the case,
> +	 * then we are simply destroying the device and we should
> +	 * clean up the vCPU interrupt presenters first.
> +	 */
> +	if (atomic_read(&kvm->online_vcpus) != 0) {
> +		kvm_for_each_vcpu(i, vcpu, kvm)
> +			kvmppc_xive_native_cleanup_vcpu(vcpu);
> +	}
> +
>  	debugfs_remove(xive->dentry);
>  
>  	pr_devel("Destroying xive native device\n");

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

Thread overview (38+ messages):
2019-03-15 12:05 [PATCH v3 00/17] KVM: PPC: Book3S HV: add XIVE native exploitation mode Cédric Le Goater
2019-03-15 12:05 ` [PATCH v3 01/17] powerpc/xive: add OPAL extensions for the XIVE native exploitation support Cédric Le Goater
2019-03-15 12:05 ` [PATCH v3 02/17] KVM: PPC: Book3S HV: add a new KVM device for the XIVE native exploitation mode Cédric Le Goater
2019-03-17 23:48   ` David Gibson
2019-03-15 12:05 ` [PATCH v3 03/17] KVM: PPC: Book3S HV: XIVE: introduce a new capability KVM_CAP_PPC_IRQ_XIVE Cédric Le Goater
2019-03-18  0:19   ` David Gibson
2019-03-18 10:00     ` Cédric Le Goater
2019-03-15 12:05 ` [PATCH v3 04/17] KVM: PPC: Book3S HV: XIVE: add a control to initialize a source Cédric Le Goater
2019-03-18  1:38   ` David Gibson
2019-03-15 12:05 ` [PATCH v3 05/17] KVM: PPC: Book3S HV: XIVE: add a control to configure " Cédric Le Goater
2019-03-15 12:05 ` [PATCH v3 06/17] KVM: PPC: Book3S HV: XIVE: add controls for the EQ configuration Cédric Le Goater
2019-03-18  3:23   ` David Gibson
2019-03-18 14:12     ` Cédric Le Goater
2019-03-18 14:38       ` Cédric Le Goater
2019-03-19  4:54       ` David Gibson
2019-03-19 15:47         ` Cédric Le Goater
2019-03-20  3:44           ` David Gibson
2019-03-20  6:44             ` Cédric Le Goater
2019-03-15 12:05 ` [PATCH v3 07/17] KVM: PPC: Book3S HV: XIVE: add a global reset control Cédric Le Goater
2019-03-18  3:25   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 08/17] KVM: PPC: Book3S HV: XIVE: add a control to sync the sources Cédric Le Goater
2019-03-18  3:28   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 09/17] KVM: PPC: Book3S HV: XIVE: add a control to dirty the XIVE EQ pages Cédric Le Goater
2019-03-18  3:31   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 10/17] KVM: PPC: Book3S HV: XIVE: add get/set accessors for the VP XIVE state Cédric Le Goater
2019-03-19  5:08   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 11/17] KVM: introduce a 'mmap' method for KVM devices Cédric Le Goater
2019-03-18  3:32   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 12/17] KVM: PPC: Book3S HV: XIVE: add a TIMA mapping Cédric Le Goater
2019-03-15 12:06 ` [PATCH v3 13/17] KVM: PPC: Book3S HV: XIVE: add a mapping for the source ESB pages Cédric Le Goater
2019-03-15 12:06 ` [PATCH v3 14/17] KVM: PPC: Book3S HV: XIVE: add passthrough support Cédric Le Goater
2019-03-19  5:22   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 15/17] KVM: PPC: Book3S HV: XIVE: activate XIVE exploitation mode Cédric Le Goater
2019-03-18  6:42   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 16/17] KVM: introduce a KVM_DESTROY_DEVICE ioctl Cédric Le Goater
2019-03-18  6:42   ` David Gibson
2019-03-15 12:06 ` [PATCH v3 17/17] KVM: PPC: Book3S HV: XIVE: clear the vCPU interrupt presenters Cédric Le Goater
2019-03-19  5:37   ` David Gibson [this message]
