public inbox for linux-s390@vger.kernel.org
From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Eric Farman <farman@linux.ibm.com>,
	kvm@vger.kernel.org, linux-s390@vger.kernel.org
Cc: Janosch Frank <frankja@linux.ibm.com>,
	David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Michael Mueller <mimu@linux.ibm.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>, Thomas Huth <thuth@redhat.com>
Subject: Re: [PATCH] KVM: s390: Fix PV check in deliverable_irqs()
Date: Thu, 16 Apr 2020 08:17:21 +0200	[thread overview]
Message-ID: <e8720727-ceee-e668-2a52-54b2b5039087@de.ibm.com> (raw)
In-Reply-To: <20200415190353.63625-1-farman@linux.ibm.com>

On 15.04.20 21:03, Eric Farman wrote:
> The diag 0x44 handler, which handles a directed yield, goes into a
> codepath that does a kvm_for_each_vcpu() and ultimately
> deliverable_irqs().  The new check for kvm_s390_pv_cpu_is_protected()
> contains an assertion that the vcpu->mutex is held, which isn't going
> to be the case in this scenario.
> 
> The result is a plethora of these messages if the lock debugging
> is enabled, and thus an implication that we have a problem.
> 
>   WARNING: CPU: 9 PID: 16167 at arch/s390/kvm/kvm-s390.h:239 deliverable_irqs+0x1c6/0x1d0 [kvm]
>   ...snip...
>   Call Trace:
>    [<000003ff80429bf2>] deliverable_irqs+0x1ca/0x1d0 [kvm]
>   ([<000003ff80429b34>] deliverable_irqs+0x10c/0x1d0 [kvm])
>    [<000003ff8042ba82>] kvm_s390_vcpu_has_irq+0x2a/0xa8 [kvm]
>    [<000003ff804101e2>] kvm_arch_dy_runnable+0x22/0x38 [kvm]
>    [<000003ff80410284>] kvm_vcpu_on_spin+0x8c/0x1d0 [kvm]
>    [<000003ff80436888>] kvm_s390_handle_diag+0x3b0/0x768 [kvm]
>    [<000003ff80425af4>] kvm_handle_sie_intercept+0x1cc/0xcd0 [kvm]
>    [<000003ff80422bb0>] __vcpu_run+0x7b8/0xfd0 [kvm]
>    [<000003ff80423de6>] kvm_arch_vcpu_ioctl_run+0xee/0x3e0 [kvm]
>    [<000003ff8040ccd8>] kvm_vcpu_ioctl+0x2c8/0x8d0 [kvm]
>    [<00000001504ced06>] ksys_ioctl+0xae/0xe8
>    [<00000001504cedaa>] __s390x_sys_ioctl+0x2a/0x38
>    [<0000000150cb9034>] system_call+0xd8/0x2d8
>   2 locks held by CPU 2/KVM/16167:
>    #0: 00000001951980c0 (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x90/0x8d0 [kvm]
>    #1: 000000019599c0f0 (&kvm->srcu){....}, at: __vcpu_run+0x4bc/0xfd0 [kvm]
>   Last Breaking-Event-Address:
>    [<000003ff80429b34>] deliverable_irqs+0x10c/0x1d0 [kvm]
>   irq event stamp: 11967
>   hardirqs last  enabled at (11975): [<00000001502992f2>] console_unlock+0x4ca/0x650
>   hardirqs last disabled at (11982): [<0000000150298ee8>] console_unlock+0xc0/0x650
>   softirqs last  enabled at (7940): [<0000000150cba6ca>] __do_softirq+0x422/0x4d8
>   softirqs last disabled at (7929): [<00000001501cd688>] do_softirq_own_stack+0x70/0x80
> 
> Considering what's being done here, let's fix this by removing the
> mutex assertion rather than acquiring the mutex for every other vcpu.
> 
> Fixes: 201ae986ead7 ("KVM: s390: protvirt: Implement interrupt injection")

Yes, when adding that check I missed that path. We do have other places that use
kvm_s390_pv_cpu_get_handle instead of kvm_s390_pv_cpu_is_protected when we know
that the call site can be reached without the mutex being held. And yes,
kvm_vcpu_on_spin is such a place.

The alternative would be to create an s390-specific version of
kvm_arch_dy_runnable with a private copy of kvm_s390_vcpu_has_irq.
I think your patch is preferable, as it avoids duplicating code that differs
only slightly. After all, it is just a sanity check.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>


> Signed-off-by: Eric Farman <farman@linux.ibm.com>
> ---
>  arch/s390/kvm/interrupt.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> index 8191106bf7b9..bfb481134994 100644
> --- a/arch/s390/kvm/interrupt.c
> +++ b/arch/s390/kvm/interrupt.c
> @@ -393,7 +393,7 @@ static unsigned long deliverable_irqs(struct kvm_vcpu *vcpu)
>  	if (psw_mchk_disabled(vcpu))
>  		active_mask &= ~IRQ_PEND_MCHK_MASK;
>  	/* PV guest cpus can have a single interruption injected at a time. */
> -	if (kvm_s390_pv_cpu_is_protected(vcpu) &&
> +	if (kvm_s390_pv_cpu_get_handle(vcpu) &&
>  	    vcpu->arch.sie_block->iictl != IICTL_CODE_NONE)
>  		active_mask &= ~(IRQ_PEND_EXT_II_MASK |
>  				 IRQ_PEND_IO_MASK |
> 


Thread overview: 3+ messages
2020-04-15 19:03 [PATCH] KVM: s390: Fix PV check in deliverable_irqs() Eric Farman
2020-04-16  6:17 ` Christian Borntraeger [this message]
2020-04-16  6:38   ` Cornelia Huck
