qemu-devel.nongnu.org archive mirror
From: Cornelia Huck <cohuck@redhat.com>
To: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>,
	qemu-s390x@nongnu.org, qemu-devel@nongnu.org,
	Richard Henderson <rth@twiddle.net>,
	Alexander Graf <agraf@suse.de>, Thomas Huth <thuth@redhat.com>
Subject: Re: [Qemu-devel] [PATCH RFC] s390x/kvm: call cpu_synchronize_state() on every kvm_arch_handle_exit()
Date: Thu, 5 Apr 2018 10:03:46 +0200	[thread overview]
Message-ID: <20180405100346.7789e8e1.cohuck@redhat.com> (raw)
In-Reply-To: <11ca3563-dd2c-f8b0-8336-323177477a00@de.ibm.com>

On Thu, 5 Apr 2018 09:53:45 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 03/26/2018 11:20 AM, David Hildenbrand wrote:

> FWIW, I think your patch even fixes a bug:
> 
> --- snip ----
> static int handle_diag(S390CPU *cpu, struct kvm_run *run, uint32_t ipb)
> {
>     int r = 0;
>     uint16_t func_code;
> 
>     /*
>      * For any diagnose call we support, bits 48-63 of the resulting
>      * address specify the function code; the remainder is ignored.
>      */
>     func_code = decode_basedisp_rs(&cpu->env, ipb, NULL) & DIAG_KVM_CODE_MASK;
> 
> ---->  
> 	static inline hwaddr decode_basedisp_s(CPUS390XState *env, uint32_t ipb,
> 	                                       uint8_t *ar)
> 	{
> 	    hwaddr addr = 0;
> 	    uint8_t reg;
> 	    
> 	    reg = ipb >> 28;
> 	    if (reg > 0) {
> 	        addr = env->regs[reg];
> 
> ----> we do the sync_regs after this in the diag handler!  
> So currently we only handle the case with base reg == 0 correctly:
> 
> diag x,y,0x500(0)
> 
> works, but things like
> 
> lghi 1,0x500
> diag x,y,0(1)
> 
> do not, unless I am missing something.

I think you are right.

> 
> 
> [...]
> > @@ -1778,6 +1760,8 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
> > 
> >      qemu_mutex_lock_iothread();
> > 
> > +    cpu_synchronize_state(CPU(cpu));
> > +
> >      switch (run->exit_reason) {
> >          case KVM_EXIT_S390_SIEIC:
> >              ret = handle_intercept(cpu);
> >   
> 
> Does it make sense to do this hunk NOW for 2.12 (cc stable)
> and queue your full patch for 2.13?
> 

I'd prefer a cpu_synchronize_state() at the start of handle_diag() and
a respin of this patch for 2.13, but that should be fine as well.

I'll queue a proper patch for 2.12-rc.


Thread overview: 7+ messages
2018-03-26  9:20 [Qemu-devel] [PATCH RFC] s390x/kvm: call cpu_synchronize_state() on every kvm_arch_handle_exit() David Hildenbrand
2018-03-26  9:36 ` David Hildenbrand
2018-04-05  7:53 ` Christian Borntraeger
2018-04-05  8:03   ` David Hildenbrand
2018-04-05  8:03   ` Cornelia Huck [this message]
2018-04-05  8:19   ` Thomas Huth
2018-04-05  8:47     ` [Qemu-devel] [qemu-s390x] " Christian Borntraeger
