From: Thomas Huth <thuth@redhat.com>
To: David Hildenbrand <david@redhat.com>, qemu-devel@nongnu.org
Cc: cohuck@redhat.com, Christian Borntraeger <borntraeger@de.ibm.com>,
Alexander Graf <agraf@suse.de>,
Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH v2] s390x/kvm: fix and cleanup storing CPU status
Date: Mon, 25 Sep 2017 08:03:34 +0200
Message-ID: <ad63a57a-ecb7-b7db-a120-800c87164709@redhat.com>
In-Reply-To: <20170922140338.6068-1-david@redhat.com>
On 22.09.2017 16:03, David Hildenbrand wrote:
> env->psa is a 64bit value, while we copy 4 bytes into the save area,
> resulting always in 0 getting stored.
>
> Let's try to reduce such errors by using a proper structure. While at
> it, use correct cpu_to_be*() conversion (and get_psw_mask()), as we will
> be reusing this code for TCG soon.
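
For readers following along, here is a minimal standalone sketch of the
width pitfall described above; it is not QEMU code. The variable psa, the
helper store_prefix() and the example value are made up for illustration,
and htonl() stands in for QEMU's cpu_to_be32(). kvm.c only runs on
big-endian s390x hosts, where the first 4 bytes of a 64-bit value are its
most significant half, which is always zero for a prefix:

  #include <stdint.h>
  #include <string.h>
  #include <arpa/inet.h>   /* htonl(): host to big-endian, like cpu_to_be32() */

  /* Hypothetical 64-bit CPU-state field that only ever holds a 32-bit
   * prefix value, mirroring env->psa in the patch description. */
  static uint64_t psa = 0x0001a000;

  static void store_prefix(uint8_t *save_area)
  {
      /* Old approach: memcpy() the first 4 bytes of the 64-bit field.
       * On a big-endian host these are the most significant bytes of
       * the value, which are always zero here. */
      memcpy(save_area + 280, &psa, 4);

      /* Fixed approach: truncate to 32 bits explicitly, convert to the
       * guest's big-endian byte order, and store a field of the right
       * width. */
      uint32_t prefix = htonl((uint32_t)psa);
      memcpy(save_area + 280, &prefix, 4);
  }

Storing through a fixed-width struct member with an explicit conversion
makes both the width and the byte order visible at the assignment, which
is what the patch does with sa->prefix below.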
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>
> v1 -> v2:
> - dropped QEMU_PACKED
> - Moved QEMU_BUILD_BUG_ON()
> - Retested if it works now
>
> target/s390x/kvm.c | 62 ++++++++++++++++++++++++++++++++++++------------------
> 1 file changed, 42 insertions(+), 20 deletions(-)
>
> diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
> index ebb75cafaa..b63fcc5f1f 100644
> --- a/target/s390x/kvm.c
> +++ b/target/s390x/kvm.c
> @@ -1553,22 +1553,37 @@ static int do_store_adtl_status(S390CPU *cpu, hwaddr addr, hwaddr len)
>     return 0;
> }
>
> +struct sigp_save_area {
> +    uint64_t fprs[16]; /* 0x0000 */
> +    uint64_t grs[16]; /* 0x0080 */
> +    PSW psw; /* 0x0100 */
> +    uint8_t pad_0x0110[0x0118 - 0x0110]; /* 0x0110 */
> +    uint32_t prefix; /* 0x0118 */
> +    uint32_t fpc; /* 0x011c */
> +    uint8_t pad_0x0120[0x0124 - 0x0120]; /* 0x0120 */
> +    uint32_t todpr; /* 0x0124 */
> +    uint64_t cputm; /* 0x0128 */
> +    uint64_t ckc; /* 0x0130 */
> +    uint8_t pad_0x0138[0x0140 - 0x0138]; /* 0x0138 */
> +    uint32_t ars[16]; /* 0x0140 */
> +    uint64_t crs[16]; /* 0x0180 */
> +};
> +QEMU_BUILD_BUG_ON(sizeof(struct sigp_save_area) != 512);
> +
> #define KVM_S390_STORE_STATUS_DEF_ADDR offsetof(LowCore, floating_pt_save_area)
> -#define SAVE_AREA_SIZE 512
> static int kvm_s390_store_status(S390CPU *cpu, hwaddr addr, bool store_arch)
> {
>     static const uint8_t ar_id = 1;
> -    uint64_t ckc = cpu->env.ckc >> 8;
> -    void *mem;
> +    struct sigp_save_area *sa;
> +    hwaddr len = sizeof(*sa);
>     int i;
> -    hwaddr len = SAVE_AREA_SIZE;
>
> -    mem = cpu_physical_memory_map(addr, &len, 1);
> -    if (!mem) {
> +    sa = cpu_physical_memory_map(addr, &len, 1);
> +    if (!sa) {
>         return -EFAULT;
>     }
> -    if (len != SAVE_AREA_SIZE) {
> -        cpu_physical_memory_unmap(mem, len, 1, 0);
> +    if (len != sizeof(*sa)) {
> +        cpu_physical_memory_unmap(sa, len, 1, 0);
>         return -EFAULT;
>     }
>
> @@ -1576,19 +1591,26 @@ static int kvm_s390_store_status(S390CPU *cpu, hwaddr addr, bool store_arch)
>         cpu_physical_memory_write(offsetof(LowCore, ar_access_id), &ar_id, 1);
>     }
>     for (i = 0; i < 16; ++i) {
> -        *((uint64_t *)mem + i) = get_freg(&cpu->env, i)->ll;
> -    }
> -    memcpy(mem + 128, &cpu->env.regs, 128);
> -    memcpy(mem + 256, &cpu->env.psw, 16);
> -    memcpy(mem + 280, &cpu->env.psa, 4);
> -    memcpy(mem + 284, &cpu->env.fpc, 4);
> -    memcpy(mem + 292, &cpu->env.todpr, 4);
> -    memcpy(mem + 296, &cpu->env.cputm, 8);
> -    memcpy(mem + 304, &ckc, 8);
> -    memcpy(mem + 320, &cpu->env.aregs, 64);
> -    memcpy(mem + 384, &cpu->env.cregs, 128);
> +        sa->fprs[i] = cpu_to_be64(get_freg(&cpu->env, i)->ll);
> +    }
> +    for (i = 0; i < 16; ++i) {
> +        sa->grs[i] = cpu_to_be64(cpu->env.regs[i]);
> +    }
> +    sa->psw.addr = cpu_to_be64(cpu->env.psw.addr);
> +    sa->psw.mask = cpu_to_be64(get_psw_mask(&cpu->env));
> +    sa->prefix = cpu_to_be32(cpu->env.psa);
> +    sa->fpc = cpu_to_be32(cpu->env.fpc);
> +    sa->todpr = cpu_to_be32(cpu->env.todpr);
> +    sa->cputm = cpu_to_be64(cpu->env.cputm);
> +    sa->ckc = cpu_to_be64(cpu->env.ckc >> 8);
> +    for (i = 0; i < 16; ++i) {
> +        sa->ars[i] = cpu_to_be32(cpu->env.aregs[i]);
> +    }
> +    for (i = 0; i < 16; ++i) {
> +        sa->crs[i] = cpu_to_be64(cpu->env.cregs[i]);
> +    }
>
> -    cpu_physical_memory_unmap(mem, len, 1, len);
> +    cpu_physical_memory_unmap(sa, len, 1, len);
>
>     return 0;
> }
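
A note on the struct itself: QEMU_BUILD_BUG_ON() checks the total size at
compile time, and because every member is naturally aligned there is no
implicit padding, which is why dropping QEMU_PACKED in v2 is safe. If one
wanted the individual offset comments verified as well, the same idea
extends to per-field checks. The following is a standalone sketch, not
part of the patch; the PSW typedef is a stand-in for QEMU's PSW type and
plain C11 static_assert replaces QEMU_BUILD_BUG_ON():

  #include <assert.h>   /* static_assert */
  #include <stddef.h>   /* offsetof */
  #include <stdint.h>

  /* Stand-in for QEMU's PSW type: two 64-bit words, mask and address. */
  typedef struct { uint64_t mask; uint64_t addr; } PSW;

  struct sigp_save_area {
      uint64_t fprs[16];                   /* 0x0000 */
      uint64_t grs[16];                    /* 0x0080 */
      PSW psw;                             /* 0x0100 */
      uint8_t pad_0x0110[0x0118 - 0x0110]; /* 0x0110 */
      uint32_t prefix;                     /* 0x0118 */
      uint32_t fpc;                        /* 0x011c */
      uint8_t pad_0x0120[0x0124 - 0x0120]; /* 0x0120 */
      uint32_t todpr;                      /* 0x0124 */
      uint64_t cputm;                      /* 0x0128 */
      uint64_t ckc;                        /* 0x0130 */
      uint8_t pad_0x0138[0x0140 - 0x0138]; /* 0x0138 */
      uint32_t ars[16];                    /* 0x0140 */
      uint64_t crs[16];                    /* 0x0180 */
  };

  /* The commented offsets, verified by the compiler: a stray padding
   * byte or a mistyped field width breaks the build. */
  static_assert(offsetof(struct sigp_save_area, psw) == 0x100, "psw");
  static_assert(offsetof(struct sigp_save_area, prefix) == 0x118, "prefix");
  static_assert(offsetof(struct sigp_save_area, todpr) == 0x124, "todpr");
  static_assert(offsetof(struct sigp_save_area, ars) == 0x140, "ars");
  static_assert(offsetof(struct sigp_save_area, crs) == 0x180, "crs");
  static_assert(sizeof(struct sigp_save_area) == 512, "size");

For the patch as posted, the single sizeof() check is enough to catch
accidental padding.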
Reviewed-by: Thomas Huth <thuth@redhat.com>
Thread overview: 4+ messages
2017-09-22 14:03 [Qemu-devel] [PATCH v2] s390x/kvm: fix and cleanup storing CPU status David Hildenbrand
2017-09-22 14:28 ` Richard Henderson
2017-09-25 6:03 ` Thomas Huth [this message]
2017-09-26 10:04 ` Cornelia Huck