From: David Hildenbrand <david@redhat.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: borntraeger@de.ibm.com, qemu-s390x@nongnu.org,
Janosch Frank <frankja@linux.ibm.com>,
qemu-devel@nongnu.org
Subject: Re: [PATCH] s390x: kvm: Fix number of cpu reports for stsi 3.2.2
Date: Tue, 31 Mar 2020 12:38:10 +0200
Message-ID: <ee414bea-60e4-18c3-6130-1ea5758618c0@redhat.com>
In-Reply-To: <20200331121129.3f752286.cohuck@redhat.com>
On 31.03.20 12:11, Cornelia Huck wrote:
> On Mon, 30 Mar 2020 18:04:09 +0200
> David Hildenbrand <david@redhat.com> wrote:
>
>> On 30.03.20 17:38, Janosch Frank wrote:
>>> The cpu number reporting is handled by KVM; QEMU only fills in the
>>> VM name, uuid and other values.
>>>
>>> Unfortuantely KVM doesn't report reserved cpus and doesn't even know
>>
>> s/Unfortuantely/Unfortunately/
>>
>>> they exist until they are created via the ioctl.
>>>
>>> So let's fix up the cpu values after KVM has written its values to the
>>> 3.2.2 sysib.
>>
>> Maybe mention "similar to TCG in target/s390x/misc_helper.c:HELPER(stsi)".
>>
>>>
>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>> ---
>>> target/s390x/kvm.c | 18 +++++++++++++++++-
>>> 1 file changed, 17 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
>>> index 3630c15f45a48864..a1c4890bdf0c65e4 100644
>>> --- a/target/s390x/kvm.c
>>> +++ b/target/s390x/kvm.c
>>> @@ -1819,8 +1819,10 @@ static int handle_tsch(S390CPU *cpu)
>>>
>>> static void insert_stsi_3_2_2(S390CPU *cpu, __u64 addr, uint8_t ar)
>>> {
>>> + const MachineState *ms = MACHINE(qdev_get_machine());
>>> + uint16_t total_cpus = 0, conf_cpus = 0, reserved_cpus = 0;
>>> SysIB_322 sysib;
>>> - int del;
>>> + int del, i;
>>>
>>> if (s390_is_pv()) {
>>> s390_cpu_pv_mem_read(cpu, 0, &sysib, sizeof(sysib));
>>> @@ -1842,6 +1844,20 @@ static void insert_stsi_3_2_2(S390CPU *cpu, __u64 addr, uint8_t ar)
>>> memset(sysib.ext_names[del], 0,
>>> sizeof(sysib.ext_names[0]) * (sysib.count - del));
>>> }
>>> +
>>> + /* count the cpus and split them into configured and reserved ones */
>>> + for (i = 0; i < ms->possible_cpus->len; i++) {
>>> + total_cpus++;
>>> + if (ms->possible_cpus->cpus[i].cpu) {
>>> + conf_cpus++;
>>> + } else {
>>> + reserved_cpus++;
>>> + }
>>> + }
>>
>> We could of course factor this calculation out :)
>>
>> (and one could shrink the variables from 3 to 2)
>
> I'd vote for queuing this one on s390-fixes now (with the patch
> description tweaks) and doing any cleanup on top for the next release.
> Ok?
Fine with me.
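
(For the cleanup on top, here is a rough, untested sketch of how the
counting could be factored out -- the helper name is made up and not
from any actual follow-up patch. Since reserved cpus are simply
total - configured, two variables suffice:)

static void s390_count_total_conf_cpus(uint16_t *total, uint16_t *conf)
{
    const MachineState *ms = MACHINE(qdev_get_machine());
    int i;

    /* every possible cpu slot counts towards the total */
    *total = ms->possible_cpus->len;
    *conf = 0;
    for (i = 0; i < ms->possible_cpus->len; i++) {
        /* a slot with a non-NULL ->cpu has already been created */
        if (ms->possible_cpus->cpus[i].cpu) {
            (*conf)++;
        }
    }
}

The caller would then derive the reserved count on the spot, e.g.
reserved_cpus = total_cpus - conf_cpus.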
--
Thanks,
David / dhildenb
Thread overview: 6+ messages
2020-03-30 15:38 [PATCH] s390x: kvm: Fix number of cpu reports for stsi 3.2.2 Janosch Frank
2020-03-30 16:04 ` David Hildenbrand
2020-03-31 10:11 ` Cornelia Huck
2020-03-31 10:38 ` David Hildenbrand [this message]
2020-03-31 11:01 ` [PATCH v2] " Janosch Frank
2020-03-31 12:01 ` Cornelia Huck