From: Thomas Huth <thuth@redhat.com>
To: "Cédric Le Goater" <clg@kaod.org>,
	"Claudio Imbrenda" <imbrenda@linux.ibm.com>,
	"Janosch Frank" <frankja@linux.ibm.com>
Cc: qemu-s390x@nongnu.org, qemu-devel@nongnu.org,
	"Halil Pasic" <pasic@linux.ibm.com>,
	"Christian Borntraeger" <borntraeger@linux.ibm.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"David Hildenbrand" <david@redhat.com>,
	"Ilya Leoshkevich" <iii@linux.ibm.com>,
	"Eric Farman" <farman@linux.ibm.com>,
	"Cédric Le Goater" <clg@redhat.com>,
	"Claudio Imbrenda" <imbrenda@linux.ibm.com>
Subject: Re: [PATCH 2/5] s390x/pv: Implement CGS check handler
Date: Thu, 5 Jan 2023 15:53:43 +0100	[thread overview]
Message-ID: <0ec1a61f-4ae9-e867-11ea-4f29944d31f8@redhat.com> (raw)
In-Reply-To: <41e8c81a-ac90-3b57-c34b-c4e461323391@kaod.org>

On 05/01/2023 15.47, Cédric Le Goater wrote:
> On 1/5/23 14:58, Claudio Imbrenda wrote:
>> On Thu, 5 Jan 2023 12:42:54 +0100
>> Thomas Huth <thuth@redhat.com> wrote:
>>
>>> On 04/01/2023 12.51, Cédric Le Goater wrote:
>>>> From: Cédric Le Goater <clg@redhat.com>
>>>>
>>>> When a protected VM is started with the maximum number of CPUs (248),
>>>> the service call providing information on the CPUs requires more
>>>> buffer space than allocated, and QEMU aborts ungracefully:
>>>>
>>>>       LOADPARM=[........]
>>>>       Using virtio-blk.
>>>>       Using SCSI scheme.
>>>>       
>>>> ................................................................................... 
>>>>
>>>>       qemu-system-s390x: KVM_S390_MEM_OP failed: Argument list too long
>>>>
>>>> Implement a check for this limitation in the ConfidentialGuestSupportClass
>>>> check handler and report a clear error to the user before the
>>>> machine starts.
>>>>
>>>> Signed-off-by: Cédric Le Goater <clg@redhat.com>
>>>> ---
>>>>    hw/s390x/pv.c | 23 +++++++++++++++++++++++
>>>>    1 file changed, 23 insertions(+)
>>>>
>>>> diff --git a/hw/s390x/pv.c b/hw/s390x/pv.c
>>>> index 8dfe92d8df..3a7ec70634 100644
>>>> --- a/hw/s390x/pv.c
>>>> +++ b/hw/s390x/pv.c
>>>> @@ -266,6 +266,26 @@ int s390_pv_kvm_init(ConfidentialGuestSupport *cgs, 
>>>> Error **errp)
>>>>        return 0;
>>>>    }
>>>> +static bool s390_pv_check_cpus(Error **errp)
>>>> +{
>>>> +    MachineState *ms = MACHINE(qdev_get_machine());
>>>> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
>>>> +    uint32_t pv_max_cpus = mc->max_cpus - 1;
>>>
>>> Not sure whether "mc->max_cpus - 1" is the right approach here. I think it
>>> would be better to calculate the number of CPUs that we can support.
>>>
>>> So AFAIK the problem is the SCLP information that is gathered during
>>> read_SCP_info() in hw/s390x/sclp.c. If protected virtualization is enabled,
>>> everything has to fit in one page (i.e. 4096 bytes) there.
>>>
>>> So we have space for
>>>
>>>    (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry)
>>>
>>> CPUs.
>>>
>>> With S390_FEAT_EXTENDED_LENGTH_SCCB enabled, offset_cpu is 144 (see struct
>>> ReadInfo in sclp.h), otherwise it is 128.
>>>
>>> That means, with S390_FEAT_EXTENDED_LENGTH_SCCB we can have a maximum of:
>>>
>>>    (4096 - 144) / 16 = 247 CPUs
>>>
>>> which is what you were trying to check with the mc->max_cpus - 1 here.
> 
> yes. That's much better.
> 
>>> But with "-cpu els=off", it sounds like we could fit all 248 also with
>>> protected VMs? Could you please give it a try?
> 
> It runs. Unfortunately, QEMU also complains with:
> 
>    qemu-system-s390x: warning: 'diag318' requires 'els'.
>    qemu-system-s390x: warning: 'diag318' requires 'els'.
>    qemu-system-s390x: warning: 'diag318' requires 'els'.
> 
> when els is off.

There is also a switch for that: -cpu els=off,diag318=off

>>> Anyway, instead of using "pv_max_cpus = mc->max_cpus - 1" I'd suggest to use
>>> something like this instead:
>>>
>>>    int offset_cpu = s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) ?
>>>                        offsetof(ReadInfo, entries) :
>>>                        SCLP_READ_SCP_INFO_FIXED_CPU_OFFSET;
>>>    pv_max_cpus = (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry);
>>
>> I agree with Thomas here
> 
> 
> The problem is that QEMU doesn't know about the S390_FEAT_EXTENDED_LENGTH_SCCB
> feature yet when the PV object link is first checked. So 248 CPUs are considered
> valid, but when DIAG308_PV_START is called, it fails.

Drat. Is there any chance that the check could be done somewhere later?

> Let's simplify and use:
> 
>      int offset_cpu = offsetof(ReadInfo, entries);
>      pv_max_cpus = (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry);
> 
> ?

Depends ... if it is possible to use 248 CPUs with -cpu els=off,diag318=off 
then it would be nicer to allow that, too?

  Thomas



Thread overview: 18+ messages
2023-01-04 11:51 [PATCH 0/5] s390x/pv: Improve protected VM support Cédric Le Goater
2023-01-04 11:51 ` [PATCH 1/5] confidential guest support: Introduce a 'check' class handler Cédric Le Goater
2023-01-05  8:46   ` Thomas Huth
2023-01-05 10:34     ` Philippe Mathieu-Daudé
2023-01-05 13:56     ` Cédric Le Goater
2023-01-04 11:51 ` [PATCH 2/5] s390x/pv: Implement CGS check handler Cédric Le Goater
2023-01-05 11:42   ` Thomas Huth
2023-01-05 13:58     ` Claudio Imbrenda
2023-01-05 14:47       ` Cédric Le Goater
2023-01-05 14:53         ` Thomas Huth [this message]
2023-01-05 17:12           ` Cédric Le Goater
2023-01-04 11:51 ` [PATCH 3/5] s390x/pv: Check for support on the host Cédric Le Goater
2023-01-05 11:46   ` Thomas Huth
2023-01-04 11:51 ` [PATCH 4/5] s390x/pv: Introduce a s390_pv_check() helper for runtime Cédric Le Goater
2023-01-05 12:33   ` Thomas Huth
2023-01-05 14:24   ` Claudio Imbrenda
2023-01-04 11:51 ` [PATCH 5/5] s390x/pv: Move check on hugepage under s390_pv_guest_check() Cédric Le Goater
2023-01-05 12:37   ` Thomas Huth
