From: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
To: Jason Andryuk <jason.andryuk@amd.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
qemu-devel@nongnu.org, anthony@xenproject.org, paul@xen.org,
peter.maydell@linaro.org, alex.bennee@linaro.org,
xenia.ragiadakou@amd.com, edgar.iglesias@amd.com,
xen-devel@lists.xenproject.org, qemu-arm@nongnu.org,
andrew.cooper3@citrix.com
Subject: Re: [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests
Date: Tue, 20 Aug 2024 16:13:10 +0200 [thread overview]
Message-ID: <CAJy5ezonjsd95GhkoagrivQy_Vme7wyj1xLvVd9ZxNP_tJyBRA@mail.gmail.com> (raw)
In-Reply-To: <93de8d6d-6123-4038-a566-d134206ba608@amd.com>
On Sat, Aug 17, 2024 at 2:45 AM Jason Andryuk <jason.andryuk@amd.com> wrote:
> On 2024-08-16 12:53, Stefano Stabellini wrote:
> > On Fri, 16 Aug 2024, Edgar E. Iglesias wrote:
> >> On Thu, Aug 15, 2024 at 2:30 AM Stefano Stabellini <sstabellini@kernel.org> wrote:
> >> On Wed, 14 Aug 2024, Edgar E. Iglesias wrote:
> >> > On Tue, Aug 13, 2024 at 03:52:32PM -0700, Stefano Stabellini wrote:
> >> > > On Tue, 13 Aug 2024, Edgar E. Iglesias wrote:
> >> > > > On Mon, Aug 12, 2024 at 06:47:17PM -0700, Stefano Stabellini wrote:
> >> > > > > On Mon, 12 Aug 2024, Edgar E. Iglesias wrote:
> >> > > > > > From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> >> > > > > >
> >> > > > > > Add SMP support for Xen PVH ARM guests. Create max_cpus ioreq
> >> > > > > > servers to handle hotplug.
> >> > > > > >
> >> > > > > > Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> >> > > > > > ---
> >> > > > > > hw/arm/xen_arm.c | 5 +++--
> >> > > > > > 1 file changed, 3 insertions(+), 2 deletions(-)
> >> > > > > >
> >> > > > > > diff --git a/hw/arm/xen_arm.c b/hw/arm/xen_arm.c
> >> > > > > > index 5f75cc3779..ef8315969c 100644
> >> > > > > > --- a/hw/arm/xen_arm.c
> >> > > > > > +++ b/hw/arm/xen_arm.c
> >> > > > > > @@ -173,7 +173,7 @@ static void xen_arm_init(MachineState *machine)
> >> > > > > >
> >> > > > > > xen_init_ram(machine);
> >> > > > > >
> >> > > > > > -    xen_register_ioreq(xam->state, machine->smp.cpus, &xen_memory_listener);
> >> > > > > > +    xen_register_ioreq(xam->state, machine->smp.max_cpus, &xen_memory_listener);
> >> > > > > >
> >> > > > > > xen_create_virtio_mmio_devices(xam);
> >> > > > > >
> >> > > > > > @@ -218,7 +218,8 @@ static void xen_arm_machine_class_init(ObjectClass *oc, void *data)
> >> > > > > > MachineClass *mc = MACHINE_CLASS(oc);
> >> > > > > > mc->desc = "Xen PVH ARM machine";
> >> > > > > > mc->init = xen_arm_init;
> >> > > > > > - mc->max_cpus = 1;
> >> > > > > > + /* MAX number of vcpus supported by Xen. */
> >> > > > > > + mc->max_cpus = GUEST_MAX_VCPUS;
> >> > > > >
> >> > > > > Will this cause allocations of data structures with 128 elements?
> >> > > > > Looking at hw/xen/xen-hvm-common.c:xen_do_ioreq_register it seems
> >> > > > > possible? Or hw/xen/xen-hvm-common.c:xen_do_ioreq_register is called
> >> > > >
> >> > > > Yes, in theory there's probably overhead with this but, as you correctly
> >> > > > noted below, a PVH-aware xl will set the max_cpus option to a lower value.
> >> > > >
> >> > > > With a non-PVH-aware xl, I was a little worried about the overhead,
> >> > > > but I couldn't see any visible slow-down on ARM, neither in boot nor in
> >> > > > network performance (I didn't run very sophisticated benchmarks).
> >> > >
> >> > > What do you mean by "non-pvh aware xl"? All useful versions of xl
> >> > > support pvh?
> >> >
> >> >
> >> > I mean an xl without our PVH patches merged.
> >> > xl in upstream doesn't know much about PVH yet.
> >> > Even for ARM, we're still carrying significant patches in our tree.
> >>
> >> Oh I see. In that case, I don't think we need to support "non-pvh aware xl".
> >>
> >>
> >> > > > > later on with the precise vCPU value which should be provided to QEMU
> >> > > > > via the -smp command line option
> >> > > > > (tools/libs/light/libxl_dm.c:libxl__build_device_model_args_new)?
> >> > > >
> >> > > > Yes, a PVH-aware xl will for example pass -smp 2,maxcpus=4 based on
> >> > > > values from the xl.cfg. If the user doesn't set maxvcpus in xl.cfg,
> >> > > > xl will set maxvcpus to the same value as vcpus.
> >> > >
> >> > > OK good. In that case, if this is just an initial value meant to be
> >> > > overwritten, I think it is best to keep it as 1.
> >> >
> >> > Sorry but that won't work. I think the confusion here may be that
> >> > it's easy to mix up mc->max_cpus and machine->smp.max_cpus; these are
> >> > not the same. They have different purposes.
> >> >
> >> > I'll try to clarify the 3 values in play.
> >> >
> >> > machine->smp.cpus:
> >> > Number of guest vcpus active at boot.
> >> > Passed to QEMU via the -smp command-line option.
> >> > We don't use this value in QEMU's ARM PVH machines.
> >> >
> >> > machine->smp.max_cpus:
> >> > Max number of vcpus that the guest can use (equal to or larger than
> >> > machine->smp.cpus).
> >> > Will be set by xl via the "-smp X,maxcpus=Y" command-line option to QEMU.
> >> > Taken from maxvcpus in xl.cfg, same as XEN_DMOP_nr_vcpus.
> >> > This is what we use for xen_register_ioreq().
> >> >
> >> > mc->max_cpus:
> >> > Absolute MAX in QEMU used to cap the -smp command-line options.
> >> > If xl tries to set -smp (machine->smp.max_cpus) larger than this,
> >> > QEMU will bail out.
> >> > Used to set up xen_register_ioreq() ONLY if -smp maxcpus was NOT set
> >> > (i.e. by a non-PVH-aware xl).
> >> > Cannot be 1 because that would limit QEMU to MAX 1 vcpu.
> >> >
> >> > I guess we could set mc->max_cpus to what XEN_DMOP_nr_vcpus returns,
> >> > but I'll have to check if we can even issue that hypercall this early
> >> > in QEMU, since mc->max_cpus is set up before we even parse the machine
> >> > options. We may not yet know what domid we're attaching to.
> >>
> >> If mc->max_cpus is the absolute max and it will not be used if -smp is
> >> passed to QEMU, then I think it is OK to use GUEST_MAX_VCPUS.
> >>
> >> Looking at this a little more: if users (xl) don't pass an -smp option,
> >> we actually default to smp.max_cpus=1.
> >> So, another option is to simply remove the upper limit in QEMU (e.g. we
> >> can set mc->max_cpus to something very large like UINT32_MAX).
> >> That would avoid early hypercalls, avoid using GUEST_MAX_VCPUS, and
> >> always let xl dictate the max_cpus value using the -smp cmdline option.
> >
> > As the expectation is that there will always be a smp.max_cpus option
> > passed to QEMU, I would avoid an extra early hypercall.
> >
> > For the initial value, I would use something static and large, but not
> > as unreasonably large as UINT32_MAX, to be more resilient in (erroneous)
> > cases where smp.max_cpus is not passed.
> >
> > So I would initialize it to GUEST_MAX_VCPUS, or if we don't want to use
> > GUEST_MAX_VCPUS, something equivalent in the 64-256 range.
>
Thanks Stefano,

I'm going to send a v2 following this suggestion of using GUEST_MAX_VCPUS.
I will also add comments clarifying that this is a MAX value for the
command-line option and not what gets passed to xen_register_ioreq().
We can continue the discussion from there to see if we want to change things;
I don't have a strong opinion here, so I'm happy to go either way.
> >
> > Alternatively, we can have a runtime check and exit with a warning if
> > smp.max_cpus is not set.
>
> FYI, xl only passes a -smp option when the domU has more than 1 vcpu,
> so its absence implies a single vcpu.
>
>
Thanks Jason. Yes, in that case the default of cpus=1, maxcpus=1 gets set.
I was initially under the wrong assumption that without -smp options, the
max would get set. This is what I was trying to clarify in my previous email:

>> Looking at this a little more: if users (xl) don't pass an -smp option,
>> we actually default to smp.max_cpus=1.
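The defaulting behaviour discussed above can be sketched as follows. The helper names are hypothetical (QEMU's real -smp parsing lives elsewhere); the point is that when xl omits -smp, both values default to 1, and the number of ioreq servers tracks smp.max_cpus rather than mc->max_cpus:

```c
#include <assert.h>

/* Hypothetical sketch of the defaulting discussed in the thread;
 * not QEMU's actual option parser. */
typedef struct {
    unsigned cpus;
    unsigned max_cpus;
} SmpConfig;

/* If xl passed no -smp option (single-vcpu domU), default to 1/1;
 * otherwise honour "-smp N,maxcpus=M" (M falls back to N if unset). */
static SmpConfig effective_smp(int smp_given, unsigned cpus, unsigned maxcpus)
{
    SmpConfig s = { 1, 1 };
    if (smp_given) {
        s.cpus = cpus;
        s.max_cpus = maxcpus ? maxcpus : cpus;
    }
    return s;
}

/* The number of ioreq servers registered follows smp.max_cpus. */
static unsigned ioreq_server_count(const SmpConfig *s)
{
    return s->max_cpus;
}
```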
Best regards,
Edgar
Thread overview: 35+ messages
2024-08-12 13:05 [PATCH v1 00/10] xen: pvh: Partial QOM:fication with new x86 PVH machine Edgar E. Iglesias
2024-08-12 13:05 ` [PATCH v1 01/10] MAINTAINERS: Add docs/system/arm/xenpvh.rst Edgar E. Iglesias
2024-08-13 1:45 ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 02/10] hw/arm: xenpvh: Update file header to use SPDX Edgar E. Iglesias
2024-08-13 1:45 ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 03/10] hw/arm: xenpvh: Tweak machine description Edgar E. Iglesias
2024-08-13 1:45 ` Stefano Stabellini
2024-08-12 13:05 ` [PATCH v1 04/10] hw/arm: xenpvh: Add support for SMP guests Edgar E. Iglesias
2024-08-13 1:47 ` Stefano Stabellini
2024-08-13 17:02 ` Edgar E. Iglesias
2024-08-13 17:20 ` Andrew Cooper
2024-08-13 22:52 ` Stefano Stabellini
2024-08-14 11:49 ` Edgar E. Iglesias
2024-08-15 0:30 ` Stefano Stabellini
2024-08-16 10:39 ` Edgar E. Iglesias
2024-08-16 16:53 ` Stefano Stabellini
2024-08-16 22:58 ` Jason Andryuk
2024-08-20 14:13 ` Edgar E. Iglesias [this message]
2024-08-12 13:06 ` [PATCH v1 05/10] hw/arm: xenpvh: Break out a common PVH module Edgar E. Iglesias
2024-08-13 1:47 ` Stefano Stabellini
2024-08-14 12:03 ` Edgar E. Iglesias
2024-08-12 13:06 ` [PATCH v1 06/10] hw/arm: xenpvh: Rename xen_arm.c -> xen-pvh.c Edgar E. Iglesias
2024-08-13 1:48 ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 07/10] hw/arm: xenpvh: Reverse virtio-mmio creation order Edgar E. Iglesias
2024-08-13 1:48 ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 08/10] hw/xen: pvh-common: Add support for creating PCIe/GPEX Edgar E. Iglesias
2024-08-13 1:48 ` Stefano Stabellini
2024-08-14 15:26 ` Edgar E. Iglesias
2024-08-15 0:29 ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 09/10] hw/i386/xen: Add a Xen PVH x86 machine Edgar E. Iglesias
2024-08-13 1:48 ` Stefano Stabellini
2024-08-14 15:50 ` Edgar E. Iglesias
2024-08-15 0:19 ` Stefano Stabellini
2024-08-12 13:06 ` [PATCH v1 10/10] docs/system/i386: xenpvh: Add a basic description Edgar E. Iglesias
2024-08-13 1:49 ` Stefano Stabellini