From: Greg Kurz <groug@kaod.org>
To: "Cédric Le Goater" <clg@kaod.org>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [PATCH 1/7] spapr, xics: Get number of servers with a XICSFabricClass method
Date: Thu, 3 Oct 2019 14:49:52 +0200
Message-ID: <20191003144952.181da0e2@bahia.lan>
In-Reply-To: <a00c6fee-42b8-c923-386f-5fa909f6f99b@kaod.org>
On Thu, 3 Oct 2019 14:24:06 +0200
Cédric Le Goater <clg@kaod.org> wrote:
> On 03/10/2019 14:00, Greg Kurz wrote:
> > The number of servers, i.e. the upper bound on VCPU ids, is
> > currently only needed to generate the "interrupt-controller" node
> > in the DT. Soon it will be needed to inform the XICS-on-XIVE KVM
> > device that it can allocate fewer resources in the XIVE HW.
> >
> > Add a method to XICSFabricClass for this purpose.
>
> This is sPAPR code and PowerNV does not care.
>
Then PowerNV doesn't need to implement the method.
> why can't we simply call spapr_max_server_number(spapr)?
>
Because the backend shouldn't reach out to sPAPR machine
internals. XICSFabric is the natural interface for ICS/ICP
if they need something from the machine.
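
To illustrate (a rough sketch only, with a hypothetical helper name --
not something this patch adds): a backend that needs the server count
can stay behind the interface instead of casting to the machine type:

static uint32_t ics_get_nr_servers(ICSState *ics)
{
    /*
     * Reaching into the machine would look like this and couples the
     * backend to sPAPR:
     *
     *   return spapr_max_server_number(SPAPR_MACHINE(qdev_get_machine()));
     *
     * Going through the XICSFabric method keeps the backend
     * machine-agnostic:
     */
    return xics_nr_servers(ics->xics);
}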
>
> > Implement it
> > for sPAPR and use it to generate the "interrupt-controller" node.
> >
> > Signed-off-by: Greg Kurz <groug@kaod.org>
> > ---
> > hw/intc/xics.c | 7 +++++++
> > hw/intc/xics_spapr.c | 3 ++-
> > hw/ppc/spapr.c | 8 ++++++++
> > include/hw/ppc/xics.h | 2 ++
> > 4 files changed, 19 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/intc/xics.c b/hw/intc/xics.c
> > index dfe7dbd254ab..f82072935266 100644
> > --- a/hw/intc/xics.c
> > +++ b/hw/intc/xics.c
> > @@ -716,6 +716,13 @@ ICPState *xics_icp_get(XICSFabric *xi, int server)
> > return xic->icp_get(xi, server);
> > }
> >
> > +uint32_t xics_nr_servers(XICSFabric *xi)
> > +{
> > + XICSFabricClass *xic = XICS_FABRIC_GET_CLASS(xi);
> > +
> > + return xic->nr_servers(xi);
> > +}
> > +
> > void ics_set_irq_type(ICSState *ics, int srcno, bool lsi)
> > {
> > assert(!(ics->irqs[srcno].flags & XICS_FLAGS_IRQ_MASK));
> > diff --git a/hw/intc/xics_spapr.c b/hw/intc/xics_spapr.c
> > index 6e5eb24b3cca..aa568ed0dc0d 100644
> > --- a/hw/intc/xics_spapr.c
> > +++ b/hw/intc/xics_spapr.c
> > @@ -311,8 +311,9 @@ static void ics_spapr_realize(DeviceState *dev, Error **errp)
> > void spapr_dt_xics(SpaprMachineState *spapr, uint32_t nr_servers, void *fdt,
> > uint32_t phandle)
> > {
> > + ICSState *ics = spapr->ics;
> > uint32_t interrupt_server_ranges_prop[] = {
> > - 0, cpu_to_be32(nr_servers),
> > + 0, cpu_to_be32(xics_nr_servers(ics->xics)),
> > };
> > int node;
> >
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 514a17ae74d6..b8b9796c88e4 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -4266,6 +4266,13 @@ static ICPState *spapr_icp_get(XICSFabric *xi, int vcpu_id)
> > return cpu ? spapr_cpu_state(cpu)->icp : NULL;
> > }
> >
> > +static uint32_t spapr_nr_servers(XICSFabric *xi)
> > +{
> > + SpaprMachineState *spapr = SPAPR_MACHINE(xi);
> > +
> > + return spapr_max_server_number(spapr);
> > +}
> > +
> > static void spapr_pic_print_info(InterruptStatsProvider *obj,
> > Monitor *mon)
> > {
> > @@ -4423,6 +4430,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
> > xic->ics_get = spapr_ics_get;
> > xic->ics_resend = spapr_ics_resend;
> > xic->icp_get = spapr_icp_get;
> > + xic->nr_servers = spapr_nr_servers;
> > ispc->print_info = spapr_pic_print_info;
> > /* Force NUMA node memory size to be a multiple of
> > * SPAPR_MEMORY_BLOCK_SIZE (256M) since that's the granularity
> > diff --git a/include/hw/ppc/xics.h b/include/hw/ppc/xics.h
> > index 1e6a9300eb2b..e6bb1239e8f8 100644
> > --- a/include/hw/ppc/xics.h
> > +++ b/include/hw/ppc/xics.h
> > @@ -151,9 +151,11 @@ typedef struct XICSFabricClass {
> > ICSState *(*ics_get)(XICSFabric *xi, int irq);
> > void (*ics_resend)(XICSFabric *xi);
> > ICPState *(*icp_get)(XICSFabric *xi, int server);
> > + uint32_t (*nr_servers)(XICSFabric *xi);
> > } XICSFabricClass;
> >
> > ICPState *xics_icp_get(XICSFabric *xi, int server);
> > +uint32_t xics_nr_servers(XICSFabric *xi);
> >
> > /* Internal XICS interfaces */
> > void icp_set_cppr(ICPState *icp, uint8_t cppr);
> >
>
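For the record, here is where this is heading (a hypothetical sketch,
not part of this patch): once the XICS KVM backend can ask the fabric
for the number of servers, it can pass that value to the kernel device.
The KVM_DEV_XICS_GRP_CTRL / KVM_DEV_XICS_NR_SERVERS names below are
assumptions based on what the later patches in this series rely on; the
helper itself is illustrative only.

static int xics_kvm_set_nr_servers(XICSFabric *xi, Error **errp)
{
    uint32_t nr_servers = xics_nr_servers(xi);

    /* kernel_xics_fd is the fd of the in-kernel XICS device */
    return kvm_device_access(kernel_xics_fd, KVM_DEV_XICS_GRP_CTRL,
                             KVM_DEV_XICS_NR_SERVERS, &nr_servers,
                             true, errp);
}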
Thread overview: 31+ messages
2019-10-03 12:00 [PATCH 0/7] spapr: Use less XIVE HW resources in KVM Greg Kurz
2019-10-03 12:00 ` [PATCH 1/7] spapr, xics: Get number of servers with a XICSFabricClass method Greg Kurz
2019-10-03 12:24 ` Cédric Le Goater
2019-10-03 12:49 ` Greg Kurz [this message]
2019-10-03 12:58 ` Cédric Le Goater
2019-10-03 13:02 ` Greg Kurz
2019-10-03 13:19 ` Cédric Le Goater
2019-10-03 13:41 ` Greg Kurz
2019-10-03 13:59 ` Cédric Le Goater
2019-10-03 14:58 ` Greg Kurz
2019-10-03 12:01 ` [PATCH 2/7] spapr, xive: Turn "nr-ends" property into "nr-servers" property Greg Kurz
2019-10-03 12:21 ` Cédric Le Goater
2019-10-03 12:44 ` Greg Kurz
2019-10-04 4:07 ` David Gibson
2019-10-04 5:53 ` Cédric Le Goater
2019-10-04 6:52 ` Greg Kurz
2019-10-04 7:27 ` Cédric Le Goater
2019-10-04 6:51 ` Greg Kurz
2019-10-05 10:23 ` David Gibson
2019-10-03 12:01 ` [PATCH 3/7] spapr, xics, xive: Drop nr_servers argument in DT-related functions Greg Kurz
2019-10-03 12:25 ` Cédric Le Goater
2019-10-03 12:52 ` Greg Kurz
2019-10-03 12:01 ` [PATCH RFC 4/7] linux-headers: Update against 5.3-rc2 Greg Kurz
2019-10-03 12:01 ` [PATCH 5/7] spapr/xics: Configure number of servers in KVM Greg Kurz
2019-10-03 12:29 ` Cédric Le Goater
2019-10-03 12:55 ` Greg Kurz
2019-10-03 12:01 ` [PATCH 6/7] spapr/xive: " Greg Kurz
2019-10-03 12:30 ` Cédric Le Goater
2019-10-03 12:02 ` [PATCH 7/7] spapr: Set VSMT to smp_threads by default Greg Kurz
2019-10-14 6:12 ` David Gibson
2019-10-14 11:31 ` Greg Kurz