From: sudeep.holla@arm.com (Sudeep Holla)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] drivers/firmware: psci_checker: stash and use topology_core_cpumask for hotplug tests
Date: Thu, 19 Jul 2018 10:50:58 +0100
Message-ID: <20180719095058.GA8702@e107155-lin>
In-Reply-To: <20180718164930.GB21924@red-moon>
On Wed, Jul 18, 2018 at 05:49:30PM +0100, Lorenzo Pieralisi wrote:
> On Wed, Jul 18, 2018 at 12:25:32PM +0100, Sudeep Holla wrote:
> > Commit 7f9545aa1a91 ("arm64: smp: remove cpu and numa topology
> > information when hotplugging out CPU") updates the cpu topology when
> > a CPU is hotplugged out. However, the PSCI checker code uses the
> > topology_core_cpumask pointers for some of its CPU hotplug tests.
> > Since it holds a pointer to the core_cpumask of the first CPU in each
> > group, and that cpumask is reduced to just that one CPU once the CPU
> > is hotplugged out, the test terminates after that particular CPU is
> > tested, whereas the intention is to cover every CPU in the group.
> >
> > In order to support that, we need to stash the topology_core_cpumask
> > before the start of the test and use that copy instead of a pointer
> > to a cpumask that will be updated on CPU hotplug.
> >
> > Fixes: 7f9545aa1a91a9a4 ("arm64: smp: remove cpu and numa topology information when hotplugging out CPU")
> > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
> > Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
> > ---
> >  drivers/firmware/psci_checker.c | 30 ++++++++++++++++++++----------
> >  1 file changed, 20 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
> > index bb1c068bff19..e330c6cb45f5 100644
> > --- a/drivers/firmware/psci_checker.c
> > +++ b/drivers/firmware/psci_checker.c
> > @@ -77,21 +77,23 @@ static int psci_ops_check(void)
> >          return 0;
> >  }
> >
> > -static int find_cpu_groups(const struct cpumask *cpus,
> > -                           const struct cpumask **cpu_groups)
> > +static int find_cpu_groups(cpumask_var_t *cpu_groups)
> >  {
> >          unsigned int nb = 0;
> >          cpumask_var_t tmp;
> >
> >          if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
> >                  return -ENOMEM;
> > -        cpumask_copy(tmp, cpus);
> > +        cpumask_copy(tmp, cpu_online_mask);
> >
> >          while (!cpumask_empty(tmp)) {
> >                  const struct cpumask *cpu_group =
> >                          topology_core_cpumask(cpumask_any(tmp));
> >
> > -                cpu_groups[nb++] = cpu_group;
> > +                if (cpu_groups && cpu_groups[nb])
> > +                        cpumask_copy(cpu_groups[nb], cpu_group);
> > +
> > +                nb++;
> >                  cpumask_andnot(tmp, tmp, cpu_group);
> >          }
> >
> > @@ -169,25 +171,31 @@ static unsigned int down_and_up_cpus(const struct cpumask *cpus,
> >  static int hotplug_tests(void)
> >  {
> >          int err;
> > -        cpumask_var_t offlined_cpus;
> > +        cpumask_var_t offlined_cpus, *cpu_groups;
> >          int i, nb_cpu_group;
> > -        const struct cpumask **cpu_groups;
> >          char *page_buf;
> >
> > +        /* first run to just get the number of cpu groups */
> > +        nb_cpu_group = find_cpu_groups(NULL);
> > +
> >          err = -ENOMEM;
> >          if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
> >                  return err;
> > -        /* We may have up to nb_available_cpus cpu_groups. */
> > -        cpu_groups = kmalloc_array(nb_available_cpus, sizeof(*cpu_groups),
> > -                                   GFP_KERNEL);
> > +        cpu_groups = kcalloc(nb_cpu_group, sizeof(*cpu_groups), GFP_KERNEL);
> >          if (!cpu_groups)
> >                  goto out_free_cpus;
> > +
> > +        for (i = 0; i < nb_cpu_group; ++i)
> > +                if (!alloc_cpumask_var(&cpu_groups[i], GFP_KERNEL))
> > +                        goto out_free_cpu_groups;
> > +
> >          page_buf = (char *)__get_free_page(GFP_KERNEL);
> >          if (!page_buf)
> >                  goto out_free_cpu_groups;
> >
> >          err = 0;
> > -        nb_cpu_group = find_cpu_groups(cpu_online_mask, cpu_groups);
> > +        /* second run to populate/copy the cpumask */
> > +        nb_cpu_group = find_cpu_groups(cpu_groups);
> >
> >          /*
> >           * Of course the last CPU cannot be powered down and cpu_down() should
> > @@ -212,6 +220,8 @@
> >
> >          free_page((unsigned long)page_buf);
> >  out_free_cpu_groups:
> > +        for (i = 0; i < nb_cpu_group; ++i)
> > +                free_cpumask_var(cpu_groups[i]);
> >          kfree(cpu_groups);
> >  out_free_cpus:
> >          free_cpumask_var(offlined_cpus);
>
> Hi Sudeep,
>
> thanks for the patch. I reckon that adding two functions, say,
> alloc_cpu_groups() and free_cpu_groups(), would make the code more
> readable than relying on find_cpu_groups() to first count and then
> copy; this is about readability rather than correctness.
>
I agree. I admit I was lazy and tried to keep the delta small. I will
respin.
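
Roughly along these lines, I suppose. This is an untested sketch only,
and the helper names and signatures below (alloc_init_cpu_groups() and
free_cpu_groups()) are just my first guess at what the respin could
look like, not final code:

static void free_cpu_groups(int num, cpumask_var_t **pcpu_groups)
{
        int i;
        cpumask_var_t *cpu_groups = *pcpu_groups;

        for (i = 0; i < num; ++i)
                free_cpumask_var(cpu_groups[i]);
        kfree(cpu_groups);
        *pcpu_groups = NULL;
}

static int alloc_init_cpu_groups(cpumask_var_t **pcpu_groups)
{
        int num_groups = 0;
        cpumask_var_t tmp, *cpu_groups;

        if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
                return -ENOMEM;

        /* We may have up to nb_available_cpus cpu_groups. */
        cpu_groups = kcalloc(nb_available_cpus, sizeof(*cpu_groups),
                             GFP_KERNEL);
        if (!cpu_groups) {
                free_cpumask_var(tmp);
                return -ENOMEM;
        }

        cpumask_copy(tmp, cpu_online_mask);

        while (!cpumask_empty(tmp)) {
                const struct cpumask *cpu_group =
                        topology_core_cpumask(cpumask_any(tmp));

                /* stash a copy so CPU hotplug cannot modify it later */
                if (!alloc_cpumask_var(&cpu_groups[num_groups], GFP_KERNEL)) {
                        free_cpumask_var(tmp);
                        free_cpu_groups(num_groups, &cpu_groups);
                        return -ENOMEM;
                }
                cpumask_copy(cpu_groups[num_groups++], cpu_group);
                cpumask_andnot(tmp, tmp, cpu_group);
        }

        free_cpumask_var(tmp);
        *pcpu_groups = cpu_groups;

        return num_groups;
}

That would drop the two-pass calling convention entirely:
hotplug_tests() would call alloc_init_cpu_groups() once, use the
stashed copies, and release everything with free_cpu_groups() on the
way out.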
--
Regards,
Sudeep