From: kajoljain <kjain@linux.ibm.com>
To: Michael Ellerman <mpe@ellerman.id.au>, linuxppc-dev@lists.ozlabs.org
Cc: nathanl@linux.ibm.com, ego@linux.vnet.ibm.com, suka@us.ibm.com,
maddy@linux.vnet.ibm.com, anju@linux.vnet.ibm.com
Subject: Re: [PATCH v3 1/2] powerpc/perf/hv-24x7: Add cpu hotplug support
Date: Tue, 7 Jul 2020 16:29:22 +0530
Message-ID: <03a3af5c-5d04-8c55-5597-d231f0ab014b@linux.ibm.com>
In-Reply-To: <87zh8d5oab.fsf@mpe.ellerman.id.au>
On 7/6/20 8:43 AM, Michael Ellerman wrote:
> Kajol Jain <kjain@linux.ibm.com> writes:
>> This patch adds cpu hotplug functions to the hv_24x7 pmu.
>> A new cpuhp_state enum, "CPUHP_AP_PERF_POWERPC_HV_24x7_ONLINE",
>> is added.
>>
>> The online callback function updates the cpumask only if it is
>> empty, since the primary intention of adding hotplug support
>> is to designate a single CPU to make the HCALL that collects the
>> counter data.
>>
>> The offline function tests and clears the corresponding cpu in the
>> cpumask and updates the cpumask to any other active cpu.
>>
>> Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
>> Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
>> ---
>> arch/powerpc/perf/hv-24x7.c | 45 +++++++++++++++++++++++++++++++++++++
>> include/linux/cpuhotplug.h | 1 +
>> 2 files changed, 46 insertions(+)
>>
>> diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
>> index db213eb7cb02..ce4739e2b407 100644
>> --- a/arch/powerpc/perf/hv-24x7.c
>> +++ b/arch/powerpc/perf/hv-24x7.c
>> @@ -31,6 +31,8 @@ static int interface_version;
>> /* Whether we have to aggregate result data for some domains. */
>> static bool aggregate_result_elements;
>>
>> +static cpumask_t hv_24x7_cpumask;
>> +
>> static bool domain_is_valid(unsigned domain)
>> {
>> switch (domain) {
>> @@ -1641,6 +1643,44 @@ static struct pmu h_24x7_pmu = {
>> .capabilities = PERF_PMU_CAP_NO_EXCLUDE,
>> };
>>
>> +static int ppc_hv_24x7_cpu_online(unsigned int cpu)
>> +{
>> + /* Make this CPU the designated target for counter collection */
>
> The comment implies every newly onlined CPU will become the target, but
> actually it's only the first onlined CPU.
>
> So I think the comment needs updating, or you could just drop the
> comment, I think the code is fairly clear by itself.
Hi Michael,
Thanks for reviewing the patch. Sure, I will update it accordingly.
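For context, the callback with a corrected comment would look roughly
like this (an untested sketch; the final wording may differ in the next
revision):

	static int ppc_hv_24x7_cpu_online(unsigned int cpu)
	{
		/*
		 * Designate the first onlined cpu to collect counter
		 * data; later onlined cpus leave the cpumask untouched.
		 */
		if (cpumask_empty(&hv_24x7_cpumask))
			cpumask_set_cpu(cpu, &hv_24x7_cpumask);

		return 0;
	}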
>
>> + if (cpumask_empty(&hv_24x7_cpumask))
>> + cpumask_set_cpu(cpu, &hv_24x7_cpumask);
>> +
>> + return 0;
>> +}
>> +
>> +static int ppc_hv_24x7_cpu_offline(unsigned int cpu)
>> +{
>> + int target = -1;
>
> No need to initialise target, you assign to it unconditionally below.
Ok, will change.
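Something like this, with the redundant initialisation dropped
(untested sketch):

	static int ppc_hv_24x7_cpu_offline(unsigned int cpu)
	{
		int target;

		/* Check if the exiting cpu was collecting 24x7 events */
		if (!cpumask_test_and_clear_cpu(cpu, &hv_24x7_cpumask))
			return 0;

		/* Find a new cpu to collect 24x7 events */
		target = cpumask_last(cpu_active_mask);

		if (target < 0 || target >= nr_cpu_ids)
			return -1;

		/* Migrate 24x7 events to the new target */
		cpumask_set_cpu(target, &hv_24x7_cpumask);
		perf_pmu_migrate_context(&h_24x7_pmu, cpu, target);

		return 0;
	}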
>
>> + /* Check if exiting cpu is used for collecting 24x7 events */
>> + if (!cpumask_test_and_clear_cpu(cpu, &hv_24x7_cpumask))
>> + return 0;
>> +
>> + /* Find a new cpu to collect 24x7 events */
>> + target = cpumask_last(cpu_active_mask);
>
> Any reason to use cpumask_last() vs cpumask_first(), or a randomly
> chosen CPU?
In case we sequentially offline multiple cpus, taking cpumask_first()
may add some latency: with cpus going offline in ascending order, the
new target would likely be the next cpu to go offline, forcing a
migration on every offline.

To check, I ran a benchmark on a power9 LPAR with 16 cpus, off-lining
cpus 0-14.

With cpumask_last:
real 0m2.812s
user 0m0.002s
sys 0m0.003s

With cpumask_any:
real 0m3.690s
user 0m0.002s
sys 0m0.062s

That's why I went with cpumask_last. Please let me know if any changes
are required.
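For reference, the alternative I measured only changes the target
selection in ppc_hv_24x7_cpu_offline(); a sketch (assuming
cpumask_any(), which the kernel defines as cpumask_first(), i.e. the
lowest-numbered active cpu):

	/*
	 * Pick an arbitrary active cpu as the new target. When cpus are
	 * offlined in ascending order, this tends to pick the cpu that
	 * goes offline next, so events migrate on every offline, which
	 * shows up as the extra latency above.
	 */
	target = cpumask_any(cpu_active_mask);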
>
>> + if (target < 0 || target >= nr_cpu_ids)
>> + return -1;
>> +
>> + /* Migrate 24x7 events to the new target */
>> + cpumask_set_cpu(target, &hv_24x7_cpumask);
>> + perf_pmu_migrate_context(&h_24x7_pmu, cpu, target);
>> +
>> + return 0;
>> +}
>> +
>> +static int hv_24x7_cpu_hotplug_init(void)
>> +{
>> + return cpuhp_setup_state(CPUHP_AP_PERF_POWERPC_HV_24x7_ONLINE,
>> + "perf/powerpc/hv_24x7:online",
>> + ppc_hv_24x7_cpu_online,
>> + ppc_hv_24x7_cpu_offline);
>> +}
>> +
>> static int hv_24x7_init(void)
>> {
>> int r;
>> @@ -1685,6 +1725,11 @@ static int hv_24x7_init(void)
>> if (r)
>> return r;
>>
>> + /* init cpuhotplug */
>> + r = hv_24x7_cpu_hotplug_init();
>> + if (r)
>> + pr_err("hv_24x7: CPU hotplug init failed\n");
>> +
>
> The hotplug initialisation shouldn't fail unless something is badly
> wrong. I think you should just fail initialisation of the entire PMU if
> that happens, which will make the error handling in the next patch much
> simpler.
>
I will update it.
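That would look roughly like this in hv_24x7_init() (an untested
sketch, assuming we simply propagate the error, which should also
simplify the error handling in patch 2/2):

	/* Fail initialisation of the entire PMU if hotplug setup fails */
	r = hv_24x7_cpu_hotplug_init();
	if (r) {
		pr_err("hv_24x7: CPU hotplug init failed\n");
		return r;
	}

	r = perf_pmu_register(&h_24x7_pmu, h_24x7_pmu.name, -1);
	if (r)
		return r;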
Thanks,
Kajol Jain
> cheers
>
>> r = perf_pmu_register(&h_24x7_pmu, h_24x7_pmu.name, -1);
>> if (r)
>> return r;