* Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
[not found] ` <20200401114215.36640-1-cj.chengjian@huawei.com>
@ 2020-04-01 13:23 ` Valentin Schneider
2020-04-06 8:00 ` chengjian (D)
2020-04-09 9:59 ` Sudeep Holla
From: Valentin Schneider @ 2020-04-01 13:23 UTC (permalink / raw)
To: Cheng Jian
Cc: joelaf, peterz, fweisbec, huawei.libin, joel, mingo, aubrey.li,
naravamudan, aaron.lwe, jdesfossez, w.f, pawan.kumar.gupta, pjt,
kerrnel, keescook, xiexiuqi, vpillai, tglx, linux-arm-kernel,
pauld, mgorman, torvalds, linux-kernel, aubrey.intel,
Sudeep Holla, pbonzini, tim.c.chen
(+LAKML, +Sudeep)
On Wed, Apr 01 2020, Cheng Jian wrote:
> When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
> the SMT mask to initialize rq->core, but thread_sibling is only ready
> for use after store_cpu_topology().
>
> notify_cpu_starting()
> -> sched_cpu_starting() # use thread_sibling
>
> store_cpu_topology(cpu)
> -> update_siblings_masks # set thread_sibling
>
> Fix this by calling notify_cpu_starting() later, just as x86 does.
>
I haven't been following the sched core stuff closely; can't this
rq->core assignment be done in sched_cpu_activate() instead? We already
look at the cpu_smt_mask() in there, and it is valid (we go through the
entirety of secondary_start_kernel() before getting anywhere near
CPUHP_AP_ACTIVE).
I don't think this breaks anything, but without this dependency in
sched_cpu_starting() then there isn't really a reason for this move.
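For reference, the existing user I have in mind is the sched_smt_present
accounting in sched_cpu_activate() (kernel/sched/core.c, trimmed excerpt
from memory, so take it as approximate):

  int sched_cpu_activate(unsigned int cpu)
  {
          struct rq *rq = cpu_rq(cpu);

  #ifdef CONFIG_SCHED_SMT
          /*
           * When going up, increment the number of cores with SMT present.
           */
          if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
                  static_branch_inc_cpuslocked(&sched_smt_present);
  #endif
          set_cpu_active(cpu, true);
          ...
  }

By the time this runs (CPUHP_AP_ACTIVE), store_cpu_topology() has long
since filled in the sibling masks, so cpu_smt_mask() is safe to use there.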
> Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
> ---
> arch/arm64/kernel/smp.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index 5407bf5d98ac..a427c14e82af 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -236,13 +236,18 @@ asmlinkage notrace void secondary_start_kernel(void)
> cpuinfo_store_cpu();
>
> /*
> - * Enable GIC and timers.
> + * Store cpu topology before notify_cpu_starting():
> + * CPUHP_AP_SCHED_STARTING requires the SMT topology
> + * to be initialized for SCHED_CORE.
> */
> - notify_cpu_starting(cpu);
> -
> store_cpu_topology(cpu);
> numa_add_cpu(cpu);
>
> + /*
> + * Enable GIC and timers.
> + */
> + notify_cpu_starting(cpu);
> +
> /*
> * OK, now it's safe to let the boot CPU continue. Wait for
> * the CPU migration code to notice that the CPU is online
* Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
2020-04-01 13:23 ` [PATCH] sched/arm64: store cpu topology before notify_cpu_starting Valentin Schneider
@ 2020-04-06 8:00 ` chengjian (D)
2020-04-09 9:59 ` Sudeep Holla
From: chengjian (D) @ 2020-04-06 8:00 UTC (permalink / raw)
To: Valentin Schneider
Cc: joelaf, chengjian (D), peterz, fweisbec, huawei.libin, joel,
mingo, aubrey.li, naravamudan, aaron.lwe, jdesfossez, w.f,
pawan.kumar.gupta, pjt, kerrnel, keescook, xiexiuqi, vpillai,
tglx, linux-arm-kernel, pauld, mgorman, torvalds, linux-kernel,
aubrey.intel, Sudeep Holla, pbonzini, tim.c.chen
On 2020/4/1 21:23, Valentin Schneider wrote:
> (+LAKML, +Sudeep)
>
> On Wed, Apr 01 2020, Cheng Jian wrote:
>> When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
>> the SMT mask to initialize rq->core, but thread_sibling is only ready
>> for use after store_cpu_topology().
>>
>> notify_cpu_starting()
>> -> sched_cpu_starting() # use thread_sibling
>>
>> store_cpu_topology(cpu)
>> -> update_siblings_masks # set thread_sibling
>>
>> Fix this by calling notify_cpu_starting() later, just as x86 does.
>>
> I haven't been following the sched core stuff closely; can't this
> rq->core assignment be done in sched_cpu_activate() instead? We already
> look at the cpu_smt_mask() in there, and it is valid (we go through the
> entirety of secondary_start_kernel() before getting anywhere near
> CPUHP_AP_ACTIVE).
>
> I don't think this breaks anything, but without this dependency in
> sched_cpu_starting() then there isn't really a reason for this move.
Yes, it is correct to put the rq->core assignment in sched_cpu_activate();
the cpu_smt_mask() is already valid there.
I have made such an attempt on my own branch and it passed my tests.
Thank you.
-- Cheng Jian
* Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
2020-04-01 13:23 ` [PATCH] sched/arm64: store cpu topology before notify_cpu_starting Valentin Schneider
2020-04-06 8:00 ` chengjian (D)
@ 2020-04-09 9:59 ` Sudeep Holla
2020-04-09 10:32 ` Valentin Schneider
From: Sudeep Holla @ 2020-04-09 9:59 UTC (permalink / raw)
To: Valentin Schneider
Cc: joelaf, Cheng Jian, peterz, fweisbec, huawei.libin, joel, mingo,
aubrey.li, naravamudan, aaron.lwe, torvalds, jdesfossez, w.f,
pawan.kumar.gupta, pjt, kerrnel, keescook, xiexiuqi, vpillai,
tglx, linux-arm-kernel, pauld, mgorman, linux-kernel,
aubrey.intel, Sudeep Holla, pbonzini, tim.c.chen
On Wed, Apr 01, 2020 at 02:23:33PM +0100, Valentin Schneider wrote:
>
> (+LAKML, +Sudeep)
>
Thanks Valentin.
> On Wed, Apr 01 2020, Cheng Jian wrote:
> > When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
> > the SMT mask to initialize rq->core, but thread_sibling is only ready
> > for use after store_cpu_topology().
> >
> > notify_cpu_starting()
> > -> sched_cpu_starting() # use thread_sibling
> >
> > store_cpu_topology(cpu)
> > -> update_siblings_masks # set thread_sibling
> >
> > Fix this by calling notify_cpu_starting() later, just as x86 does.
> >
>
> I haven't been following the sched core stuff closely; can't this
> rq->core assignment be done in sched_cpu_activate() instead? We already
> look at the cpu_smt_mask() in there, and it is valid (we go through the
> entirety of secondary_start_kernel() before getting anywhere near
> CPUHP_AP_ACTIVE).
>
I too came to the same conclusion. Did you see any issues? Or is it
just code inspection for parity with x86?
> I don't think this breaks anything, but without this dependency in
> sched_cpu_starting() then there isn't really a reason for this move.
>
Based on the commit message, I had a quick look at the x86 code and I agree
this shouldn't break anything. However, the commit message doesn't make
complete sense to me, especially the reference to sched_cpu_starting()
while the SMT masks are accessed in sched_cpu_activate(). Or am I
missing something here?
--
Regards,
Sudeep
* Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
2020-04-09 9:59 ` Sudeep Holla
@ 2020-04-09 10:32 ` Valentin Schneider
2020-04-09 11:08 ` Sudeep Holla
From: Valentin Schneider @ 2020-04-09 10:32 UTC (permalink / raw)
To: Sudeep Holla
Cc: joelaf, Cheng Jian, peterz, fweisbec, huawei.libin, joel, mingo,
aubrey.li, naravamudan, aaron.lwe, torvalds, jdesfossez, w.f,
pawan.kumar.gupta, pjt, kerrnel, keescook, xiexiuqi, vpillai,
tglx, linux-arm-kernel, pauld, mgorman, linux-kernel,
aubrey.intel, pbonzini, tim.c.chen
On 09/04/20 10:59, Sudeep Holla wrote:
> On Wed, Apr 01, 2020 at 02:23:33PM +0100, Valentin Schneider wrote:
>>
>> (+LAKML, +Sudeep)
>>
>
> Thanks Valentin.
>
>> On Wed, Apr 01 2020, Cheng Jian wrote:
>> > When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
>> > the SMT mask to initialize rq->core, but thread_sibling is only ready
>> > for use after store_cpu_topology().
>> >
>> > notify_cpu_starting()
>> > -> sched_cpu_starting() # use thread_sibling
>> >
>> > store_cpu_topology(cpu)
>> > -> update_siblings_masks # set thread_sibling
>> >
>> > Fix this by calling notify_cpu_starting() later, just as x86 does.
>> >
>>
>> I haven't been following the sched core stuff closely; can't this
>> rq->core assignment be done in sched_cpu_activate() instead? We already
>> look at the cpu_smt_mask() in there, and it is valid (we go through the
>> entirety of secondary_start_kernel() before getting anywhere near
>> CPUHP_AP_ACTIVE).
>>
>
> I too came to the same conclusion. Did you see any issues? Or is it
> just code inspection for parity with x86?
>
With mainline this isn't a problem; with the core scheduling stuff there is
an expectation that we can use the SMT masks in sched_cpu_starting().
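Roughly, the series ends up with something like this in
sched_cpu_starting() (a paraphrased sketch of the SCHED_CORE hunk, not the
exact code from the posting):

  #ifdef CONFIG_SCHED_CORE
          const struct cpumask *smt_mask = cpu_smt_mask(cpu);
          struct rq *rq, *core_rq = NULL;
          int i;

          /* Reuse a sibling's already-designated core runqueue, if any. */
          for_each_cpu(i, smt_mask) {
                  rq = cpu_rq(i);
                  if (rq->core && rq->core == rq)
                          core_rq = rq;
          }

          if (!core_rq)
                  core_rq = cpu_rq(cpu);

          /* Point every sibling, ourselves included, at that core rq. */
          for_each_cpu(i, smt_mask) {
                  rq = cpu_rq(i);
                  rq->core = core_rq;
          }
  #endif

which only does something sensible if thread_sibling has been stored
before notify_cpu_starting() gets to sched_cpu_starting() - hence the
patch.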
>> I don't think this breaks anything, but without this dependency in
>> sched_cpu_starting() then there isn't really a reason for this move.
>>
>
> Based on the commit message, I had a quick look at the x86 code and I agree
> this shouldn't break anything. However, the commit message doesn't make
> complete sense to me, especially the reference to sched_cpu_starting()
> while the SMT masks are accessed in sched_cpu_activate(). Or am I
> missing something here?
As stated above, it's not a problem for mainline, and AIUI we can change
the core scheduling bits to only use the SMT mask in sched_cpu_activate()
instead, therefore not requiring any change in the arch code.
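Concretely, something like the below (untested sketch; sched_core_rq_init()
is a made-up placeholder for wherever the rq->core sibling walk from the
series ends up living):

  int sched_cpu_activate(unsigned int cpu)
  {
          ...
  #ifdef CONFIG_SCHED_CORE
          /*
           * Same walk over cpu_smt_mask() as the series does in
           * sched_cpu_starting(), just one hotplug state later, where the
           * sibling masks are guaranteed to be populated on every arch.
           */
          sched_core_rq_init(cpu);        /* placeholder name */
  #endif
          ...
  }

A symmetric teardown, if one is needed, could then sit in
sched_cpu_deactivate() next to the existing sched_smt_present handling.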
I'm not aware of any written rule that the topology masks should be usable
from a given hotplug state upwards, only that right now we need them in
sched_cpu_(de)activate() for SMT scheduling - and that is already working
fine.
So really this should be considered a simple, neutral cleanup; I don't
really have any opinion on picking it up or not.
* Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
2020-04-09 10:32 ` Valentin Schneider
@ 2020-04-09 11:08 ` Sudeep Holla
From: Sudeep Holla @ 2020-04-09 11:08 UTC (permalink / raw)
To: Valentin Schneider
Cc: joelaf, Cheng Jian, peterz, fweisbec, huawei.libin, joel, mingo,
aubrey.li, naravamudan, aaron.lwe, torvalds, jdesfossez, w.f,
pawan.kumar.gupta, pjt, kerrnel, keescook, xiexiuqi, vpillai,
tglx, linux-arm-kernel, pauld, mgorman, linux-kernel,
aubrey.intel, Sudeep Holla, pbonzini, tim.c.chen
On Thu, Apr 09, 2020 at 11:32:12AM +0100, Valentin Schneider wrote:
>
> On 09/04/20 10:59, Sudeep Holla wrote:
> > On Wed, Apr 01, 2020 at 02:23:33PM +0100, Valentin Schneider wrote:
> >>
> >> (+LAKML, +Sudeep)
> >>
> >
> > Thanks Valentin.
> >
> >> On Wed, Apr 01 2020, Cheng Jian wrote:
> >> > When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
> >> > the SMT mask to initialize rq->core, but thread_sibling is only ready
> >> > for use after store_cpu_topology().
> >> >
> >> > notify_cpu_starting()
> >> > -> sched_cpu_starting() # use thread_sibling
> >> >
> >> > store_cpu_topology(cpu)
> >> > -> update_siblings_masks # set thread_sibling
> >> >
> >> > Fix this by calling notify_cpu_starting() later, just as x86 does.
> >> >
> >>
> >> I haven't been following the sched core stuff closely; can't this
> >> rq->core assignment be done in sched_cpu_activate() instead? We already
> >> look at the cpu_smt_mask() in there, and it is valid (we go through the
> >> entirety of secondary_start_kernel() before getting anywhere near
> >> CPUHP_AP_ACTIVE).
> >>
> >
> > I too came to the same conclusion. Did you see any issues? Or is it
> > just code inspection for parity with x86?
> >
>
> With mainline this isn't a problem; with the core scheduling stuff there is
> an expectation that we can use the SMT masks in sched_cpu_starting().
>
Ah, OK. I'd prefer this to be spelled out in the commit message, as it is
not obvious.
> >> I don't think this breaks anything, but without this dependency in
> >> sched_cpu_starting() then there isn't really a reason for this move.
> >>
> >
> > Based on the commit message, I had a quick look at the x86 code and I agree
> > this shouldn't break anything. However, the commit message doesn't make
> > complete sense to me, especially the reference to sched_cpu_starting()
> > while the SMT masks are accessed in sched_cpu_activate(). Or am I
> > missing something here?
>
> As stated above, it's not a problem for mainline, and AIUI we can change
> the core scheduling bits to only use the SMT mask in sched_cpu_activate()
> instead, therefore not requiring any change in the arch code.
>
Either way is fine. If the expectation is already that the SMT masks need
to be set up before sched_cpu_starting(), then let us just stick with that.
> I'm not aware of any written rule that the topology masks should be usable
> from a given hotplug state upwards, only that right now we need them in
> sched_cpu_(de)activate() for SMT scheduling - and that is already working
> fine.
>
Sure, we can at least document this as part of the change, even if only
for arm64, so that nobody needs to wonder about the same thing in future.
> So really this should be considered a simple, neutral cleanup; I don't
> really have any opinion on picking it up or not.
I am fine with the change too; it just needs some tweaking of the commit
message.
--
Regards,
Sudeep