* [PATCH v3] sched: Fix some spelling mistakes in the scheduler module
@ 2025-10-09 2:46 Jianyun Gao
2025-10-09 6:01 ` Madadi Vineeth Reddy
From: Jianyun Gao @ 2025-10-09 2:46 UTC
To: linux-kernel
Cc: jianyun.gao, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider
From: "jianyun.gao" <jianyungao89@gmail.com>
The following are some spelling mistakes found in the scheduler
module. Just fix them!
slection -> selection
achitectures -> architectures
excempt -> exempt
incorectly -> incorrectly
litle -> little
faireness -> fairness
condtion -> condition
Signed-off-by: jianyun.gao <jianyungao89@gmail.com>
---
v3:
Change "except" to "exempt" in v2.
The previous version is here:
https://lore.kernel.org/lkml/20250929061213.1659258-1-jianyungao89@gmail.com/
kernel/sched/core.c | 2 +-
kernel/sched/cputime.c | 2 +-
kernel/sched/fair.c | 8 ++++----
kernel/sched/wait_bit.c | 2 +-
4 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f1e5cb94c53..af5076e40567 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6858,7 +6858,7 @@ static void __sched notrace __schedule(int sched_mode)
/*
* We pass task_is_blocked() as the should_block arg
* in order to keep mutex-blocked tasks on the runqueue
- * for slection with proxy-exec (without proxy-exec
+ * for selection with proxy-exec (without proxy-exec
* task_is_blocked() will always be false).
*/
try_to_block_task(rq, prev, &prev_state,
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 7097de2c8cda..2429be5a5e40 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -585,7 +585,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
/*
* Because mul_u64_u64_div_u64() can approximate on some
- * achitectures; enforce the constraint that: a*b/(b+c) <= a.
+ * architectures; enforce the constraint that: a*b/(b+c) <= a.
*/
if (unlikely(stime > rtime))
stime = rtime;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18a30ae35441..b1c335719f49 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5381,7 +5381,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
bool delay = sleep;
/*
* DELAY_DEQUEUE relies on spurious wakeups, special task
- * states must not suffer spurious wakeups, excempt them.
+ * states must not suffer spurious wakeups, exempt them.
*/
if (flags & (DEQUEUE_SPECIAL | DEQUEUE_THROTTLE))
delay = false;
@@ -5842,7 +5842,7 @@ static bool enqueue_throttled_task(struct task_struct *p)
* target cfs_rq's limbo list.
*
* Do not do that when @p is current because the following race can
- * cause @p's group_node to be incorectly re-insterted in its rq's
+ * cause @p's group_node to be incorrectly re-insterted in its rq's
* cfs_tasks list, despite being throttled:
*
* cpuX cpuY
@@ -12161,7 +12161,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
* sched_balance_newidle() bumps the cost whenever newidle
* balance fails, and we don't want things to grow out of
* control. Use the sysctl_sched_migration_cost as the upper
- * limit, plus a litle extra to avoid off by ones.
+ * limit, plus a little extra to avoid off by ones.
*/
sd->max_newidle_lb_cost =
min(cost, sysctl_sched_migration_cost + 200);
@@ -13176,7 +13176,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
* If a task gets attached to this cfs_rq and before being queued,
* it gets migrated to another CPU due to reasons like affinity
* change, make sure this cfs_rq stays on leaf cfs_rq list to have
- * that removed load decayed or it can cause faireness problem.
+ * that removed load decayed or it can cause fairness problem.
*/
if (!cfs_rq_pelt_clock_throttled(cfs_rq))
list_add_leaf_cfs_rq(cfs_rq);
diff --git a/kernel/sched/wait_bit.c b/kernel/sched/wait_bit.c
index 1088d3b7012c..47ab3bcd2ebc 100644
--- a/kernel/sched/wait_bit.c
+++ b/kernel/sched/wait_bit.c
@@ -207,7 +207,7 @@ EXPORT_SYMBOL(init_wait_var_entry);
* given variable to change. wait_var_event() can be waiting for an
* arbitrary condition to be true and associates that condition with an
* address. Calling wake_up_var() suggests that the condition has been
- * made true, but does not strictly require the condtion to use the
+ * made true, but does not strictly require the condition to use the
* address given.
*
* The wake-up is sent to tasks in a waitqueue selected by hash from a
--
2.34.1
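An aside on the cputime.c hunk above: the comment it fixes guards a real invariant. Splitting rtime proportionally as stime * rtime / (stime + utime) can never mathematically exceed rtime, but mul_u64_u64_div_u64() may over-approximate on some architectures, hence the clamp that follows the comment. A rough Python model of that clamp (scale_stime() and the plain integer arithmetic are illustrative only; the kernel does this with u64 arithmetic inside cputime_adjust()):

```python
def scale_stime(stime: int, utime: int, rtime: int) -> int:
    """Split rtime in proportion to stime vs. utime, then clamp.

    Illustrative model of the logic around the fixed comment; the
    kernel computes the product/quotient via mul_u64_u64_div_u64().
    """
    # Proportional share: a*b/(b+c) with a=rtime, b=stime, c=utime.
    scaled = stime * rtime // (stime + utime)
    # Exact division guarantees scaled <= rtime; an approximate divide
    # might not, so enforce the constraint explicitly, mirroring the
    # "if (unlikely(stime > rtime)) stime = rtime;" check.
    return min(scaled, rtime)
```

With Python's exact floor division the min() never changes the result; it models the defensive check, which only matters when a platform's divide overshoots.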
* Re: [PATCH v3] sched: Fix some spelling mistakes in the scheduler module
2025-10-09 2:46 [PATCH v3] sched: Fix some spelling mistakes in the scheduler module Jianyun Gao
@ 2025-10-09 6:01 ` Madadi Vineeth Reddy
2025-10-09 6:55 ` Jianyun Gao
From: Madadi Vineeth Reddy @ 2025-10-09 6:01 UTC
To: Jianyun Gao
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, linux-kernel, Madadi Vineeth Reddy
Hi Jianyun,
On 09/10/25 08:16, Jianyun Gao wrote:
> From: "jianyun.gao" <jianyungao89@gmail.com>
>
> The following are some spelling mistakes found in the scheduler
> module. Just fix them!
>
> slection -> selection
> achitectures -> architectures
> excempt -> exempt
> incorectly -> incorrectly
> litle -> little
> faireness -> fairness
> condtion -> condition
>
> Signed-off-by: jianyun.gao <jianyungao89@gmail.com>
> ---
> v3:
> Change "except" to "exempt" in v2.
It should be "excempt" to "exempt"
> The previous version is here:
>
> https://lore.kernel.org/lkml/20250929061213.1659258-1-jianyungao89@gmail.com/
>
> kernel/sched/core.c | 2 +-
> kernel/sched/cputime.c | 2 +-
> kernel/sched/fair.c | 8 ++++----
> kernel/sched/wait_bit.c | 2 +-
> 4 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 7f1e5cb94c53..af5076e40567 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6858,7 +6858,7 @@ static void __sched notrace __schedule(int sched_mode)
> /*
> * We pass task_is_blocked() as the should_block arg
> * in order to keep mutex-blocked tasks on the runqueue
> - * for slection with proxy-exec (without proxy-exec
> + * for selection with proxy-exec (without proxy-exec
> * task_is_blocked() will always be false).
> */
> try_to_block_task(rq, prev, &prev_state,
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index 7097de2c8cda..2429be5a5e40 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -585,7 +585,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
> stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
> /*
> * Because mul_u64_u64_div_u64() can approximate on some
> - * achitectures; enforce the constraint that: a*b/(b+c) <= a.
> + * architectures; enforce the constraint that: a*b/(b+c) <= a.
> */
> if (unlikely(stime > rtime))
> stime = rtime;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 18a30ae35441..b1c335719f49 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5381,7 +5381,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> bool delay = sleep;
> /*
> * DELAY_DEQUEUE relies on spurious wakeups, special task
> - * states must not suffer spurious wakeups, excempt them.
> + * states must not suffer spurious wakeups, exempt them.
> */
> if (flags & (DEQUEUE_SPECIAL | DEQUEUE_THROTTLE))
> delay = false;
> @@ -5842,7 +5842,7 @@ static bool enqueue_throttled_task(struct task_struct *p)
> * target cfs_rq's limbo list.
> *
> * Do not do that when @p is current because the following race can
> - * cause @p's group_node to be incorectly re-insterted in its rq's
> + * cause @p's group_node to be incorrectly re-insterted in its rq's
s/re-insterted/re-inserted/
Thanks,
Madadi Vineeth Reddy
> * cfs_tasks list, despite being throttled:
> *
> * cpuX cpuY
> @@ -12161,7 +12161,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
> * sched_balance_newidle() bumps the cost whenever newidle
> * balance fails, and we don't want things to grow out of
> * control. Use the sysctl_sched_migration_cost as the upper
> - * limit, plus a litle extra to avoid off by ones.
> + * limit, plus a little extra to avoid off by ones.
> */
> sd->max_newidle_lb_cost =
> min(cost, sysctl_sched_migration_cost + 200);
> @@ -13176,7 +13176,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
> * If a task gets attached to this cfs_rq and before being queued,
> * it gets migrated to another CPU due to reasons like affinity
> * change, make sure this cfs_rq stays on leaf cfs_rq list to have
> - * that removed load decayed or it can cause faireness problem.
> + * that removed load decayed or it can cause fairness problem.
> */
> if (!cfs_rq_pelt_clock_throttled(cfs_rq))
> list_add_leaf_cfs_rq(cfs_rq);
> diff --git a/kernel/sched/wait_bit.c b/kernel/sched/wait_bit.c
> index 1088d3b7012c..47ab3bcd2ebc 100644
> --- a/kernel/sched/wait_bit.c
> +++ b/kernel/sched/wait_bit.c
> @@ -207,7 +207,7 @@ EXPORT_SYMBOL(init_wait_var_entry);
> * given variable to change. wait_var_event() can be waiting for an
> * arbitrary condition to be true and associates that condition with an
> * address. Calling wake_up_var() suggests that the condition has been
> - * made true, but does not strictly require the condtion to use the
> + * made true, but does not strictly require the condition to use the
> * address given.
> *
> * The wake-up is sent to tasks in a waitqueue selected by hash from a
* Re: [PATCH v3] sched: Fix some spelling mistakes in the scheduler module
2025-10-09 6:01 ` Madadi Vineeth Reddy
@ 2025-10-09 6:55 ` Jianyun Gao
From: Jianyun Gao @ 2025-10-09 6:55 UTC
To: Madadi Vineeth Reddy
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, linux-kernel
Hi Madadi,
Thank you for your review. I will fix it in the next patch.
On Thu, Oct 9, 2025 at 2:01 PM Madadi Vineeth Reddy
<vineethr@linux.ibm.com> wrote:
>
> Hi Jianyun,
>
> On 09/10/25 08:16, Jianyun Gao wrote:
> > From: "jianyun.gao" <jianyungao89@gmail.com>
[...]
> > v3:
> > Change "except" to "exempt" in v2.
>
> It should be "excempt" to "exempt"
>
[...]
> > - * cause @p's group_node to be incorectly re-insterted in its rq's
> > + * cause @p's group_node to be incorrectly re-insterted in its rq's
>
> s/re-insterted/re-inserted/
>
> Thanks,
> Madadi Vineeth Reddy