* [PATCH 0/3] sched: code refine/clean for cfs-bandwidth
@ 2013-06-04 6:22 Michael Wang
2013-06-04 6:23 ` [PATCH 1/3] sched: don't repeat the initialization in sched_init() Michael Wang
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Michael Wang @ 2013-06-04 6:22 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar
Code refine and clean patch set.
Michael Wang (3):
[PATCH 1/3] sched: don't repeat the initialization in sched_init()
[PATCH 2/3] sched: code refine in unthrottle_cfs_rq()
[PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c
---
b/kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
b/kernel/sched/fair.c | 2 +-
kernel/sched/fair.c | 4 ----
3 files changed, 26 insertions(+), 26 deletions(-)
^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 1/3] sched: don't repeat the initialization in sched_init()
From: Michael Wang @ 2013-06-04 6:23 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar

In sched_init(), there is no need to initialize 'root_task_group.shares' and
'root_task_group.cfs_bandwidth' repeatedly.

CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
 1 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..c0c3716 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6955,6 +6955,31 @@ void __init sched_init(void)

 #endif /* CONFIG_CGROUP_SCHED */

+#ifdef CONFIG_FAIR_GROUP_SCHED
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+
+	/*
+	 * How much cpu bandwidth does root_task_group get?
+	 *
+	 * In case of task-groups formed thr' the cgroup filesystem, it
+	 * gets 100% of the cpu resources in the system. This overall
+	 * system cpu resource is divided among the tasks of
+	 * root_task_group and its child task-groups in a fair manner,
+	 * based on each entity's (task or task-group's) weight
+	 * (se->load.weight).
+	 *
+	 * In other words, if root_task_group has 10 tasks of weight
+	 * 1024) and two child groups A0 and A1 (of weight 1024 each),
+	 * then A0's share of the cpu resource is:
+	 *
+	 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
+	 *
+	 * We achieve this by letting root_task_group's tasks sit
+	 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
+	 */
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
+#endif
+
 	for_each_possible_cpu(i) {
 		struct rq *rq;

@@ -6966,28 +6991,7 @@ void __init sched_init(void)
 		init_cfs_rq(&rq->cfs);
 		init_rt_rq(&rq->rt, rq);
 #ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
 		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
-		/*
-		 * How much cpu bandwidth does root_task_group get?
-		 *
-		 * In case of task-groups formed thr' the cgroup filesystem, it
-		 * gets 100% of the cpu resources in the system. This overall
-		 * system cpu resource is divided among the tasks of
-		 * root_task_group and its child task-groups in a fair manner,
-		 * based on each entity's (task or task-group's) weight
-		 * (se->load.weight).
-		 *
-		 * In other words, if root_task_group has 10 tasks of weight
-		 * 1024) and two child groups A0 and A1 (of weight 1024 each),
-		 * then A0's share of the cpu resource is:
-		 *
-		 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
-		 *
-		 * We achieve this by letting root_task_group's tasks sit
-		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
-		 */
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
 		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */

--
1.7.4.1
* Re: [PATCH 1/3] sched: don't repeat the initialization in sched_init()
From: Paul Turner @ 2013-06-04 6:52 UTC (permalink / raw)
To: Michael Wang; +Cc: LKML, Peter Zijlstra, Ingo Molnar

On Mon, Jun 3, 2013 at 11:23 PM, Michael Wang
<wangyun@linux.vnet.ibm.com> wrote:
> In sched_init(), there is no need to initialize 'root_task_group.shares' and
> 'root_task_group.cfs_bandwidth' repeatedly.
>
> CC: Ingo Molnar <mingo@kernel.org>
> CC: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
> ---
>  kernel/sched/core.c | 46 +++++++++++++++++++++++++---------------------
>  1 files changed, 25 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..c0c3716 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6955,6 +6955,31 @@ void __init sched_init(void)
>
>  #endif /* CONFIG_CGROUP_SCHED */
>
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> +	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> +
> +	/*
> +	 * How much cpu bandwidth does root_task_group get?
> +	 *
> +	 * In case of task-groups formed thr' the cgroup filesystem, it
> +	 * gets 100% of the cpu resources in the system. This overall
> +	 * system cpu resource is divided among the tasks of
> +	 * root_task_group and its child task-groups in a fair manner,
> +	 * based on each entity's (task or task-group's) weight
> +	 * (se->load.weight).
> +	 *
> +	 * In other words, if root_task_group has 10 tasks of weight
> +	 * 1024) and two child groups A0 and A1 (of weight 1024 each),
> +	 * then A0's share of the cpu resource is:
> +	 *
> +	 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> +	 *
> +	 * We achieve this by letting root_task_group's tasks sit
> +	 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> +	 */

This comment has become unglued from what it's supposed to be attached
to (it's tied to root_task_group.shares & init_tg_cfs_entry, not
init_cfs_bandwidth).

> +	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> +#endif
> +
>  	for_each_possible_cpu(i) {
>  		struct rq *rq;
>
> @@ -6966,28 +6991,7 @@ void __init sched_init(void)
>  		init_cfs_rq(&rq->cfs);
>  		init_rt_rq(&rq->rt, rq);
>  #ifdef CONFIG_FAIR_GROUP_SCHED
> -		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
> -		/*
> -		 * How much cpu bandwidth does root_task_group get?
> -		 *
> -		 * In case of task-groups formed thr' the cgroup filesystem, it
> -		 * gets 100% of the cpu resources in the system. This overall
> -		 * system cpu resource is divided among the tasks of
> -		 * root_task_group and its child task-groups in a fair manner,
> -		 * based on each entity's (task or task-group's) weight
> -		 * (se->load.weight).
> -		 *
> -		 * In other words, if root_task_group has 10 tasks of weight
> -		 * 1024) and two child groups A0 and A1 (of weight 1024 each),
> -		 * then A0's share of the cpu resource is:
> -		 *
> -		 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
> -		 *
> -		 * We achieve this by letting root_task_group's tasks sit
> -		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
> -		 */
> -		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>  		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>  #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> --
> 1.7.4.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* Re: [PATCH 1/3] sched: don't repeat the initialization in sched_init()
From: Michael Wang @ 2013-06-04 7:23 UTC (permalink / raw)
To: Paul Turner; +Cc: LKML, Peter Zijlstra, Ingo Molnar

Hi, Paul

On 06/04/2013 02:52 PM, Paul Turner wrote:
> On Mon, Jun 3, 2013 at 11:23 PM, Michael Wang
[snip]
>
> This comment has become unglued from what it's supposed to be attached
> to (it's tied to root_task_group.shares & init_tg_cfs_entry, not
> init_cfs_bandwidth).

Thanks for your review and the reminder :)

What about putting the comment with init_tg_cfs_entry()?

'root_task_group.shares' may not need to be covered by the comment;
after all, it won't have any peers to flaunt its share...

Regards,
Michael Wang

>
>> +	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> +#endif
>> +
>>  	for_each_possible_cpu(i) {
>>  		struct rq *rq;
>>
>> @@ -6966,28 +6991,7 @@ void __init sched_init(void)
>>  		init_cfs_rq(&rq->cfs);
>>  		init_rt_rq(&rq->rt, rq);
>>  #ifdef CONFIG_FAIR_GROUP_SCHED
>> -		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>>  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>> -		/*
>> -		 * How much cpu bandwidth does root_task_group get?
>> -		 *
>> -		 * In case of task-groups formed thr' the cgroup filesystem, it
>> -		 * gets 100% of the cpu resources in the system. This overall
>> -		 * system cpu resource is divided among the tasks of
>> -		 * root_task_group and its child task-groups in a fair manner,
>> -		 * based on each entity's (task or task-group's) weight
>> -		 * (se->load.weight).
>> -		 *
>> -		 * In other words, if root_task_group has 10 tasks of weight
>> -		 * 1024) and two child groups A0 and A1 (of weight 1024 each),
>> -		 * then A0's share of the cpu resource is:
>> -		 *
>> -		 *	A0's bandwidth = 1024 / (10*1024 + 1024 + 1024) = 8.33%
>> -		 *
>> -		 * We achieve this by letting root_task_group's tasks sit
>> -		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
>> -		 */
>> -		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>>  		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>>  #endif /* CONFIG_FAIR_GROUP_SCHED */
>>
>> --
>> 1.7.4.1
* [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()
From: Michael Wang @ 2013-06-05 2:24 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar, Paul Turner

v2:
  Move comments back before init_tg_cfs_entry(). (Thanks for the notify from pjt)

In sched_init(), there is no need to initialize 'root_task_group.shares' and
'root_task_group.cfs_bandwidth' repeatedly.

CC: Paul Turner <pjt@google.com>
CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/core.c | 7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58453b8..96f69da 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6955,6 +6955,11 @@ void __init sched_init(void)

 #endif /* CONFIG_CGROUP_SCHED */

+#ifdef CONFIG_FAIR_GROUP_SCHED
+	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
+	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
+#endif
+
 	for_each_possible_cpu(i) {
 		struct rq *rq;

@@ -6966,7 +6971,6 @@ void __init sched_init(void)
 		init_cfs_rq(&rq->cfs);
 		init_rt_rq(&rq->rt, rq);
 #ifdef CONFIG_FAIR_GROUP_SCHED
-		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
 		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
 		/*
 		 * How much cpu bandwidth does root_task_group get?
@@ -6987,7 +6991,6 @@ void __init sched_init(void)
 		 * We achieve this by letting root_task_group's tasks sit
 		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
 		 */
-		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
 		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
 #endif /* CONFIG_FAIR_GROUP_SCHED */

--
1.7.4.1
* Re: [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()
From: Peter Zijlstra @ 2013-06-05 11:06 UTC (permalink / raw)
To: Michael Wang; +Cc: LKML, Ingo Molnar, Paul Turner

On Wed, Jun 05, 2013 at 10:24:18AM +0800, Michael Wang wrote:
> v2:
> Move comments back before init_tg_cfs_entry(). (Thanks for the notify from pjt)
>
> In sched_init(), there is no need to initialize 'root_task_group.shares' and
> 'root_task_group.cfs_bandwidth' repeatedly.
>
> CC: Paul Turner <pjt@google.com>
> CC: Ingo Molnar <mingo@kernel.org>
> CC: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
> ---
>  kernel/sched/core.c | 7 +++++--
>  1 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 58453b8..96f69da 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6955,6 +6955,11 @@ void __init sched_init(void)
>
>  #endif /* CONFIG_CGROUP_SCHED */
>
> +#ifdef CONFIG_FAIR_GROUP_SCHED
> +	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
> +	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
> +#endif
> +
>  	for_each_possible_cpu(i) {
>  		struct rq *rq;
>
> @@ -6966,7 +6971,6 @@ void __init sched_init(void)
>  		init_cfs_rq(&rq->cfs);
>  		init_rt_rq(&rq->rt, rq);
>  #ifdef CONFIG_FAIR_GROUP_SCHED
> -		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>  		/*
>  		 * How much cpu bandwidth does root_task_group get?
> @@ -6987,7 +6991,6 @@ void __init sched_init(void)
>  		 * We achieve this by letting root_task_group's tasks sit
>  		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
>  		 */
> -		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>  		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>  #endif /* CONFIG_FAIR_GROUP_SCHED */

I would actually like a patch reducing the #ifdef forest there, not
adding to it.

There's no actual harm in doing the initialization multiple times,
right?
* Re: [PATCH v2 1/3] sched: don't repeat the initialization in sched_init()
From: Michael Wang @ 2013-06-06 2:19 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: LKML, Ingo Molnar, Paul Turner

On 06/05/2013 07:06 PM, Peter Zijlstra wrote:
> On Wed, Jun 05, 2013 at 10:24:18AM +0800, Michael Wang wrote:
>> v2:
>> Move comments back before init_tg_cfs_entry(). (Thanks for the notify from pjt)
>>
>> In sched_init(), there is no need to initialize 'root_task_group.shares' and
>> 'root_task_group.cfs_bandwidth' repeatedly.
>>
>> CC: Paul Turner <pjt@google.com>
>> CC: Ingo Molnar <mingo@kernel.org>
>> CC: Peter Zijlstra <peterz@infradead.org>
>> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
>> ---
>>  kernel/sched/core.c | 7 +++++--
>>  1 files changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 58453b8..96f69da 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -6955,6 +6955,11 @@ void __init sched_init(void)
>>
>>  #endif /* CONFIG_CGROUP_SCHED */
>>
>> +#ifdef CONFIG_FAIR_GROUP_SCHED
>> +	root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>> +	init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>> +#endif
>> +
>>  	for_each_possible_cpu(i) {
>>  		struct rq *rq;
>>
>> @@ -6966,7 +6971,6 @@ void __init sched_init(void)
>>  		init_cfs_rq(&rq->cfs);
>>  		init_rt_rq(&rq->rt, rq);
>>  #ifdef CONFIG_FAIR_GROUP_SCHED
>> -		root_task_group.shares = ROOT_TASK_GROUP_LOAD;
>>  		INIT_LIST_HEAD(&rq->leaf_cfs_rq_list);
>>  		/*
>>  		 * How much cpu bandwidth does root_task_group get?
>> @@ -6987,7 +6991,6 @@ void __init sched_init(void)
>>  		 * We achieve this by letting root_task_group's tasks sit
>>  		 * directly in rq->cfs (i.e root_task_group->se[] = NULL).
>>  		 */
>> -		init_cfs_bandwidth(&root_task_group.cfs_bandwidth);
>>  		init_tg_cfs_entry(&root_task_group, &rq->cfs, NULL, i, NULL);
>>  #endif /* CONFIG_FAIR_GROUP_SCHED */
>
> I would actually like a patch reducing the #ifdef forest there, not
> adding to it.

I see :)

>
> There's no actual harm in doing the initialization multiple times,
> right?

Yeah, it's safe to redo the init; it costs some cycles but is not so
expensive.

Regards,
Michael Wang
* [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()
From: Michael Wang @ 2013-06-04 6:23 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar

Directly use rq to save some code.

CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..1e10911 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	int enqueue = 1;
 	long task_delta;

-	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+	se = cfs_rq->tg->se[cpu_of(rq)];

 	cfs_rq->throttled = 0;
 	raw_spin_lock(&cfs_b->lock);
--
1.7.4.1
* Re: [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()
From: Peter Zijlstra @ 2013-06-05 11:15 UTC (permalink / raw)
To: Michael Wang; +Cc: LKML, Ingo Molnar

On Tue, Jun 04, 2013 at 02:23:39PM +0800, Michael Wang wrote:
> Directly use rq to save some code.
>
> CC: Ingo Molnar <mingo@kernel.org>
> CC: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>

Please send patches against tip/master; the below didn't apply cleanly.
It was a trivial conflict so I applied force and made it fit.

Thanks!

> ---
> kernel/sched/fair.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c61a614..1e10911 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
> 	int enqueue = 1;
> 	long task_delta;
>
> -	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
> +	se = cfs_rq->tg->se[cpu_of(rq)];
>
> 	cfs_rq->throttled = 0;
> 	raw_spin_lock(&cfs_b->lock);
> --
> 1.7.4.1
* Re: [PATCH 2/3] sched: code refine in unthrottle_cfs_rq()
From: Michael Wang @ 2013-06-06 2:22 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: LKML, Ingo Molnar

On 06/05/2013 07:15 PM, Peter Zijlstra wrote:
> On Tue, Jun 04, 2013 at 02:23:39PM +0800, Michael Wang wrote:
>> Directly use rq to save some code.
>>
>> CC: Ingo Molnar <mingo@kernel.org>
>> CC: Peter Zijlstra <peterz@infradead.org>
>> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
>
> Please send patches against tip/master; the below didn't apply cleanly.
> It was a trivial conflict so I applied force and made it fit.

My sincere apologies for that; please allow me to resend the accepted
patches based on the latest tip/master. Forgive me for creating extra
work like this...

Regards,
Michael Wang

>
> Thanks!
>
>> ---
>> kernel/sched/fair.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index c61a614..1e10911 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -2298,7 +2298,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
>> 	int enqueue = 1;
>> 	long task_delta;
>>
>> -	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
>> +	se = cfs_rq->tg->se[cpu_of(rq)];
>>
>> 	cfs_rq->throttled = 0;
>> 	raw_spin_lock(&cfs_b->lock);
>> --
>> 1.7.4.1
* [PATCH v2 2/3] sched: code refine in unthrottle_cfs_rq()
From: Michael Wang @ 2013-06-06 2:39 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar

v2:
  re-based on latest tip/master

Directly use rq to save some code.

CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 143dcdb..0cea941 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2275,7 +2275,7 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
 	struct sched_entity *se;
 	long task_delta, dequeue = 1;

-	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+	se = cfs_rq->tg->se[cpu_of(rq)];

 	/* freeze hierarchy runnable averages while throttled */
 	rcu_read_lock();
--
1.7.4.1
* [tip:sched/core] sched: Refine the code in unthrottle_cfs_rq()
From: tip-bot for Michael Wang @ 2013-06-19 18:39 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, wangyun, tglx

Commit-ID:  22b958d8cc5127d22d2ad2141277d312d93fad6c
Gitweb:     http://git.kernel.org/tip/22b958d8cc5127d22d2ad2141277d312d93fad6c
Author:     Michael Wang <wangyun@linux.vnet.ibm.com>
AuthorDate: Tue, 4 Jun 2013 14:23:39 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 19 Jun 2013 12:58:41 +0200

sched: Refine the code in unthrottle_cfs_rq()

Directly use rq to save some code.

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/51AD87EB.1070605@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 143dcdb..47a30be 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2315,7 +2315,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
 	int enqueue = 1;
 	long task_delta;

-	se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
+	se = cfs_rq->tg->se[cpu_of(rq)];

 	cfs_rq->throttled = 0;
* [PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c
From: Michael Wang @ 2013-06-04 6:24 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar

default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
already defined previously, no need to declare again.

CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..73cad33 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2599,10 +2599,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
 	throttle_cfs_rq(cfs_rq);
 }

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
 static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
 {
 	struct cfs_bandwidth *cfs_b =
--
1.7.4.1
* Re: [PATCH 3/3] sched: remove the useless declaration in kernel/sched/fair.c
From: Peter Zijlstra @ 2013-06-05 11:16 UTC (permalink / raw)
To: Michael Wang; +Cc: LKML, Ingo Molnar

On Tue, Jun 04, 2013 at 02:24:08PM +0800, Michael Wang wrote:
> default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
> already defined previously, no need to declare again.
>
> CC: Ingo Molnar <mingo@kernel.org>
> CC: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>

Thanks!
* [PATCH v2 3/3] sched: remove the useless declaration in kernel/sched/fair.c
From: Michael Wang @ 2013-06-06 2:39 UTC (permalink / raw)
To: LKML, Peter Zijlstra, Ingo Molnar

v2:
  re-based on latest tip/master

default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
already defined previously, no need to declare again.

CC: Ingo Molnar <mingo@kernel.org>
CC: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0cea941..9efc50f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2618,10 +2618,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
 	throttle_cfs_rq(cfs_rq);
 }

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
 static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
 {
 	struct cfs_bandwidth *cfs_b =
--
1.7.4.1
* [tip:sched/core] sched: Femove the useless declaration in kernel/sched/fair.c
From: tip-bot for Michael Wang @ 2013-06-19 18:39 UTC (permalink / raw)
To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, peterz, wangyun, tglx

Commit-ID:  8404c90d050733b3404dc36c500f63ccb0c972ce
Gitweb:     http://git.kernel.org/tip/8404c90d050733b3404dc36c500f63ccb0c972ce
Author:     Michael Wang <wangyun@linux.vnet.ibm.com>
AuthorDate: Tue, 4 Jun 2013 14:24:08 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 19 Jun 2013 12:58:41 +0200

sched: Femove the useless declaration in kernel/sched/fair.c

default_cfs_period(), do_sched_cfs_period_timer(), do_sched_cfs_slack_timer()
already defined previously, no need to declare again.

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/51AD8808.7020608@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 47a30be..c0ac2c3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2618,10 +2618,6 @@ static void check_cfs_rq_runtime(struct cfs_rq *cfs_rq)
 	throttle_cfs_rq(cfs_rq);
 }

-static inline u64 default_cfs_period(void);
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun);
-static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b);
-
 static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
 {
 	struct cfs_bandwidth *cfs_b =