Date: Mon, 6 Jun 2016 21:47:52 +0000 (UTC)
From: Mathieu Desnoyers
To: Julien Desfossez, Peter Zijlstra, Ingo Molnar
Cc: Thomas Gleixner, rostedt, linux-kernel@vger.kernel.org
Message-ID: <1547310235.29814.1465249672015.JavaMail.zimbra@efficios.com>
In-Reply-To: <2082030050.22688.1464614283967.JavaMail.zimbra@efficios.com>
References: <1464362168-17064-1-git-send-email-jdesfossez@efficios.com>
 <2082030050.22688.1464614283967.JavaMail.zimbra@efficios.com>
Subject: Re: [RFC PATCH 1/2] sched: encapsulate priority changes in a sched_set_prio static function
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

----- On May 30, 2016, at 9:18 AM, Mathieu Desnoyers mathieu.desnoyers@efficios.com wrote:

> ----- On May 27, 2016, at 5:16 PM, Julien Desfossez jdesfossez@efficios.com
> wrote:
>
>> Currently, the priority of tasks is modified directly in the scheduling
>> functions. Encapsulate priority updates to enable instrumentation of
>> priority changes. This will enable analysis of real-time scheduling
>> delays per thread priority, which cannot be performed accurately if we
>> only trace the priority of the currently scheduled processes.
>>
>> The call sites that modify the priority of a task are mostly system
>> calls: sched_setscheduler, sched_setattr, sched_process_fork and
>> set_user_nice. Priority can also be dynamically boosted through
>> priority inheritance of rt_mutex by rt_mutex_setprio.
>>
>> Signed-off-by: Julien Desfossez
>
> Reviewed-by: Mathieu Desnoyers

CCing Ingo and Peter on the first patch of the series too, so they can
let us know if we missed anything fundamental related to sched_deadline.

Thanks,

Mathieu

>
>> ---
>>  include/linux/sched.h |  3 ++-
>>  kernel/sched/core.c   | 19 +++++++++++++------
>>  2 files changed, 15 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 52c4847..48b35c0 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -1409,7 +1409,8 @@ struct task_struct {
>>  #endif
>>  	int on_rq;
>>
>> -	int prio, static_prio, normal_prio;
>> +	int prio;		/* Updated through sched_set_prio() */
>> +	int static_prio, normal_prio;
>>  	unsigned int rt_priority;
>>  	const struct sched_class *sched_class;
>>  	struct sched_entity se;
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index d1f7149..6946b8f 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2230,6 +2230,11 @@ int sysctl_schedstats(struct ctl_table *table, int write,
>>  #endif
>>  #endif
>>
>> +static void sched_set_prio(struct task_struct *p, int prio)
>> +{
>> +	p->prio = prio;
>> +}
>> +
>>  /*
>>   * fork()/clone()-time setup:
>>   */
>> @@ -2249,7 +2254,7 @@ int sched_fork(unsigned long clone_flags, struct
>> task_struct *p)
>>  	/*
>>  	 * Make sure we do not leak PI boosting priority to the child.
>>  	 */
>> -	p->prio = current->normal_prio;
>> +	sched_set_prio(p, current->normal_prio);
>>
>>  	/*
>>  	 * Revert to default priority/policy on fork if requested.
>> @@ -2262,7 +2267,8 @@ int sched_fork(unsigned long clone_flags, struct
>> task_struct *p)
>>  		} else if (PRIO_TO_NICE(p->static_prio) < 0)
>>  			p->static_prio = NICE_TO_PRIO(0);
>>
>> -	p->prio = p->normal_prio = __normal_prio(p);
>> +	p->normal_prio = __normal_prio(p);
>> +	sched_set_prio(p, p->normal_prio);
>>  	set_load_weight(p);
>>
>>  	/*
>> @@ -3477,7 +3483,7 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
>>  			p->sched_class = &fair_sched_class;
>>  	}
>>
>> -	p->prio = prio;
>> +	sched_set_prio(p, prio);
>>
>>  	if (running)
>>  		p->sched_class->set_curr_task(rq);
>> @@ -3524,7 +3530,7 @@ void set_user_nice(struct task_struct *p, long nice)
>>  	p->static_prio = NICE_TO_PRIO(nice);
>>  	set_load_weight(p);
>>  	old_prio = p->prio;
>> -	p->prio = effective_prio(p);
>> +	sched_set_prio(p, effective_prio(p));
>>  	delta = p->prio - old_prio;
>>
>>  	if (queued) {
>> @@ -3731,9 +3737,10 @@ static void __setscheduler(struct rq *rq, struct
>> task_struct *p,
>>  	 * sched_setscheduler().
>>  	 */
>>  	if (keep_boost)
>> -		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
>> +		sched_set_prio(p, rt_mutex_get_effective_prio(p,
>> +				normal_prio(p)));
>>  	else
>> -		p->prio = normal_prio(p);
>> +		sched_set_prio(p, normal_prio(p));
>>
>>  	if (dl_prio(p->prio))
>>  		p->sched_class = &dl_sched_class;
>> --
>> 1.9.1
>
> --
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com