Date: Mon, 30 May 2016 13:18:03 +0000 (UTC)
From: Mathieu Desnoyers
To: Julien Desfossez
Cc: Thomas Gleixner, rostedt, linux-kernel@vger.kernel.org
Message-ID: <2082030050.22688.1464614283967.JavaMail.zimbra@efficios.com>
In-Reply-To: <1464362168-17064-1-git-send-email-jdesfossez@efficios.com>
References: <1464362168-17064-1-git-send-email-jdesfossez@efficios.com>
Subject: Re: [RFC PATCH 1/2] sched: encapsulate priority changes in a sched_set_prio static function
List-ID: linux-kernel@vger.kernel.org

----- On May 27, 2016, at 5:16 PM, Julien Desfossez jdesfossez@efficios.com wrote:

> Currently, the priority of tasks is modified directly in the scheduling
> functions. Encapsulate priority updates to enable instrumentation of
> priority changes. This will enable analysis of real-time scheduling
> delays per thread priority, which cannot be performed accurately if we
> only trace the priority of the currently scheduled processes.
>
> The call sites that modify the priority of a task are mostly system
> calls: sched_setscheduler, sched_setattr, sched_process_fork and
> set_user_nice. Priority can also be dynamically boosted through
> priority inheritance of rt_mutex by rt_mutex_setprio.
>
> Signed-off-by: Julien Desfossez

Reviewed-by: Mathieu Desnoyers

> ---
>  include/linux/sched.h |  3 ++-
>  kernel/sched/core.c   | 19 +++++++++++++------
>  2 files changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 52c4847..48b35c0 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1409,7 +1409,8 @@ struct task_struct {
>  #endif
>  	int on_rq;
>
> -	int prio, static_prio, normal_prio;
> +	int prio;			/* Updated through sched_set_prio() */
> +	int static_prio, normal_prio;
>  	unsigned int rt_priority;
>  	const struct sched_class *sched_class;
>  	struct sched_entity se;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d1f7149..6946b8f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2230,6 +2230,11 @@ int sysctl_schedstats(struct ctl_table *table, int write,
>  #endif
>  #endif
>
> +static void sched_set_prio(struct task_struct *p, int prio)
> +{
> +	p->prio = prio;
> +}
> +
>  /*
>   * fork()/clone()-time setup:
>   */
> @@ -2249,7 +2254,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
>  	/*
>  	 * Make sure we do not leak PI boosting priority to the child.
>  	 */
> -	p->prio = current->normal_prio;
> +	sched_set_prio(p, current->normal_prio);
>
>  	/*
>  	 * Revert to default priority/policy on fork if requested.
> @@ -2262,7 +2267,8 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
>  	} else if (PRIO_TO_NICE(p->static_prio) < 0)
>  		p->static_prio = NICE_TO_PRIO(0);
>
> -	p->prio = p->normal_prio = __normal_prio(p);
> +	p->normal_prio = __normal_prio(p);
> +	sched_set_prio(p, p->normal_prio);
>  	set_load_weight(p);
>
>  	/*
> @@ -3477,7 +3483,7 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
>  		p->sched_class = &fair_sched_class;
>  	}
>
> -	p->prio = prio;
> +	sched_set_prio(p, prio);
>
>  	if (running)
>  		p->sched_class->set_curr_task(rq);
> @@ -3524,7 +3530,7 @@ void set_user_nice(struct task_struct *p, long nice)
>  	p->static_prio = NICE_TO_PRIO(nice);
>  	set_load_weight(p);
>  	old_prio = p->prio;
> -	p->prio = effective_prio(p);
> +	sched_set_prio(p, effective_prio(p));
>  	delta = p->prio - old_prio;
>
>  	if (queued) {
> @@ -3731,9 +3737,10 @@ static void __setscheduler(struct rq *rq, struct task_struct *p,
>  	 * sched_setscheduler().
>  	 */
>  	if (keep_boost)
> -		p->prio = rt_mutex_get_effective_prio(p, normal_prio(p));
> +		sched_set_prio(p, rt_mutex_get_effective_prio(p,
> +				normal_prio(p)));
>  	else
> -		p->prio = normal_prio(p);
> +		sched_set_prio(p, normal_prio(p));
>
>  	if (dl_prio(p->prio))
>  		p->sched_class = &dl_sched_class;
> --
> 1.9.1

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com