From: Gabriele Monaco <gmonaco@redhat.com>
To: Nam Cao <namcao@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
linux-trace-kernel@vger.kernel.org, linux-kernel@vger.kernel.org,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
Vincent Guittot <vincent.guittot@linaro.org>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>
Subject: Re: [PATCH 4/5] sched: Add rt task enqueue/dequeue trace points
Date: Thu, 31 Jul 2025 10:39:25 +0200
Message-ID: <5a7c492bbad7d56f347ee629734fdbab275d6333.camel@redhat.com>
In-Reply-To: <20250731073520.ktIOaGts@linutronix.de>
On Thu, 2025-07-31 at 09:35 +0200, Nam Cao wrote:
> On Wed, Jul 30, 2025 at 06:18:45PM +0200, Gabriele Monaco wrote:
> > Well, thinking about it again, these tracepoints might simplify
> > things considerably when tasks change policy...
> >
> > Syscalls may fail; for that you could register to sys_exit and check
> > the return value, but at that point the policy has already changed,
> > so you cannot tell whether it's a relevant event or not (e.g. same
> > policy).
> > Also, sched_setscheduler_nocheck() would be out of the picture here;
> > not sure how common that is, though (and it might not matter if you
> > only focus on userspace tasks).
> >
> > If you go down the route of adding tracepoints, why not have other
> > classes benefit too? I believe calling them from enqueue_task() /
> > dequeue_task() in sched/core.c would allow you to easily filter by
> > policy anyway (haven't tested).
>
> Something like the untested patch below?
>
> Will you have a use case for it too? Then I will try to accommodate
> your use case, otherwise I will do just enough for my case.
Well, I'm still working out the best set of tracepoints I need; if you
find your current approach cleaner, go ahead with that.
Unless anyone else complains, let's keep it like this.
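
Just to make the filtering idea above concrete, here is a minimal,
untested sketch of a probe on the generic tracepoint (names are only
illustrative, not a proposal for the final monitor):

/*
 * Untested sketch: react only to RT tasks from the generic enqueue
 * tracepoint proposed in the patch below.
 */
#include <linux/sched.h>
#include <trace/events/sched.h>

static void probe_enqueue_task(void *data, int cpu, struct task_struct *p)
{
	/* filter by policy, as discussed above */
	if (p->policy != SCHED_FIFO && p->policy != SCHED_RR)
		return;

	/* ... feed the monitor with the RT enqueue event ... */
}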
Thanks,
Gabriele
>
> Nam
>
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index c38f12f7f903..b50668052f99 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -906,6 +906,14 @@ DECLARE_TRACE(dequeue_task_rt,
> 	TP_PROTO(int cpu, struct task_struct *task),
> 	TP_ARGS(cpu, task));
>
> +DECLARE_TRACE(enqueue_task,
> +	TP_PROTO(int cpu, struct task_struct *task),
> +	TP_ARGS(cpu, task));
> +
> +DECLARE_TRACE(dequeue_task,
> +	TP_PROTO(int cpu, struct task_struct *task),
> +	TP_ARGS(cpu, task));
> +
> #endif /* _TRACE_SCHED_H */
>
> /* This part must be outside protection */
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b485e0639616..2af90532982a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2077,6 +2077,8 @@ unsigned long get_wchan(struct task_struct *p)
>
> void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
> {
> +	trace_enqueue_task_tp(rq->cpu, p);
> +
> 	if (!(flags & ENQUEUE_NOCLOCK))
> 		update_rq_clock(rq);
>
> @@ -2103,6 +2105,8 @@ void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
>  */
> inline bool dequeue_task(struct rq *rq, struct task_struct *p, int flags)
> {
> +	trace_dequeue_task_tp(rq->cpu, p);
> +
> 	if (sched_core_enabled(rq))
> 		sched_core_dequeue(rq, p, flags);
>
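
For reference, attaching the probe sketched earlier would presumably
follow the usual raw-tracepoint registration from the consumer's
init/exit path. Another untested sketch; the register/unregister
helper names with the _tp suffix are my assumption, based on what
DECLARE_TRACE() generates for trace_enqueue_task_tp() above:

#include <linux/module.h>

static int __init rt_filter_init(void)
{
	/* hook the probe from the earlier sketch into the new tracepoint */
	return register_trace_enqueue_task_tp(probe_enqueue_task, NULL);
}

static void __exit rt_filter_exit(void)
{
	unregister_trace_enqueue_task_tp(probe_enqueue_task, NULL);
	/* wait for in-flight probe calls before the code goes away */
	tracepoint_synchronize_unregister();
}

module_init(rt_filter_init);
module_exit(rt_filter_exit);
MODULE_LICENSE("GPL");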