public inbox for linux-kernel@vger.kernel.org
From: Juri Lelli <juri.lelli@gmail.com>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Miguel de Dios <migueldedios@google.com>,
	Steve Muckle <smuckle@google.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	linux-kernel@vger.kernel.org, kernel-team@android.com,
	Todd Kjos <tkjos@google.com>, Paul Turner <pjt@google.com>,
	Quentin Perret <quentin.perret@arm.com>,
	Patrick Bellasi <Patrick.Bellasi@arm.com>,
	Chris Redpath <Chris.Redpath@arm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	John Dias <joaodias@google.com>
Subject: Re: [PATCH] sched/fair: vruntime should normalize when switching from fair
Date: Fri, 7 Sep 2018 09:16:02 +0200	[thread overview]
Message-ID: <20180907071602.GA29405@localhost.localdomain> (raw)
In-Reply-To: <5fa77995-428e-077e-e236-7cc4a2e82577@arm.com>

On 06/09/18 16:25, Dietmar Eggemann wrote:
> Hi Juri,
> 
> On 08/23/2018 11:54 PM, Juri Lelli wrote:
> > On 23/08/18 18:52, Dietmar Eggemann wrote:
> > > Hi,
> > > 
> > > On 08/21/2018 01:54 AM, Miguel de Dios wrote:
> > > > On 08/17/2018 11:27 AM, Steve Muckle wrote:
> > > > > From: John Dias <joaodias@google.com>
> 
> [...]
> 
> > > 
> > > I tried to catch this issue on my Arm64 Juno board using pi_stress (and a
> > > slightly adapted pip_stress (usleep_val = 1500 and keep low as cfs)) from
> > > rt-tests but wasn't able to do so.
> > > 
> > > # pi_stress --inversions=1 --duration=1 --groups=1 --sched id=low,policy=cfs
> > > 
> > > Starting PI Stress Test
> > > Number of thread groups: 1
> > > Duration of test run: 1 seconds
> > > Number of inversions per group: 1
> > >       Admin thread SCHED_FIFO priority 4
> > > 1 groups of 3 threads will be created
> > >        High thread SCHED_FIFO priority 3
> > >         Med thread SCHED_FIFO priority 2
> > >         Low thread SCHED_OTHER nice 0
> > > 
> > > # ./pip_stress
> > > 
> > > In both cases, the cfs task entering rt_mutex_setprio() is queued, so
> > > dequeue_task_fair()->dequeue_entity(), which subtracts cfs_rq->min_vruntime
> > > from se->vruntime, is called on it before it gets the rt prio.
> > > 
> > > Maybe it requires a very specific use of the pthread library to provoke
> > > this issue by making sure that the cfs task really blocks/sleeps?
> > 
> > Maybe one could play with rt-app to recreate such a specific use case?
> > 
> > https://github.com/scheduler-tools/rt-app/blob/master/doc/tutorial.txt#L459
> 
> I played a little bit with rt-app on hikey960 to re-create Steve's test
> program.

Oh, nice! Thanks for sharing what you have got.

> Since there is no semaphore support (sem_wait(), sem_post()) I used
> condition variables (wait: pthread_cond_wait(), signal:
> pthread_cond_signal()). It's not exactly equivalent, since condition
> variables are stateless, but sleeping before the signals helps to
> maintain the state in this simple example.
> 
> This provokes the vruntime issue e.g. for cpus 0 and 4, but not for 0 and 1:
> 
> 
>  {
>     "global": {
>         "calibration" : 130,
> 	"pi_enabled" : true
>     },
>     "tasks": {
>         "rt_task": {
> 	    "loop" : 100,
> 	    "policy" : "SCHED_FIFO",
> 	    "cpus" : [0],
> 
> 	    "lock" : "b_mutex",
> 	    "wait" : { "ref" : "b_cond", "mutex" : "b_mutex" },
> 	    "unlock" : "b_mutex",
> 	    "sleep" : 3000,
> 	    "lock1" : "a_mutex",
> 	    "signal" : "a_cond",
> 	    "unlock1" : "a_mutex",
> 	    "lock2" : "pi-mutex",
> 	    "unlock2" : "pi-mutex"
>         },
> 	"cfs_task": {
> 	    "loop" : 100,
> 	    "policy" : "SCHED_OTHER",
> 	    "cpus" : [4],
> 
> 	    "lock" : "pi-mutex",
> 	    "sleep" : 3000,
> 	    "lock1" : "b_mutex",
> 	    "signal" : "b_cond",
> 	    "unlock" : "b_mutex",
> 	    "lock2" : "a_mutex",
> 	    "wait" : { "ref" : "a_cond", "mutex" : "a_mutex" },
> 	    "unlock1" : "a_mutex",
> 	    "unlock2" : "pi-mutex"
> 	}
>     }
> }
> 
> Adding semaphores is possible but rt-app has no easy way to initialize
> individual objects, e.g. sem_init(..., value). The only way I see is via the
> global section, like "pi_enabled". But then, this is true for all objects of
> this kind (in this case mutexes)?

Right, the global section should work fine. Why do you think this is a
problem/limitation?
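For illustration, an object-init option in the global section might look
like this (the "sem_init_value" key is purely hypothetical, not an existing
rt-app option, and would apply to every semaphore alike):

```json
{
    "global": {
        "calibration" : 130,
        "pi_enabled" : true,
        "sem_init_value" : 0
    }
}
```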

> So the following couple of lines extension to rt-app works because both
> semaphores can be initialized to 0:
> 
>  {
>     "global": {
>         "calibration" : 130,
> 	"pi_enabled" : true
>     },
>     "tasks": {
>         "rt_task": {
> 	    "loop" : 100,
> 	    "policy" : "SCHED_FIFO",
> 	    "cpus" : [0],
> 
> 	    "sem_wait" : "b_sem",
> 	    "sleep" : 1000,
> 	    "sem_post" : "a_sem",
> 
> 	    "lock" : "pi-mutex",
> 	    "unlock" : "pi-mutex"
>         },
> 	"cfs_task": {
> 	    "loop" : 100,
> 	    "policy" : "SCHED_OTHER",
> 	    "cpus" : [4],
> 
> 	    "lock" : "pi-mutex",
> 	    "sleep" : 1000,
> 	    "sem_post" : "b_sem",
> 	    "sem_wait" : "a_sem",
> 	    "unlock" : "pi-mutex"
> 	}
>     }
> }
> 
> Any thoughts on that? I can see something like this as infrastructure to
> create a regression test case based on rt-app and standard ftrace.

Agreed. I guess we should add your first example to the repo right away
(you'd be very welcome to create a PR) and then work on supporting the
second?


Thread overview: 29+ messages
2018-08-17 18:27 [PATCH] sched/fair: vruntime should normalize when switching from fair Steve Muckle
2018-08-20 23:54 ` Miguel de Dios
2018-08-23 16:52   ` Dietmar Eggemann
2018-08-24  6:54     ` Juri Lelli
2018-08-24 21:17       ` Steve Muckle
2018-09-06 23:25       ` Dietmar Eggemann
2018-09-07  7:16         ` Juri Lelli [this message]
2018-09-07  7:58           ` Vincent Guittot
2018-09-11  6:24             ` Dietmar Eggemann
2018-08-24  9:32   ` Peter Zijlstra
2018-08-24  9:47     ` Peter Zijlstra
2018-08-24 21:24       ` Steve Muckle
2018-08-27 11:14         ` Peter Zijlstra
2018-08-28 14:53           ` Dietmar Eggemann
2018-08-29 10:54             ` Dietmar Eggemann
2018-08-29 11:59               ` Peter Zijlstra
2018-08-29 15:33                 ` Dietmar Eggemann
2018-08-31 22:24                   ` Steve Muckle
2018-09-26  9:50             ` Wanpeng Li
2018-09-26 22:38               ` Dietmar Eggemann
2018-09-27  1:19                 ` Wanpeng Li
2018-09-27 13:22                   ` Dietmar Eggemann
2018-09-28  0:43                     ` Wanpeng Li
2018-09-28 16:10                       ` Steve Muckle
2018-09-28 16:45                         ` Joel Fernandes
2018-09-28 17:35                         ` Dietmar Eggemann
2018-09-29  1:07                           ` Wanpeng Li
2018-09-28 17:11                       ` Dietmar Eggemann
2018-09-28 16:43                   ` Joel Fernandes
