public inbox for linux-kernel@vger.kernel.org
From: Juri Lelli <juri.lelli@arm.com>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Clark Williams <williams@redhat.com>,
	John Kacur <jkacur@redhat.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Juri Lelli <juri.lelli@gmail.com>
Subject: Re: [BUG] Corrupted SCHED_DEADLINE bandwidth with cpusets
Date: Thu, 4 Feb 2016 18:32:59 +0000	[thread overview]
Message-ID: <20160204183259.GF29586@e106622-lin> (raw)
In-Reply-To: <20160204123103.058642ed@gandalf.local.home>

On 04/02/16 12:31, Steven Rostedt wrote:
> On Thu, 4 Feb 2016 16:30:49 +0000
> Juri Lelli <juri.lelli@arm.com> wrote:
> 
> > I've actually changed a bit this approach, and things seem better here.
> > Could you please give this a try? (You can also fetch the same branch).
> 
> It appears to fix the one issue I pointed out, but it doesn't fix the
> issue with cpusets.
> 
>  # burn&
>  # TASK=$!
>  # schedtool -E -t 2000000:20000000 $TASK
>  # grep dl /proc/sched_debug
> dl_rq[0]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[1]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[2]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[3]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[4]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[5]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[6]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> dl_rq[7]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 104857
> 
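[The dl_bw figures in the dump above are fixed-point fractions scaled by 2^20 (runtime/period << 20). A quick sanity check of the values, assuming the 2ms/20ms reservation from the schedtool line and the default 95% global limit (sched_rt_runtime_us=950000, sched_rt_period_us=1000000):]

```shell
# burn's reservation: runtime 2000000ns over period 20000000ns, scaled by 2^20
echo $(( 2000000 * (1 << 20) / 20000000 ))   # 104857 == dl_bw->total_bw
# default global limit: 950000us runtime over 1000000us period
echo $(( 950000 * (1 << 20) / 1000000 ))     # 996147 == dl_bw->bw
```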
>  # mkdir /sys/fs/cgroup/cpuset/my_cpuset
>  # echo 1 > /sys/fs/cgroup/cpuset/my_cpuset/cpuset.cpus
>  # grep dl /proc/sched_debug
> dl_rq[0]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[1]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[2]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[3]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[4]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[5]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[6]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> dl_rq[7]:
>   .dl_nr_running                 : 0
>   .dl_bw->bw                     : 996147
>   .dl_bw->total_bw               : 209714
> 
> It appears to add double the bandwidth.
> 
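[The numbers are consistent with exactly one extra copy of burn's bandwidth being accounted, i.e. the doubled total_bw is the original 104857 counted twice:]

```shell
# burn's bandwidth accounted twice after the cpuset operations
echo $(( 104857 + 104857 ))   # 209714 == dl_bw->total_bw in the dump above
```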

Mmm.. IIUC that's because we don't destroy any root_domain in this case,
since sched_load_balance of the parent is still set, so we add the
task's bandwidth to the existing root_domain a second time. I could fix
that with a flag indicating when we actually destroy root_domain(s),
but I fear it would make this solution even uglier than it already
is :/. More thinking required.
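[If that diagnosis is right, every rebuild that reuses the existing root_domain should re-add burn's bandwidth, so repeating the cpuset create/destroy cycle ought to keep inflating total_bw. A hedged way to check (needs root and the cgroup-v1 cpuset mount used in the transcript; my_cpuset holds no tasks, so rmdir is safe):]

```shell
# Repeat the create/destroy cycle from the report and watch total_bw;
# with the suspected bug it should grow on each rebuild instead of
# being recomputed from scratch.
for i in 1 2 3; do
    mkdir /sys/fs/cgroup/cpuset/my_cpuset
    echo 1 > /sys/fs/cgroup/cpuset/my_cpuset/cpuset.cpus
    grep 'total_bw' /proc/sched_debug | head -1
    rmdir /sys/fs/cgroup/cpuset/my_cpuset
done
```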

Thanks for testing.

Best,

- Juri


Thread overview: 8+ messages
2016-02-03 18:55 [BUG] Corrupted SCHED_DEADLINE bandwidth with cpusets Steven Rostedt
2016-02-03 18:57 ` Steven Rostedt
2016-02-04  9:54 ` Juri Lelli
2016-02-04 12:04   ` Juri Lelli
2016-02-04 12:27     ` Juri Lelli
2016-02-04 16:30       ` Juri Lelli
2016-02-04 17:31         ` Steven Rostedt
2016-02-04 18:32           ` Juri Lelli [this message]
