From: Peter Zijlstra <peterz@infradead.org>
To: Juri Lelli <juri.lelli@redhat.com>
Cc: mingo@redhat.com, rostedt@goodmis.org, tj@kernel.org,
	linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
	claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
	bristot@redhat.com, mathieu.poirier@linaro.org,
	lizefan@huawei.com, cgroups@vger.kernel.org
Subject: Re: [PATCH v6 4/5] sched/core: Prevent race condition between cpuset and __sched_setscheduler()
Date: Mon, 4 Feb 2019 13:10:29 +0100	[thread overview]
Message-ID: <20190204121029.GD17550@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20190117084739.17078-5-juri.lelli@redhat.com>

On Thu, Jan 17, 2019 at 09:47:38AM +0100, Juri Lelli wrote:
> No synchronisation mechanism exists between the cpuset subsystem and calls
> to function __sched_setscheduler(). As such, it is possible that new root
> domains are created on the cpuset side while a deadline acceptance test
> is carried out in __sched_setscheduler(), leading to a potential oversell
> of CPU bandwidth.
> 
> Grab callback_lock from the core scheduler to prevent situations such as
> the one described above from happening.

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5263383170e..d928a42b8852 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4224,6 +4224,13 @@ static int __sched_setscheduler(struct task_struct *p,
>  	rq = task_rq_lock(p, &rf);
>  	update_rq_clock(rq);
>  
> +	/*
> +	 * Make sure we don't race with the cpuset subsystem where root
> +	 * domains can be rebuilt or modified while operations like DL
> +	 * admission checks are carried out.
> +	 */
> +	cpuset_read_only_lock();
> +
>  	/*
>  	 * Changing the policy of the stop threads its a very bad idea:
>  	 */
> @@ -4285,6 +4292,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	/* Re-check policy now with rq lock held: */
>  	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
>  		policy = oldpolicy = -1;
> +		cpuset_read_only_unlock();
>  		task_rq_unlock(rq, p, &rf);
>  		goto recheck;
>  	}
> @@ -4342,6 +4350,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  
>  	/* Avoid rq from going away on us: */
>  	preempt_disable();
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  
>  	if (pi)
> @@ -4354,6 +4363,7 @@ static int __sched_setscheduler(struct task_struct *p,
>  	return 0;
>  
>  unlock:
> +	cpuset_read_only_unlock();
>  	task_rq_unlock(rq, p, &rf);
>  	return retval;
>  }
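
For readers following the archive: cpuset_read_only_lock()/cpuset_read_only_unlock()
are provided on the cpuset side of this series. A minimal sketch of what they
presumably look like, assuming they are thin wrappers around cpuset's
callback_lock (which patch 3/5 makes a raw spinlock so that it can be taken
under rq->lock):

	/*
	 * Sketch only -- not taken from the posted patches.  Assumes the
	 * helpers simply serialise __sched_setscheduler() against cpuset's
	 * callback_lock, made a raw_spinlock_t by patch 3/5 of this series.
	 */
	static DEFINE_RAW_SPINLOCK(callback_lock);

	void cpuset_read_only_lock(void)
	{
		raw_spin_lock(&callback_lock);
	}

	void cpuset_read_only_unlock(void)
	{
		raw_spin_unlock(&callback_lock);
	}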

Why take callback_lock inside rq->lock and not the other way around?
AFAICT there is no pre-existing order so we can pick one here.
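
For those skimming: a schematic of the two orderings under discussion. The
function names below are invented for illustration and everything except the
locking is elided; the first variant is what the posted hunk does, the second
is the inversion being asked about.

	/* Illustration only: the nesting the patch uses. */
	static void change_policy_nested(struct task_struct *p)
	{
		struct rq_flags rf;
		struct rq *rq;

		rq = task_rq_lock(p, &rf);	/* p->pi_lock, then rq->lock */
		cpuset_read_only_lock();	/* callback_lock nests innermost */
		/* ... DL admission test against the current root domains ... */
		cpuset_read_only_unlock();
		task_rq_unlock(rq, p, &rf);
	}

	/* Illustration only: the inverted order. */
	static void change_policy_inverted(struct task_struct *p)
	{
		struct rq_flags rf;
		struct rq *rq;

		cpuset_read_only_lock();	/* callback_lock outermost */
		rq = task_rq_lock(p, &rf);
		/* ... */
		task_rq_unlock(rq, p, &rf);
		cpuset_read_only_unlock();
	}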


Thread overview: 21+ messages
2019-01-17  8:47 [PATCH v6 0/5] sched/deadline: fix cpusets bandwidth accounting Juri Lelli
2019-01-17  8:47 ` [PATCH v6 1/5] sched/topology: Adding function partition_sched_domains_locked() Juri Lelli
2019-01-17  8:47 ` [PATCH v6 2/5] sched/core: Streamlining calls to task_rq_unlock() Juri Lelli
2019-01-17  8:47 ` [PATCH v6 3/5] cgroup/cpuset: make callback_lock raw Juri Lelli
2019-02-04 11:55   ` Peter Zijlstra
2019-02-05  9:18     ` Juri Lelli
2019-02-04 12:02   ` Peter Zijlstra
2019-02-04 12:07     ` Peter Zijlstra
2019-02-05  9:18       ` Juri Lelli
2019-01-17  8:47 ` [PATCH v6 4/5] sched/core: Prevent race condition between cpuset and __sched_setscheduler() Juri Lelli
2019-02-04 12:10   ` Peter Zijlstra [this message]
2019-02-05  9:51     ` Juri Lelli
2019-02-05 11:20       ` Peter Zijlstra
2019-02-05 11:49         ` Juri Lelli
2019-01-17  8:47 ` [PATCH v6 5/5] cpuset: Rebuild root domain deadline accounting information Juri Lelli
2019-01-18 16:17 ` [PATCH v6 0/5] sched/deadline: fix cpusets bandwidth accounting Tejun Heo
2019-01-18 16:46   ` Juri Lelli
2019-02-04  9:02     ` Juri Lelli
2019-02-04 12:18       ` Peter Zijlstra
2019-02-04 18:45         ` Waiman Long
2019-02-05  9:18           ` Juri Lelli
