From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 20 Nov 2017 23:57:51 -0500
From: Steven Rostedt <rostedt@goodmis.org>
To: joe.korty@concurrent-rt.com
Cc: Thomas Gleixner, Peter Zijlstra, Linux Kernel Mailing List
Subject: Re: [PATCH] 4.4.86-rt99: fix sync breakage between nr_cpus_allowed and cpus_allowed
Message-ID: <20171120235751.0424cf23@vmware.local.home>
In-Reply-To: <20171120230207.19a4bc14@vmware.local.home>
References: <20171115192529.GA14158@zipoli.concurrent-rt.com>
	<20171117174851.2a253785@gandalf.local.home>
	<20171120163040.GA25993@zipoli.concurrent-rt.com>
	<20171120230207.19a4bc14@vmware.local.home>
X-Mailer: Claws Mail 3.15.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8BIT
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 20 Nov 2017 23:02:07 -0500
Steven Rostedt wrote:

> Ideally, I would like to stay close to what upstream -rt does. Would
> you be able to backport the 4.11-rt patch?
>
> I'm currently working on releasing 4.9-rt and 4.4-rt with the latest
> backports. I could easily add this one too.

Speaking of which, I just backported this patch to 4.4-rt. Is this what
you are talking about?

-- Steve

>From 1dc89be37874bfc7bb4a0ea7c45492d7db39f62b Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior
Date: Mon, 19 Jun 2017 09:55:47 +0200
Subject: [PATCH] sched/migrate disable: handle updated task-mask mg-dis section

If a task's cpumask changes while the task is in a migrate_disable()
section, we don't react to it after migrate_enable(). This matters if
the current CPU is no longer part of the new cpumask. We also miss the
->set_cpus_allowed() callback. This patch fixes it by setting
task->migrate_disable_update whenever such a "delayed" update is
required, so that migrate_enable() can apply the new mask. This bug was
introduced while fixing an unrelated issue in migrate_disable() in
v4.4-rt3 (update_migrate_disable() was removed during that fix).
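
As a sketch of the scenario being fixed (kernel-context pseudocode, not
a self-contained runnable example; identifiers are taken from the patch
below):

/*
 * Task A, running on CPU 1:
 */
migrate_disable();		/* A is now pinned to CPU 1 */

/*
 * Meanwhile, task B shrinks A's affinity to CPU 2 only.
 * do_set_cpus_allowed() sees __migrate_disabled(A), copies the new
 * mask into A->cpus_allowed and, with this patch, also sets
 * A->migrate_disable_update = 1 instead of silently returning.
 */

/*
 * Task A again:
 */
migrate_enable();
/*
 * Before the patch: A keeps running on CPU 1, which is no longer in
 * its cpumask. After the patch: migrate_enable() notices
 * migrate_disable_update, replays __do_set_cpus_allowed_tail(), and
 * pushes A to CPU 2 via stop_one_cpu()/migration_cpu_stop().
 */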
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 include/linux/sched.h |    1 
 kernel/sched/core.c   |   59 ++++++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 54 insertions(+), 6 deletions(-)

Index: stable-rt.git/include/linux/sched.h
===================================================================
--- stable-rt.git.orig/include/linux/sched.h	2017-11-20 23:43:24.214077537 -0500
+++ stable-rt.git/include/linux/sched.h	2017-11-20 23:43:24.154079278 -0500
@@ -1438,6 +1438,7 @@ struct task_struct {
 	unsigned int policy;
 #ifdef CONFIG_PREEMPT_RT_FULL
 	int migrate_disable;
+	int migrate_disable_update;
 # ifdef CONFIG_SCHED_DEBUG
 	int migrate_disable_atomic;
 # endif
Index: stable-rt.git/kernel/sched/core.c
===================================================================
--- stable-rt.git.orig/kernel/sched/core.c	2017-11-20 23:43:24.214077537 -0500
+++ stable-rt.git/kernel/sched/core.c	2017-11-20 23:56:05.071687323 -0500
@@ -1212,18 +1212,14 @@ void set_cpus_allowed_common(struct task
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
 }
 
-void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+static void __do_set_cpus_allowed_tail(struct task_struct *p,
+				       const struct cpumask *new_mask)
 {
 	struct rq *rq = task_rq(p);
 	bool queued, running;
 
 	lockdep_assert_held(&p->pi_lock);
 
-	if (__migrate_disabled(p)) {
-		cpumask_copy(&p->cpus_allowed, new_mask);
-		return;
-	}
-
 	queued = task_on_rq_queued(p);
 	running = task_current(rq, p);
 
@@ -1246,6 +1242,20 @@ void do_set_cpus_allowed(struct task_str
 		enqueue_task(rq, p, ENQUEUE_RESTORE);
 }
 
+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+{
+	if (__migrate_disabled(p)) {
+		lockdep_assert_held(&p->pi_lock);
+
+		cpumask_copy(&p->cpus_allowed, new_mask);
+#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
+		p->migrate_disable_update = 1;
+#endif
+		return;
+	}
+	__do_set_cpus_allowed_tail(p, new_mask);
+}
+
 static DEFINE_PER_CPU(struct cpumask, sched_cpumasks);
 static DEFINE_MUTEX(sched_down_mutex);
 static cpumask_t sched_down_cpumask;
@@ -3231,6 +3241,43 @@ void migrate_enable(void)
 	 */
 	p->migrate_disable = 0;
 
+	if (p->migrate_disable_update) {
+		unsigned long flags;
+		struct rq *rq;
+
+		rq = task_rq_lock(p, &flags);
+		update_rq_clock(rq);
+
+		__do_set_cpus_allowed_tail(p, &p->cpus_allowed);
+		task_rq_unlock(rq, p, &flags);
+
+		p->migrate_disable_update = 0;
+
+		WARN_ON(smp_processor_id() != task_cpu(p));
+		if (!cpumask_test_cpu(task_cpu(p), &p->cpus_allowed)) {
+			const struct cpumask *cpu_valid_mask = cpu_active_mask;
+			struct migration_arg arg;
+			unsigned int dest_cpu;
+
+			if (p->flags & PF_KTHREAD) {
+				/*
+				 * Kernel threads are allowed on online && !active CPUs
+				 */
+				cpu_valid_mask = cpu_online_mask;
+			}
+			dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_allowed);
+			arg.task = p;
+			arg.dest_cpu = dest_cpu;
+
+			unpin_current_cpu();
+			preempt_lazy_enable();
+			preempt_enable();
+			stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+			tlb_migrate_finish(p->mm);
+			return;
+		}
+	}
+
 	unpin_current_cpu();
 	preempt_enable();
 	preempt_lazy_enable();