From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frederic Weisbecker
Subject: Re: [PATCH v14 04/14] task_isolation: add initial support
Date: Thu, 11 Aug 2016 20:11:33 +0200
Message-ID: <20160811181132.GD4214@lerouge>
References: <1470774596-17341-1-git-send-email-cmetcalf@mellanox.com>
 <1470774596-17341-5-git-send-email-cmetcalf@mellanox.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To: <1470774596-17341-5-git-send-email-cmetcalf@mellanox.com>
Sender: linux-doc-owner@vger.kernel.org
To: Chris Metcalf
Cc: Gilad Ben Yossef, Steven Rostedt, Ingo Molnar, Peter Zijlstra,
 Andrew Morton, Rik van Riel, Tejun Heo, Thomas Gleixner,
 "Paul E. McKenney", Christoph Lameter, Viresh Kumar, Catalin Marinas,
 Will Deacon, Andy Lutomirski, Michal Hocko, linux-mm@kvack.org,
 linux-doc@vger.kernel.org, linux-api@vger.kernel.org,
 linux-kernel@vger.kernel.org
List-Id: linux-api@vger.kernel.org

On Tue, Aug 09, 2016 at 04:29:46PM -0400, Chris Metcalf wrote:
> +/*
> + * Each time we try to prepare for return to userspace in a process
> + * with task isolation enabled, we run this code to quiesce whatever
> + * subsystems we can readily quiesce to avoid later interrupts.
> + */
> +void task_isolation_enter(void)
> +{
> +	WARN_ON_ONCE(irqs_disabled());
> +
> +	/* Drain the pagevecs to avoid unnecessary IPI flushes later. */
> +	lru_add_drain();
> +
> +	/* Quieten the vmstat worker so it won't interrupt us. */
> +	quiet_vmstat_sync();

So this is going to be called every time we resume to userspace while
in task isolation mode, right?

Do we need to quiesce vmstat every time before entering userspace?
I thought that vmstat only needs to be offlined once and for all?
And how about the LRU?

> +
> +	/*
> +	 * Request rescheduling unless we are in full dynticks mode.
> +	 * We would eventually get pre-empted without this, and if
> +	 * there's another task waiting, it would run; but by
> +	 * explicitly requesting the reschedule, we may reduce the
> +	 * latency.  We could directly call schedule() here as well,
> +	 * but since our caller is the standard place where schedule()
> +	 * is called, we defer to the caller.
> +	 *
> +	 * A more substantive approach here would be to use a struct
> +	 * completion here explicitly, and complete it when we shut
> +	 * down dynticks, but since we presumably have nothing better
> +	 * to do on this core anyway, just spinning seems plausible.
> +	 */
> +	if (!tick_nohz_tick_stopped())
> +		set_tsk_need_resched(current);

Again, that won't help :-)
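
To make that concrete, here is a very rough sketch of the struct
completion approach that the comment above mentions. Every name below
is made up, none of this is in the patch, and a real version would at
least need a per-cpu completion plus reinit_completion() on each pass:

#include <linux/completion.h>
#include <linux/tick.h>

static DECLARE_COMPLETION(isolation_tick_stopped);

/*
 * Imagined hook: the dynticks code would call this on the isolated
 * cpu once the tick is fully stopped.
 */
void task_isolation_tick_stopped(void)
{
	complete(&isolation_tick_stopped);
}

/* The resume path could then sleep instead of spin-retrying: */
void task_isolation_wait_for_tick(void)
{
	if (!tick_nohz_tick_stopped())
		wait_for_completion(&isolation_tick_stopped);
}

That said, sleeping there lets another task run on the very cpu whose
tick we are waiting to stop, so as the comment notes, spinning (or
rather fixing whatever keeps the tick alive) may well be the saner
option.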
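
And coming back to the vmstat/LRU question above, what I have in mind
is something along these lines (again purely illustrative, the flag
and its name are invented):

#include <linux/percpu.h>
#include <linux/swap.h>		/* lru_add_drain() */
#include <linux/vmstat.h>

static DEFINE_PER_CPU(bool, isolation_quiesced);

void task_isolation_enter(void)
{
	WARN_ON_ONCE(irqs_disabled());

	/* Quiesce once per isolation period, not on every resume. */
	if (!this_cpu_read(isolation_quiesced)) {
		lru_add_drain();
		quiet_vmstat_sync();
		this_cpu_write(isolation_quiesced, true);
	}

	/* ... rest of task_isolation_enter() ... */
}

The hard part of course is clearing the flag again from every path
that refills the pagevecs or dirties vmstat, which is why I'm asking
whether the every-time call is deliberate.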