linux-mm.kvack.org archive mirror
* [RFC][4.1.15-rt17 PATCH] mm: swap: lru drain don't use workqueue with PREEMPT_RT_FULL
@ 2016-01-11  0:43 l
  2016-01-12 12:01 ` Thomas Gleixner
  0 siblings, 1 reply; 2+ messages in thread
From: l @ 2016-01-11  0:43 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: LKML, Thomas Gleixner, rostedt, John Kacur, linux-mm,
	Leandro Dorileo

From: Leandro Dorileo <leandro.maciel.dorileo@intel.com>

On an SMP system running an -rt kernel with CONFIG_PREEMPT_RT_FULL, under
heavy CPU load, an arbitrary process that calls mlockall() with the
MCL_CURRENT flag will block indefinitely - until the process generating
the heavy CPU load finishes (that process runs with sched priority > 0).

Since the MCL_CURRENT flag is passed to mlockall(), the kernel tries to
drain the LRU pagevecs on all CPUs. lru_add_drain_all() queues a work
item to drain the LRU on each online CPU and then flushes the work,
i.e. waits until every work item has finished.

The drain work for the heavily loaded core never finishes - as
mentioned above - until the process generating the heavy CPU load
exits. The work item is never scheduled, even though the calling
process has been.

This patch adds an lru_add_drain_all() implementation for this
situation, which performs the LRU drain synchronously on behalf of the
calling process.

Signed-off-by: Leandro Dorileo <leandro.maciel.dorileo@intel.com>
---
 mm/swap.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/swap.c b/mm/swap.c
index 1785ac6..df807b4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -864,6 +864,23 @@ void lru_add_drain(void)
 	local_unlock_cpu(swapvec_lock);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+void lru_add_drain_all(void)
+{
+	static DEFINE_MUTEX(lock);
+	int cpu;
+
+	mutex_lock(&lock);
+	get_online_cpus();
+
+	for_each_online_cpu(cpu) {
+		smp_call_function_single(cpu, lru_add_drain, NULL, 1);
+	}
+
+	put_online_cpus();
+	mutex_unlock(&lock);
+}
+#else
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
@@ -900,6 +917,7 @@ void lru_add_drain_all(void)
 	put_online_cpus();
 	mutex_unlock(&lock);
 }
+#endif
 
 /**
  * release_pages - batched page_cache_release()
-- 
2.7.0



* Re: [RFC][4.1.15-rt17 PATCH] mm: swap: lru drain don't use workqueue with PREEMPT_RT_FULL
  2016-01-11  0:43 [RFC][4.1.15-rt17 PATCH] mm: swap: lru drain don't use workqueue with PREEMPT_RT_FULL l
@ 2016-01-12 12:01 ` Thomas Gleixner
  0 siblings, 0 replies; 2+ messages in thread
From: Thomas Gleixner @ 2016-01-12 12:01 UTC (permalink / raw)
  To: l
  Cc: Sebastian Andrzej Siewior, LKML, rostedt, John Kacur, linux-mm,
	Leandro Dorileo

On Sun, 10 Jan 2016, l@dorileo.org wrote:
> +#ifdef CONFIG_PREEMPT_RT_FULL
> +void lru_add_drain_all(void)
> +{
> +	static DEFINE_MUTEX(lock);
> +	int cpu;
> +
> +	mutex_lock(&lock);
> +	get_online_cpus();
> +
> +	for_each_online_cpu(cpu) {
> +		smp_call_function_single(cpu, lru_add_drain, NULL, 1);

How is that supposed to work on RT? Not at all, because lru_add_drain() takes
'sleeping' spinlocks and you cannot do that from hard interrupt context.

Enable lockdep (what you should have done before posting) and watch the
fireworks.

Thanks,

	tglx

