From: Andrew Jones <drjones@redhat.com>
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, rkrcmar@redhat.com
Subject: [PATCH kvm-unit-tests v2 2/2] arm/arm64: smp: add deadlock detection
Date: Fri,  2 Jun 2017 17:31:09 +0200	[thread overview]
Message-ID: <20170602153109.2904-3-drjones@redhat.com> (raw)
In-Reply-To: <20170602153109.2904-1-drjones@redhat.com>

on_cpu() and friends are risky when implemented without IPIs (no
preemption), because we can easily end up with deadlocks. Luckily,
those deadlocks are also easy to detect, and asserting on them at
least makes them easier to debug.

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/smp.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/lib/arm/smp.c b/lib/arm/smp.c
index b4b43237e32e..bb999243de63 100644
--- a/lib/arm/smp.c
+++ b/lib/arm/smp.c
@@ -76,9 +76,24 @@ typedef void (*on_cpu_func)(void *);
 struct on_cpu_info {
 	on_cpu_func func;
 	void *data;
+	cpumask_t waiters;
 };
 static struct on_cpu_info on_cpu_info[NR_CPUS];
 
+static void cpu_wait(int cpu)
+{
+	int me = smp_processor_id();
+
+	if (cpu == me)
+		return;
+
+	cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
+	assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+	while (!cpu_idle(cpu))
+		wfe();
+	cpumask_clear_cpu(me, &on_cpu_info[cpu].waiters);
+}
+
 void do_idle(void)
 {
 	int cpu = smp_processor_id();
@@ -117,8 +132,7 @@ void on_cpu_async(int cpu, void (*func)(void *data), void *data)
 	spin_unlock(&lock);
 
 	for (;;) {
-		while (!cpu_idle(cpu))
-			wfe();
+		cpu_wait(cpu);
 		spin_lock(&lock);
 		if ((volatile void *)on_cpu_info[cpu].func == NULL)
 			break;
@@ -134,9 +148,7 @@ void on_cpu_async(int cpu, void (*func)(void *data), void *data)
 void on_cpu(int cpu, void (*func)(void *data), void *data)
 {
 	on_cpu_async(cpu, func, data);
-
-	while (!cpu_idle(cpu))
-		wfe();
+	cpu_wait(cpu);
 }
 
 void on_cpus(void (*func)(void))
@@ -150,6 +162,14 @@ void on_cpus(void (*func)(void))
 	}
 	func();
 
+	for_each_present_cpu(cpu) {
+		if (cpu == me)
+			continue;
+		cpumask_set_cpu(me, &on_cpu_info[cpu].waiters);
+		assert_msg(!cpumask_test_cpu(cpu, &on_cpu_info[me].waiters), "CPU%d <=> CPU%d deadlock detected", me, cpu);
+	}
 	while (cpumask_weight(&cpu_idle_mask) < nr_cpus - 1)
 		wfe();
+	for_each_present_cpu(cpu)
+		cpumask_clear_cpu(me, &on_cpu_info[cpu].waiters);
 }
-- 
2.9.4
