public inbox for intel-gfx@lists.freedesktop.org
* [PATCH] drm/i915: Take runtime pm reference on hangcheck_info
@ 2015-02-05 10:16 Mika Kuoppala
  2015-02-05 10:25 ` Chris Wilson
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Mika Kuoppala @ 2015-02-05 10:16 UTC (permalink / raw)
  To: intel-gfx

We read the current coherent seqno and the actual head from the ring.
Hardware access requires taking a runtime_pm reference, which brings in
locking. As this debugfs entry is for debugging hangcheck behaviour,
including locking problems, we need to be flexible about taking locks.

Try to take the lock and, if we get it, read the seqno and actual head
from hardware. If we don't have exclusive access, fall back to the lazy
coherent seqno and print a token acthd, so the user can see that the
values are of a different nature.

Testcase: igt/pm_rpm/debugfs-read
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88910
Tested-by: Ding Heng <hengx.ding@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
---
 drivers/gpu/drm/i915/i915_debugfs.c | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 9af17fb..5a6b0e2 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1223,8 +1223,11 @@ out:
 static int i915_hangcheck_info(struct seq_file *m, void *unused)
 {
 	struct drm_info_node *node = m->private;
-	struct drm_i915_private *dev_priv = to_i915(node->minor->dev);
+	struct drm_device *dev = node->minor->dev;
+	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_engine_cs *ring;
+	u64 acthd[I915_NUM_RINGS];
+	u32 seqno[I915_NUM_RINGS];
 	int i;
 
 	if (!i915.enable_hangcheck) {
@@ -1232,6 +1235,23 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
 		return 0;
 	}
 
+	if (mutex_trylock(&dev->struct_mutex)) {
+		intel_runtime_pm_get(dev_priv);
+
+		for_each_ring(ring, dev_priv, i) {
+			seqno[i] = ring->get_seqno(ring, false);
+			acthd[i] = intel_ring_get_active_head(ring);
+		}
+
+		intel_runtime_pm_put(dev_priv);
+		mutex_unlock(&dev->struct_mutex);
+	} else {
+		for_each_ring(ring, dev_priv, i) {
+			seqno[i] = ring->get_seqno(ring, true);
+			acthd[i] = -1;
+		}
+	}
+
 	if (delayed_work_pending(&dev_priv->gpu_error.hangcheck_work)) {
 		seq_printf(m, "Hangcheck active, fires in %dms\n",
 			   jiffies_to_msecs(dev_priv->gpu_error.hangcheck_work.timer.expires -
@@ -1242,12 +1262,12 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
 	for_each_ring(ring, dev_priv, i) {
 		seq_printf(m, "%s:\n", ring->name);
 		seq_printf(m, "\tseqno = %x [current %x]\n",
-			   ring->hangcheck.seqno, ring->get_seqno(ring, false));
+			   ring->hangcheck.seqno, seqno[i]);
 		seq_printf(m, "\taction = %d\n", ring->hangcheck.action);
 		seq_printf(m, "\tscore = %d\n", ring->hangcheck.score);
 		seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
 			   (long long)ring->hangcheck.acthd,
-			   (long long)intel_ring_get_active_head(ring));
+			   (long long)acthd[i]);
 		seq_printf(m, "\tmax ACTHD = 0x%08llx\n",
 			   (long long)ring->hangcheck.max_acthd);
 	}
-- 
1.9.1



end of thread, other threads:[~2015-02-09 12:30 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-02-05 10:16 [PATCH] drm/i915: Take runtime pm reference on hangcheck_info Mika Kuoppala
2015-02-05 10:25 ` Chris Wilson
2015-02-05 14:29 ` Daniel Vetter
2015-02-05 15:54   ` Mika Kuoppala
2015-02-05 16:41   ` Mika Kuoppala
2015-02-05 16:56     ` Chris Wilson
2015-02-09 12:31       ` Jani Nikula
2015-02-06  4:25     ` shuang.he
2015-02-05 15:33 ` shuang.he
