public inbox for intel-gfx@lists.freedesktop.org
From: Jani Nikula <jani.nikula@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/i915: Fix nesting of filelist_mutex vs struct_mutex in i915_ppgtt_info
Date: Mon, 22 Aug 2016 15:28:03 +0300	[thread overview]
Message-ID: <87pop1t2cs.fsf@intel.com> (raw)
In-Reply-To: <20160822121557.GC856@nuc-i3427.alporthouse.com>

On Mon, 22 Aug 2016, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Mon, Aug 22, 2016 at 03:09:48PM +0300, Jani Nikula wrote:
>> 
>> On Mon, 22 Aug 2016, Chris Wilson <chris@chris-wilson.co.uk> wrote:
>> > [  284.922349] ======================================================
>> > [  284.922355] [ INFO: possible circular locking dependency detected ]
>> > [  284.922361] 4.8.0-rc2+ #430 Tainted: G        W
>> > [  284.922366] -------------------------------------------------------
>> > [  284.922371] cat/1197 is trying to acquire lock:
>> > [  284.922376]  (&dev->filelist_mutex){+.+...}, at: [<ffffffffa0055ba2>] i915_ppgtt_info+0x82/0x390 [i915]
>> > [  284.922423]
>> > [  284.922423] but task is already holding lock:
>> > [  284.922429]  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0055b55>] i915_ppgtt_info+0x35/0x390 [i915]
>> > [  284.922465]
>> > [  284.922465] which lock already depends on the new lock.
>> > [  284.922465]
>> > [  284.922471]
>> > [  284.922471] the existing dependency chain (in reverse order) is:
>> > [  284.922477]
>> > -> #1 (&dev->struct_mutex){+.+.+.}:
>> > [  284.922493]        [<ffffffff81087710>] lock_acquire+0x60/0x80
>> > [  284.922505]        [<ffffffff8143e96f>] mutex_lock_nested+0x5f/0x360
>> > [  284.922520]        [<ffffffffa004f877>] print_context_stats+0x37/0xf0 [i915]
>> > [  284.922549]        [<ffffffffa00535f5>] i915_gem_object_info+0x265/0x490 [i915]
>> > [  284.922581]        [<ffffffff81144491>] seq_read+0xe1/0x3b0
>> > [  284.922592]        [<ffffffff811f77b3>] full_proxy_read+0x83/0xb0
>> > [  284.922604]        [<ffffffff8111ba03>] __vfs_read+0x23/0x110
>> > [  284.922616]        [<ffffffff8111c9b9>] vfs_read+0x89/0x110
>> > [  284.922626]        [<ffffffff8111dbf4>] SyS_read+0x44/0xa0
>> > [  284.922636]        [<ffffffff81442be9>] entry_SYSCALL_64_fastpath+0x1c/0xac
>> > [  284.922648]
>> > -> #0 (&dev->filelist_mutex){+.+...}:
>> > [  284.922667]        [<ffffffff810871fc>] __lock_acquire+0x10fc/0x1270
>> > [  284.922678]        [<ffffffff81087710>] lock_acquire+0x60/0x80
>> > [  284.922689]        [<ffffffff8143e96f>] mutex_lock_nested+0x5f/0x360
>> > [  284.922701]        [<ffffffffa0055ba2>] i915_ppgtt_info+0x82/0x390 [i915]
>> > [  284.922729]        [<ffffffff81144491>] seq_read+0xe1/0x3b0
>> > [  284.922739]        [<ffffffff811f77b3>] full_proxy_read+0x83/0xb0
>> > [  284.922750]        [<ffffffff8111ba03>] __vfs_read+0x23/0x110
>> > [  284.922761]        [<ffffffff8111c9b9>] vfs_read+0x89/0x110
>> > [  284.922771]        [<ffffffff8111dbf4>] SyS_read+0x44/0xa0
>> > [  284.922781]        [<ffffffff81442be9>] entry_SYSCALL_64_fastpath+0x1c/0xac
>> > [  284.922793]
>> > [  284.922793] other info that might help us debug this:
>> > [  284.922793]
>> > [  284.922809]  Possible unsafe locking scenario:
>> > [  284.922809]
>> > [  284.922818]        CPU0                    CPU1
>> > [  284.922825]        ----                    ----
>> > [  284.922831]   lock(&dev->struct_mutex);
>> > [  284.922842]                                lock(&dev->filelist_mutex);
>> > [  284.922854]                                lock(&dev->struct_mutex);
>> > [  284.922865]   lock(&dev->filelist_mutex);
>> > [  284.922875]
>> > [  284.922875]  *** DEADLOCK ***
>> > [  284.922875]
>> > [  284.922888] 3 locks held by cat/1197:
>> > [  284.922895]  #0:  (debugfs_srcu){......}, at: [<ffffffff811f7730>] full_proxy_read+0x0/0xb0
>> > [  284.922919]  #1:  (&p->lock){+.+.+.}, at: [<ffffffff811443e8>] seq_read+0x38/0x3b0
>> > [  284.922942]  #2:  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0055b55>] i915_ppgtt_info+0x35/0x390 [i915]
>> > [  284.922983]
>> 
>> Do we have a regressing commit reference?
>
> For an unlikely ABBA debugfs deadlock that no one reported?

Of course, that one line in the commit message would have been
sufficient for me to not ask...

BR,
Jani.


>
> 	1d2ac403ae3bfde7c50328ee0d39d3fb3d8d9823
> 	drm: Protect dev->filelist with its own mutex
>
> -Chris

-- 
Jani Nikula, Intel Open Source Technology Center
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 6+ messages
2016-08-22 11:35 [PATCH] drm/i915: Fix nesting of filelist_mutex vs struct_mutex in i915_ppgtt_info Chris Wilson
2016-08-22 11:44 ` Joonas Lahtinen
2016-08-22 12:09 ` Jani Nikula
2016-08-22 12:15   ` Chris Wilson
2016-08-22 12:28     ` Jani Nikula [this message]
2016-08-22 12:11 ` ✗ Ro.CI.BAT: warning for " Patchwork
