From: <gregkh@linuxfoundation.org>
To: notasas@gmail.com, alexander.deucher@amd.com,
	christian.koenig@amd.com, david1.zhou@amd.com,
	gregkh@linuxfoundation.org
Cc: <stable@vger.kernel.org>, <stable-commits@vger.kernel.org>
Subject: Patch "drm/amdgpu: fix sched fence slab teardown" has been added to the 4.8-stable tree
Date: Thu, 17 Nov 2016 08:15:09 +0100
Message-ID: <1479366909118130@kroah.com>


This is a note to let you know that I've just added the patch titled

    drm/amdgpu: fix sched fence slab teardown

to the 4.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     drm-amdgpu-fix-sched-fence-slab-teardown.patch
and it can be found in the queue-4.8 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From a053fb7e512c77f0742bceb578b10025256e1911 Mon Sep 17 00:00:00 2001
From: Grazvydas Ignotas <notasas@gmail.com>
Date: Sun, 23 Oct 2016 21:31:44 +0300
Subject: drm/amdgpu: fix sched fence slab teardown
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Grazvydas Ignotas <notasas@gmail.com>

commit a053fb7e512c77f0742bceb578b10025256e1911 upstream.

To free fences, call_rcu() is used, which calls amd_sched_fence_free()
after a grace period. During teardown, there is no guarantee that all
callbacks have finished, so sched_fence_slab may be destroyed before
all fences have been freed. If we are lucky, this results in some slab
warnings; if not, we get a crash in one of the RCU threads because the
callback is called after amdgpu has already been unloaded.

Fix it with an rcu_barrier().

Fixes: 189e0fb76304 ("drm/amdgpu: RCU protected amd_sched_fence_release")
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 drivers/gpu/drm/amd/scheduler/gpu_scheduler.c |    1 +
 1 file changed, 1 insertion(+)

--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
@@ -645,6 +645,7 @@ void amd_sched_fini(struct amd_gpu_sched
 {
 	if (sched->thread)
 		kthread_stop(sched->thread);
+	rcu_barrier();
 	if (atomic_dec_and_test(&sched_fence_slab_ref))
 		kmem_cache_destroy(sched_fence_slab);
 }
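
For reference, below is a minimal sketch of the teardown pattern the fix
relies on; it is not the driver code, and the my_cache/my_obj/my_teardown
names are made up for illustration. Objects are freed from a slab cache via
call_rcu(), and rcu_barrier() is used to wait for every pending callback
before the cache is destroyed.

#include <linux/slab.h>
#include <linux/rcupdate.h>

static struct kmem_cache *my_cache;

struct my_obj {
	struct rcu_head rcu;
	/* ... payload ... */
};

/* Runs after a grace period, from RCU callback context. */
static void my_obj_free_rcu(struct rcu_head *head)
{
	struct my_obj *obj = container_of(head, struct my_obj, rcu);

	kmem_cache_free(my_cache, obj);
}

static void my_obj_release(struct my_obj *obj)
{
	/* Defer the actual free until current RCU readers are done. */
	call_rcu(&obj->rcu, my_obj_free_rcu);
}

static void my_teardown(void)
{
	/*
	 * Wait for all callbacks queued with call_rcu() to complete.
	 * Without this, kmem_cache_destroy() can run while
	 * my_obj_free_rcu() is still pending, which is the race the
	 * patch above closes in amd_sched_fini().
	 */
	rcu_barrier();
	kmem_cache_destroy(my_cache);
}

Note that synchronize_rcu() would not be sufficient here: it only waits for
a grace period to elapse, while rcu_barrier() also waits for the callbacks
that were already queued with call_rcu() to finish running.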


Patches currently in stable-queue which might be from notasas@gmail.com are

queue-4.8/drm-amdgpu-fix-sched-fence-slab-teardown.patch
queue-4.8/drm-amd-fix-scheduler-fence-teardown-order-v2.patch
