AMD-GFX Archive on lore.kernel.org
From: xinhui pan <xinhui.pan@amd.com>
To: <amd-gfx@lists.freedesktop.org>
Cc: <alexander.deucher@amd.com>, <christian.koenig@amd.com>,
	<chenli@uniontech.com>, <dri-devel@lists.freedesktop.org>,
	xinhui pan <xinhui.pan@amd.com>
Subject: [PATCH v2 0/2] Fix a hang during memory pressure test
Date: Mon, 6 Sep 2021 09:12:08 +0800
Message-ID: <20210906011210.80327-1-xinhui.pan@amd.com>

Some time ago, someone reported that the system hung during a memory
pressure test. Recently I have been trying to track down the potential
deadlock in the ttm/amdgpu code.

This patchset fixes a deadlock during TTM populate.

TTM has a parameter called pages_limit; once the allocated GTT memory
reaches this limit, swapout is triggered. Because ttm_bo_swapout does
not return the correct error code, populate can hang.

The UVD IB test uses GTT, which might be insufficient under memory
pressure, so GPU recovery hangs if populate hangs.

I have written a drm test which allocates two GTT BOs, submits GFX copy
commands, and frees the BOs without waiting for their fences. Moreover,
these copy commands make the GFX ring hang, so GPU recovery is
triggered.

Here is one possible deadlock scenario:
gpu_recovery
 -> stop drm scheduler
 -> asic reset
   -> ib test
      -> tt populate (uvd ib test)
	->  ttm_bo_swapout (BO A) // this always fails, as the fence of
	BO A will never be signaled by the scheduler or HW. Deadlock hit.

The drm test patch is pasted below. To reproduce the hang:
#modprobe ttm pages_limit=65536
#amdgpu_test -s 1 -t 4
---
 tests/amdgpu/basic_tests.c | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

diff --git a/tests/amdgpu/basic_tests.c b/tests/amdgpu/basic_tests.c
index dbf02fee..f85ed340 100644
--- a/tests/amdgpu/basic_tests.c
+++ b/tests/amdgpu/basic_tests.c
@@ -65,13 +65,16 @@ static void amdgpu_direct_gma_test(void);
 static void amdgpu_command_submission_write_linear_helper(unsigned ip_type);
 static void amdgpu_command_submission_const_fill_helper(unsigned ip_type);
 static void amdgpu_command_submission_copy_linear_helper(unsigned ip_type);
-static void amdgpu_test_exec_cs_helper(amdgpu_context_handle context_handle,
+static void _amdgpu_test_exec_cs_helper(amdgpu_context_handle context_handle,
 				       unsigned ip_type,
 				       int instance, int pm4_dw, uint32_t *pm4_src,
 				       int res_cnt, amdgpu_bo_handle *resources,
 				       struct amdgpu_cs_ib_info *ib_info,
-				       struct amdgpu_cs_request *ibs_request);
+				       struct amdgpu_cs_request *ibs_request, int sync, int repeat);
  
+#define amdgpu_test_exec_cs_helper(...) \
+	_amdgpu_test_exec_cs_helper(__VA_ARGS__, 1, 1)
+
 CU_TestInfo basic_tests[] = {
 	{ "Query Info Test",  amdgpu_query_info_test },
 	{ "Userptr Test",  amdgpu_userptr_test },
@@ -1341,12 +1344,12 @@ static void amdgpu_command_submission_compute(void)
  * pm4_src, resources, ib_info, and ibs_request
  * submit command stream described in ibs_request and wait for this IB accomplished
  */
-static void amdgpu_test_exec_cs_helper(amdgpu_context_handle context_handle,
+static void _amdgpu_test_exec_cs_helper(amdgpu_context_handle context_handle,
 				       unsigned ip_type,
 				       int instance, int pm4_dw, uint32_t *pm4_src,
 				       int res_cnt, amdgpu_bo_handle *resources,
 				       struct amdgpu_cs_ib_info *ib_info,
-				       struct amdgpu_cs_request *ibs_request)
+				       struct amdgpu_cs_request *ibs_request, int sync, int repeat)
 {
 	int r;
 	uint32_t expired;
@@ -1395,12 +1398,15 @@ static void amdgpu_test_exec_cs_helper(amdgpu_context_handle context_handle,
 	CU_ASSERT_NOT_EQUAL(ibs_request, NULL);
 
 	/* submit CS */
-	r = amdgpu_cs_submit(context_handle, 0, ibs_request, 1);
+	while (repeat--)
+		r = amdgpu_cs_submit(context_handle, 0, ibs_request, 1);
 	CU_ASSERT_EQUAL(r, 0);
 
 	r = amdgpu_bo_list_destroy(ibs_request->resources);
 	CU_ASSERT_EQUAL(r, 0);
 
+	if (!sync)
+		return;
 	fence_status.ip_type = ip_type;
 	fence_status.ip_instance = 0;
 	fence_status.ring = ibs_request->ring;
@@ -1667,7 +1673,7 @@ static void amdgpu_command_submission_sdma_const_fill(void)
 
 static void amdgpu_command_submission_copy_linear_helper(unsigned ip_type)
 {
-	const int sdma_write_length = 1024;
+	const int sdma_write_length = (255) << 20;
 	const int pm4_dw = 256;
 	amdgpu_context_handle context_handle;
 	amdgpu_bo_handle bo1, bo2;
@@ -1715,8 +1721,6 @@ static void amdgpu_command_submission_copy_linear_helper(unsigned ip_type)
 							    &bo1_va_handle);
 				CU_ASSERT_EQUAL(r, 0);
 
-				/* set bo1 */
-				memset((void*)bo1_cpu, 0xaa, sdma_write_length);
 
 				/* allocate UC bo2 for sDMA use */
 				r = amdgpu_bo_alloc_and_map(device_handle,
@@ -1727,8 +1731,6 @@ static void amdgpu_command_submission_copy_linear_helper(unsigned ip_type)
 							    &bo2_va_handle);
 				CU_ASSERT_EQUAL(r, 0);
 
-				/* clear bo2 */
-				memset((void*)bo2_cpu, 0, sdma_write_length);
 
 				resources[0] = bo1;
 				resources[1] = bo2;
@@ -1785,17 +1787,11 @@ static void amdgpu_command_submission_copy_linear_helper(unsigned ip_type)
 					}
 				}
 
-				amdgpu_test_exec_cs_helper(context_handle,
+				_amdgpu_test_exec_cs_helper(context_handle,
 							   ip_type, ring_id,
 							   i, pm4,
 							   2, resources,
-							   ib_info, ibs_request);
-
-				/* verify if SDMA test result meets with expected */
-				i = 0;
-				while(i < sdma_write_length) {
-					CU_ASSERT_EQUAL(bo2_cpu[i++], 0xaa);
-				}
+							   ib_info, ibs_request, 0, 100);
 				r = amdgpu_bo_unmap_and_free(bo1, bo1_va_handle, bo1_mc,
 							     sdma_write_length);
 				CU_ASSERT_EQUAL(r, 0);
-- 


xinhui pan (2):
  drm/ttm: Fix a deadlock if the target BO is not idle during swap
  drm/amdgpu: Use VRAM domain in UVD IB test

 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 8 ++++++++
 drivers/gpu/drm/ttm/ttm_bo.c            | 6 +++---
 2 files changed, 11 insertions(+), 3 deletions(-)

-- 
2.25.1



Thread overview: 9+ messages
2021-09-06  1:12 xinhui pan [this message]
2021-09-06  1:12 ` [PATCH v2 1/2] drm/ttm: Fix a deadlock if the target BO is not idle during swap xinhui pan
2021-09-06 11:26   ` Christian König
2021-09-07  1:52     ` Pan, Xinhui
2021-09-06  1:12 ` [PATCH v2 2/2] drm/amdpgu: Use VRAM domain in UVD IB test xinhui pan
2021-09-06  9:01   ` Christian König
2021-09-06  9:04 ` [PATCH v2 0/2] Fix a hung during memory pressure test Christian König
2021-09-06 10:16   ` Pan, Xinhui
2021-09-06 11:04     ` Christian König
