* [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
@ 2018-08-22 7:52 Huang Rui
2018-08-22 7:52 ` [PATCH v5 1/5] drm/ttm: add helper structures for bulk moves on lru list Huang Rui
` (3 more replies)
0 siblings, 4 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx; +Cc: Huang Rui
The idea and proposal are originally from Christian, and I continued the work
to deliver it.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. That causes a large
number of individual BO moves to the end of the LRU and hurts performance
seriously.
Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
with the patch below:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")
However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
LRU instead of moving them one by one.
Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all of them together to the end of the LRU without dropping
the LRU lock.
While doing so we note the beginning and end of this block in the LRU list.
Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to
do, we don't move every BO one by one, but instead cut the LRU list into pieces
so that we bulk move everything to the end in just one operation.
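In rough kernel C the two paths look like this (a minimal sketch of the idea
only; the helper names are the ones introduced by the patches below, and the
surrounding variables follow patch 4):

struct ttm_lru_bulk_move bulk;

/* Something changed: walk the BOs once, move each to the LRU tail and
 * let TTM remember the first/last BO of the contiguous block we form.
 */
memset(&bulk, 0, sizeof(bulk));
spin_lock(&glob->lru_lock);
list_for_each_entry(bo_base, &vm->idle, vm_status)
        ttm_bo_move_to_lru_tail(&bo_base->bo->tbo, &bulk);
spin_unlock(&glob->lru_lock);

/* Nothing changed since then: the recorded block is still contiguous,
 * so a single cut/splice moves all of it to the tail at once.
 */
spin_lock(&glob->lru_lock);
ttm_bo_bulk_move_lru_tail(&bulk);
spin_unlock(&glob->lru_lock);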
Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
|              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
| Original     |    147.7 FPS    |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
|(don't move   |    162.1 FPS    |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |    163.1 FPS    |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
Testing with the three benchmarks above (Vulkan and OpenCL) shows a visible
improvement over the original code, and even better results than the original
with the workaround.
Changes from V1 -> V2:
- Fix missed BOs on the relocated/moved lists that should also be moved to the
end of the LRU.
Changes from V2 -> V3:
- Remove an unused parameter and use list_for_each_entry instead of the safe
variant.
Changes from V3 -> V4:
- Move amdgpu_vm_move_to_lru_tail() to after command submission; at that point
all BOs are back on the idle list.
Changes from V4 -> V5:
- Remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of
validated, and move ttm_bo_bulk_move_lru_tail() also into
amdgpu_vm_move_to_lru_tail().
Thanks,
Ray
Christian König (2):
drm/ttm: add helper structures for bulk moves on lru list
drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
Huang Rui (3):
drm/ttm: add bulk move function on LRU
drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
drm/amdgpu: move PD/PT bos on LRU again
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 10 +++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 68 +++++++++++++++++++----------
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 ++++-
drivers/gpu/drm/ttm/ttm_bo.c | 78 +++++++++++++++++++++++++++++++++-
include/drm/ttm/ttm_bo_api.h | 16 ++++++-
include/drm/ttm/ttm_bo_driver.h | 28 ++++++++++++
6 files changed, 186 insertions(+), 25 deletions(-)
--
2.7.4
* [PATCH v5 1/5] drm/ttm: add helper structures for bulk moves on lru list
2018-08-22 7:52 [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
@ 2018-08-22 7:52 ` Huang Rui
2018-08-22 7:52 ` [PATCH v5 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves Huang Rui
` (2 subsequent siblings)
3 siblings, 0 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx; +Cc: Huang Rui, Christian König
From: Christian König <christian.koenig@amd.com>
Add a bulk move position structure to store pointers to the first and last
buffer object of a block on the LRU. The list entries in between will then be
bulk moved together on the LRU list.
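For illustration, patch 2 of this series fills these positions in while BOs
are moved to the LRU tail; recording a position is nothing more than tracking
the first BO ever added and the most recently added one:

static void ttm_bo_bulk_move_set_pos(struct ttm_lru_bulk_move_pos *pos,
                                     struct ttm_buffer_object *bo)
{
        /* the first recorded BO opens the block ... */
        if (!pos->first)
                pos->first = bo;
        /* ... and the most recently moved BO always closes it */
        pos->last = bo;
}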
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
---
include/drm/ttm/ttm_bo_driver.h | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 3234cc3..e4fee8e 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -491,6 +491,34 @@ struct ttm_bo_device {
};
/**
+ * struct ttm_lru_bulk_move_pos
+ *
+ * @first: first BO in the bulk move range
+ * @last: last BO in the bulk move range
+ *
+ * Positions for a lru bulk move.
+ */
+struct ttm_lru_bulk_move_pos {
+ struct ttm_buffer_object *first;
+ struct ttm_buffer_object *last;
+};
+
+/**
+ * struct ttm_lru_bulk_move
+ *
+ * @tt: first/last lru entry for BOs in the TT domain
+ * @vram: first/last lru entry for BOs in the VRAM domain
+ * @swap: first/last lru entry for BOs on the swap list
+ *
+ * Helper structure for bulk moves on the LRU list.
+ */
+struct ttm_lru_bulk_move {
+ struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
+ struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
+ struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
+};
+
+/**
* ttm_flag_masked
*
* @old: Pointer to the result and original value.
--
2.7.4
* [PATCH v5 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
2018-08-22 7:52 [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
2018-08-22 7:52 ` [PATCH v5 1/5] drm/ttm: add helper structures for bulk moves on lru list Huang Rui
@ 2018-08-22 7:52 ` Huang Rui
[not found] ` <1534924375-5837-1-git-send-email-ray.huang@amd.com>
2018-08-22 8:24 ` [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Christian König
3 siblings, 0 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx; +Cc: Huang Rui, Christian König
From: Christian König <christian.koenig@amd.com>
When moving a BO to the end of the LRU, optionally remember its position: make
sure every BO moved this way ends up between "first" and "last", so that the
whole block can later be bulk moved together.
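Callers that don't need this keep passing NULL and get the old behaviour; a
caller that wants a later bulk move passes its tracker, e.g. (usage sketch
based on patch 4 of this series):

spin_lock(&glob->lru_lock);
ttm_bo_move_to_lru_tail(&bo->tbo, NULL);                /* no tracking, as before */
ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);  /* records first/last */
spin_unlock(&glob->lru_lock);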
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 ++++----
drivers/gpu/drm/ttm/ttm_bo.c | 26 +++++++++++++++++++++++++-
include/drm/ttm/ttm_bo_api.h | 6 +++++-
3 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 015613b..9c84770 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -297,9 +297,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
break;
spin_lock(&glob->lru_lock);
- ttm_bo_move_to_lru_tail(&bo->tbo);
+ ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
if (bo->shadow)
- ttm_bo_move_to_lru_tail(&bo->shadow->tbo);
+ ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
spin_unlock(&glob->lru_lock);
}
@@ -319,9 +319,9 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
if (!bo->parent)
continue;
- ttm_bo_move_to_lru_tail(&bo->tbo);
+ ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
if (bo->shadow)
- ttm_bo_move_to_lru_tail(&bo->shadow->tbo);
+ ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
}
spin_unlock(&glob->lru_lock);
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7c48472..7117b6b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -214,12 +214,36 @@ void ttm_bo_del_sub_from_lru(struct ttm_buffer_object *bo)
}
EXPORT_SYMBOL(ttm_bo_del_sub_from_lru);
-void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo)
+static void ttm_bo_bulk_move_set_pos(struct ttm_lru_bulk_move_pos *pos,
+ struct ttm_buffer_object *bo)
+{
+ if (!pos->first)
+ pos->first = bo;
+ pos->last = bo;
+}
+
+void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+ struct ttm_lru_bulk_move *bulk)
{
reservation_object_assert_held(bo->resv);
ttm_bo_del_from_lru(bo);
ttm_bo_add_to_lru(bo);
+
+ if (bulk && !(bo->mem.placement & TTM_PL_FLAG_NO_EVICT)) {
+ switch (bo->mem.mem_type) {
+ case TTM_PL_TT:
+ ttm_bo_bulk_move_set_pos(&bulk->tt[bo->priority], bo);
+ break;
+
+ case TTM_PL_VRAM:
+ ttm_bo_bulk_move_set_pos(&bulk->vram[bo->priority], bo);
+ break;
+ }
+ if (bo->ttm && !(bo->ttm->page_flags &
+ (TTM_PAGE_FLAG_SG | TTM_PAGE_FLAG_SWAPPED)))
+ ttm_bo_bulk_move_set_pos(&bulk->swap[bo->priority], bo);
+ }
}
EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index a01ba20..0d4eb81 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -51,6 +51,8 @@ struct ttm_placement;
struct ttm_place;
+struct ttm_lru_bulk_move;
+
/**
* struct ttm_bus_placement
*
@@ -405,12 +407,14 @@ void ttm_bo_del_from_lru(struct ttm_buffer_object *bo);
* ttm_bo_move_to_lru_tail
*
* @bo: The buffer object.
+ * @bulk: optional bulk move structure to remember BO positions
*
* Move this BO to the tail of all lru lists used to lookup and reserve an
* object. This function must be called with struct ttm_bo_global::lru_lock
* held, and is used to make a BO less likely to be considered for eviction.
*/
-void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo);
+void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
+ struct ttm_lru_bulk_move *bulk);
/**
* ttm_bo_lock_delayed_workqueue
--
2.7.4
* [PATCH v5 3/5] drm/ttm: add bulk move function on LRU
[not found] ` <1534924375-5837-1-git-send-email-ray.huang@amd.com>
@ 2018-08-22 7:52 ` Huang Rui
2018-08-22 7:52 ` [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5) Huang Rui
2018-08-22 7:52 ` [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again Huang Rui
2 siblings, 0 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx
Cc: Huang Rui, Christian König
This function allows us to bulk move a group of BOs to the tail of their LRU.
The positions of the BOs in the group are stored in the (first, last)
bulk_move_pos structure.
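Per (domain, priority) LRU list the bulk move is just two cuts and two
splices. A worked example (illustration only, simplified from the helper
below) with an LRU of A B C D E and a recorded block of first = B, last = D:

struct list_head entries, before;

/* lru: A B C D E */
list_cut_position(&entries, lru, &pos->last->lru);
/* entries: A B C D,  lru: E */
list_cut_position(&before, &entries, pos->first->lru.prev);
/* before: A,  entries: B C D */
list_splice(&before, lru);
/* lru: A E */
list_splice_tail(&entries, lru);
/* lru: A E B C D -- the whole block is at the tail after one operation */

Note that the relative order of the BOs inside the block never changes, which
is why the recorded positions stay valid between bulk moves.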
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
---
drivers/gpu/drm/ttm/ttm_bo.c | 52 ++++++++++++++++++++++++++++++++++++++++++++
include/drm/ttm/ttm_bo_api.h | 10 +++++++++
2 files changed, 62 insertions(+)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 7117b6b..39d9d55 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -247,6 +247,58 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
}
EXPORT_SYMBOL(ttm_bo_move_to_lru_tail);
+static void ttm_bo_bulk_move_helper(struct ttm_lru_bulk_move_pos *pos,
+ struct list_head *lru, bool is_swap)
+{
+ struct list_head entries, before;
+ struct list_head *list1, *list2;
+
+ list1 = is_swap ? &pos->last->swap : &pos->last->lru;
+ list2 = is_swap ? pos->first->swap.prev : pos->first->lru.prev;
+
+ list_cut_position(&entries, lru, list1);
+ list_cut_position(&before, &entries, list2);
+ list_splice(&before, lru);
+ list_splice_tail(&entries, lru);
+}
+
+void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk)
+{
+ unsigned i;
+
+ for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+ struct ttm_mem_type_manager *man;
+
+ if (!bulk->tt[i].first)
+ continue;
+
+ man = &bulk->tt[i].first->bdev->man[TTM_PL_TT];
+ ttm_bo_bulk_move_helper(&bulk->tt[i], &man->lru[i], false);
+ }
+
+ for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+ struct ttm_mem_type_manager *man;
+
+ if (!bulk->vram[i].first)
+ continue;
+
+ man = &bulk->vram[i].first->bdev->man[TTM_PL_VRAM];
+ ttm_bo_bulk_move_helper(&bulk->vram[i], &man->lru[i], false);
+ }
+
+ for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) {
+ struct ttm_lru_bulk_move_pos *pos = &bulk->swap[i];
+ struct list_head *lru;
+
+ if (!pos->first)
+ continue;
+
+ lru = &pos->first->bdev->glob->swap_lru[i];
+ ttm_bo_bulk_move_helper(&bulk->swap[i], lru, true);
+ }
+}
+EXPORT_SYMBOL(ttm_bo_bulk_move_lru_tail);
+
static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
struct ttm_mem_reg *mem, bool evict,
struct ttm_operation_ctx *ctx)
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 0d4eb81..8c19470 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -417,6 +417,16 @@ void ttm_bo_move_to_lru_tail(struct ttm_buffer_object *bo,
struct ttm_lru_bulk_move *bulk);
/**
+ * ttm_bo_bulk_move_lru_tail
+ *
+ * @bulk: bulk move structure
+ *
+ * Bulk move BOs to the LRU tail, only valid to use when driver makes sure that
+ * BO order never changes. Should be called with ttm_bo_global::lru_lock held.
+ */
+void ttm_bo_bulk_move_lru_tail(struct ttm_lru_bulk_move *bulk);
+
+/**
* ttm_bo_lock_delayed_workqueue
*
* Prevent the delayed workqueue from running.
--
2.7.4
* [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
[not found] ` <1534924375-5837-1-git-send-email-ray.huang@amd.com>
2018-08-22 7:52 ` [PATCH v5 3/5] drm/ttm: add bulk move function on LRU Huang Rui
@ 2018-08-22 7:52 ` Huang Rui
[not found] ` <1534924375-5837-5-git-send-email-ray.huang@amd.com>
2018-08-22 7:52 ` [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again Huang Rui
2 siblings, 1 reply; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx
Cc: Huang Rui, Christian König
I continued the work on bulk moving based on the proposal by Christian.
Background:
The amdgpu driver moves all PD/PT and per-VM BOs onto the idle list, and then
moves each of them to the end of the LRU list one by one. That causes a large
number of individual BO moves to the end of the LRU and hurts performance
seriously.
Christian then provided a workaround that avoids moving PD/PT BOs on the LRU
with the patch below:
Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
validating VM PTs")
However, the proper solution is to bulk move all PD/PT and per-VM BOs on the
LRU instead of moving them one by one.
Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
validated, we move all of them together to the end of the LRU without dropping
the LRU lock.
While doing so we note the beginning and end of this block in the LRU list.
Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to
do, we don't move every BO one by one, but instead cut the LRU list into pieces
so that we bulk move everything to the end in just one operation.
Test data:
+--------------+-----------------+-----------+---------------------------------------+
|              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
|              |Principle(Vulkan)|           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
|              |                 |           |0.319 ms(1K) 0.314 ms(2K) 0.308 ms(4K) |
| Original     |    147.7 FPS    |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
| Original + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
|(don't move   |    162.1 FPS    |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
|PT BOs on LRU)|                 |           |                                       |
+--------------+-----------------+-----------+---------------------------------------+
| Bulk move    |    163.1 FPS    |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
|              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
+--------------+-----------------+-----------+---------------------------------------+
Testing with the three benchmarks above (Vulkan and OpenCL) shows a visible
improvement over the original code, and even better results than the original
with the workaround.
v2: move all BOs, including those on the idle, relocated, and moved lists, to
the end of the LRU and put them together.
v3: remove an unused parameter and use list_for_each_entry instead of the safe
variant.
v4: move amdgpu_vm_move_to_lru_tail() to after command submission; at that
point all BOs are back on the idle list.
v5: remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instead of
validated, and move ttm_bo_bulk_move_lru_tail() also into
amdgpu_vm_move_to_lru_tail().
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 10 ++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 66 +++++++++++++++++++++++-----------
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 +++++-
3 files changed, 65 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index 502b94f..4efdbd2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
return 0;
}
+static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
+ struct amdgpu_cs_parser *p)
+{
+ struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
+ struct amdgpu_vm *vm = &fpriv->vm;
+
+ amdgpu_vm_move_to_lru_tail(adev, vm);
+}
+
int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
{
struct amdgpu_device *adev = dev->dev_private;
@@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
r = amdgpu_cs_submit(&parser, cs);
+ amdgpu_cs_vm_move_on_lru(adev, &parser);
out:
amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 9c84770..db1f28a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
}
/**
+ * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
+ *
+ * @adev: amdgpu device pointer
+ * @vm: vm providing the BOs
+ *
+ * Move all BOs to the end of LRU and remember their positions to put them
+ * together.
+ */
+void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm)
+{
+ struct ttm_bo_global *glob = adev->mman.bdev.glob;
+ struct amdgpu_vm_bo_base *bo_base;
+
+ if (vm->bulk_moveable) {
+ spin_lock(&glob->lru_lock);
+ ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
+ spin_unlock(&glob->lru_lock);
+ return;
+ }
+
+ memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
+
+ spin_lock(&glob->lru_lock);
+ list_for_each_entry(bo_base, &vm->idle, vm_status) {
+ struct amdgpu_bo *bo = bo_base->bo;
+
+ if (!bo->parent)
+ continue;
+
+ ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
+ if (bo->shadow)
+ ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
+ &vm->lru_bulk_move);
+ }
+ spin_unlock(&glob->lru_lock);
+
+ vm->bulk_moveable = true;
+}
+
+/**
* amdgpu_vm_validate_pt_bos - validate the page table BOs
*
* @adev: amdgpu device pointer
@@ -284,10 +325,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
int (*validate)(void *p, struct amdgpu_bo *bo),
void *param)
{
- struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct amdgpu_vm_bo_base *bo_base, *tmp;
int r = 0;
+ vm->bulk_moveable &= list_empty(&vm->evicted);
+
list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
struct amdgpu_bo *bo = bo_base->bo;
@@ -295,12 +337,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
r = validate(param, bo);
if (r)
break;
-
- spin_lock(&glob->lru_lock);
- ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
- if (bo->shadow)
- ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
- spin_unlock(&glob->lru_lock);
}
if (bo->tbo.type != ttm_bo_type_kernel) {
@@ -312,20 +348,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
}
}
- spin_lock(&glob->lru_lock);
- list_for_each_entry(bo_base, &vm->idle, vm_status) {
- struct amdgpu_bo *bo = bo_base->bo;
-
- if (!bo->parent)
- continue;
-
- ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
- if (bo->shadow)
- ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
- }
- spin_unlock(&glob->lru_lock);
-
- return r;
+ return 0;
}
/**
@@ -2596,6 +2619,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
return r;
vm->pte_support_ats = false;
+ vm->bulk_moveable = true;
if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE) {
vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 67a15d4..bbdde40 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -29,6 +29,7 @@
#include <linux/rbtree.h>
#include <drm/gpu_scheduler.h>
#include <drm/drm_file.h>
+#include <drm/ttm/ttm_bo_driver.h>
#include "amdgpu_sync.h"
#include "amdgpu_ring.h"
@@ -226,6 +227,11 @@ struct amdgpu_vm {
/* Some basic info about the task */
struct amdgpu_task_info task_info;
+
+ /* Store positions of group of BOs */
+ struct ttm_lru_bulk_move lru_bulk_move;
+ /* mark whether can do the bulk move */
+ bool bulk_moveable;
};
struct amdgpu_vm_manager {
@@ -330,8 +336,11 @@ bool amdgpu_vm_need_pipeline_sync(struct amdgpu_ring *ring,
void amdgpu_vm_check_compute_bug(struct amdgpu_device *adev);
void amdgpu_vm_get_task_info(struct amdgpu_device *adev, unsigned int pasid,
- struct amdgpu_task_info *task_info);
+ struct amdgpu_task_info *task_info);
void amdgpu_vm_set_task_info(struct amdgpu_vm *vm);
+void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
+ struct amdgpu_vm *vm);
+
#endif
--
2.7.4
* [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <1534924375-5837-1-git-send-email-ray.huang@amd.com>
2018-08-22 7:52 ` [PATCH v5 3/5] drm/ttm: add bulk move function on LRU Huang Rui
2018-08-22 7:52 ` [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5) Huang Rui
@ 2018-08-22 7:52 ` Huang Rui
[not found] ` <1534924375-5837-6-git-send-email-ray.huang@amd.com>
2 siblings, 1 reply; 23+ messages in thread
From: Huang Rui @ 2018-08-22 7:52 UTC
To: dri-devel, amd-gfx
Cc: Huang Rui
The new bulk moving functionality is ready and the overhead of moving PD/PT
BOs onto the LRU is fixed, so move them on the LRU again.
Signed-off-by: Huang Rui <ray.huang@amd.com>
Tested-by: Mike Lothian <mike@fireburn.co.uk>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
Acked-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index db1f28a..d195a3d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
struct amdgpu_vm_bo_base,
vm_status);
bo_base->moved = false;
- list_del_init(&bo_base->vm_status);
+ list_move(&bo_base->vm_status, &vm->idle);
bo = bo_base->bo->parent;
if (!bo)
--
2.7.4
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
[not found] ` <1534924375-5837-5-git-send-email-ray.huang@amd.com>
@ 2018-08-22 8:07 ` Zhang, Jerry (Junwei)
[not found] ` <5B7D19B8.2060307@amd.com>
0 siblings, 1 reply; 23+ messages in thread
From: Zhang, Jerry (Junwei) @ 2018-08-22 8:07 UTC
To: Huang Rui, dri-devel, amd-gfx
Cc: Christian König
On 08/22/2018 03:52 PM, Huang Rui wrote:
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index 502b94f..4efdbd2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> return 0;
> }
>
> +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> + struct amdgpu_cs_parser *p)
> +{
> + struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> + struct amdgpu_vm *vm = &fpriv->vm;
> +
> + amdgpu_vm_move_to_lru_tail(adev, vm);
> +}
> +
> int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> {
> struct amdgpu_device *adev = dev->dev_private;
> @@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>
> r = amdgpu_cs_submit(&parser, cs);
>
> + amdgpu_cs_vm_move_on_lru(adev, &parser);
Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
> out:
> amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
> return r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 9c84770..db1f28a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
> }
>
> /**
> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> + *
> + * @adev: amdgpu device pointer
> + * @vm: vm providing the BOs
> + *
> + * Move all BOs to the end of LRU and remember their positions to put them
> + * together.
> + */
> +void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
> + struct amdgpu_vm *vm)
> +{
> + struct ttm_bo_global *glob = adev->mman.bdev.glob;
> + struct amdgpu_vm_bo_base *bo_base;
> +
> + if (vm->bulk_moveable) {
> + spin_lock(&glob->lru_lock);
> + ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> + spin_unlock(&glob->lru_lock);
> + return;
> + }
Question:
Why do we handle the bulk move at the next command submission instead of in the current CS process?
> +
> + memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
> +
> + spin_lock(&glob->lru_lock);
> + list_for_each_entry(bo_base, &vm->idle, vm_status) {
> + struct amdgpu_bo *bo = bo_base->bo;
> +
> + if (!bo->parent)
> + continue;
> +
> + ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> + if (bo->shadow)
> + ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> + &vm->lru_bulk_move);
> + }
> + spin_unlock(&glob->lru_lock);
> +
> + vm->bulk_moveable = true;
> +}
> +
> +/**
> * amdgpu_vm_validate_pt_bos - validate the page table BOs
> *
> * @adev: amdgpu device pointer
> @@ -284,10 +325,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> int (*validate)(void *p, struct amdgpu_bo *bo),
> void *param)
> {
> - struct ttm_bo_global *glob = adev->mman.bdev.glob;
> struct amdgpu_vm_bo_base *bo_base, *tmp;
> int r = 0;
>
> + vm->bulk_moveable &= list_empty(&vm->evicted);
> +
> list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> struct amdgpu_bo *bo = bo_base->bo;
>
> @@ -295,12 +337,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> r = validate(param, bo);
> if (r)
> break;
> -
> - spin_lock(&glob->lru_lock);
> - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> - if (bo->shadow)
> - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> - spin_unlock(&glob->lru_lock);
> }
>
> if (bo->tbo.type != ttm_bo_type_kernel) {
> @@ -312,20 +348,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> }
> }
>
> - spin_lock(&glob->lru_lock);
> - list_for_each_entry(bo_base, &vm->idle, vm_status) {
> - struct amdgpu_bo *bo = bo_base->bo;
> -
> - if (!bo->parent)
> - continue;
> -
> - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> - if (bo->shadow)
> - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> - }
> - spin_unlock(&glob->lru_lock);
> -
> - return r;
> + return 0;
When validate() fails we break out of the loop; shouldn't we still return r instead of 0?
> }
>
> /**
> @@ -2596,6 +2619,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> return r;
>
> vm->pte_support_ats = false;
> + vm->bulk_moveable = true;
>
> if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE) {
> vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 67a15d4..bbdde40 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -29,6 +29,7 @@
> #include <linux/rbtree.h>
> #include <drm/gpu_scheduler.h>
> #include <drm/drm_file.h>
> +#include <drm/ttm/ttm_bo_driver.h>
>
> #include "amdgpu_sync.h"
> #include "amdgpu_ring.h"
> @@ -226,6 +227,11 @@ struct amdgpu_vm {
>
> /* Some basic info about the task */
> struct amdgpu_task_info task_info;
> +
> + /* Store positions of group of BOs */
> + struct ttm_lru_bulk_move lru_bulk_move;
> + /* mark whether can do the bulk move */
> + bool bulk_moveable;
> };
>
> struct amdgpu_vm_manager {
> @@ -330,8 +336,11 @@ bool amdgpu_vm_need_pipeline_sync(struct amdgpu_ring *ring,
> void amdgpu_vm_check_compute_bug(struct amdgpu_device *adev);
>
> void amdgpu_vm_get_task_info(struct amdgpu_device *adev, unsigned int pasid,
> - struct amdgpu_task_info *task_info);
> + struct amdgpu_task_info *task_info);
This change looks unrelated to the bulk move.
Regards,
Jerry
>
> void amdgpu_vm_set_task_info(struct amdgpu_vm *vm);
>
> +void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
> + struct amdgpu_vm *vm);
> +
> #endif
>
* Re: [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
2018-08-22 7:52 [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
` (2 preceding siblings ...)
[not found] ` <1534924375-5837-1-git-send-email-ray.huang@amd.com>
@ 2018-08-22 8:24 ` Christian König
[not found] ` <51ebd226-3290-5ea5-e272-0d566a119aca-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
3 siblings, 1 reply; 23+ messages in thread
From: Christian König @ 2018-08-22 8:24 UTC
To: Huang Rui, dri-devel, amd-gfx
Please commit patches #1, #2 and #3; it doesn't make much sense to send
them out yet again.
Jerry's comments on patch #4 sound valid to me as well, but with those
minor issues fixed/commented on I think we can commit it.
Thanks for taking care of this,
Christian.
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
[not found] ` <5B7D19B8.2060307@amd.com>
@ 2018-08-22 8:33 ` Huang Rui
2018-08-22 8:38 ` Huang Rui
2018-08-22 8:51 ` Zhang, Jerry (Junwei)
0 siblings, 2 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 8:33 UTC
To: Zhang, Jerry
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
Koenig, Christian
On Wed, Aug 22, 2018 at 04:07:20PM +0800, Zhang, Jerry wrote:
> On 08/22/2018 03:52 PM, Huang Rui wrote:
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index 502b94f..4efdbd2 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> > return 0;
> > }
> >
> > +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> > + struct amdgpu_cs_parser *p)
> > +{
> > + struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> > + struct amdgpu_vm *vm = &fpriv->vm;
> > +
> > + amdgpu_vm_move_to_lru_tail(adev, vm);
> > +}
> > +
> > int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> > {
> > struct amdgpu_device *adev = dev->dev_private;
> > @@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> >
> > r = amdgpu_cs_submit(&parser, cs);
> >
> > + amdgpu_cs_vm_move_on_lru(adev, &parser);
>
> > Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
Both ok, here, I just
>
> > out:
> > amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
> > return r;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index 9c84770..db1f28a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
> > }
> >
> > /**
> > + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
> > + *
> > + * @adev: amdgpu device pointer
> > + * @vm: vm providing the BOs
> > + *
> > + * Move all BOs to the end of LRU and remember their positions to put them
> > + * together.
> > + */
> > +void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
> > + struct amdgpu_vm *vm)
> > +{
> > + struct ttm_bo_global *glob = adev->mman.bdev.glob;
> > + struct amdgpu_vm_bo_base *bo_base;
> > +
> > + if (vm->bulk_moveable) {
> > + spin_lock(&glob->lru_lock);
> > + ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
> > + spin_unlock(&glob->lru_lock);
> > + return;
> > + }
>
> Question:
> Why do we handle the bulk move at the next command submission instead of in the current CS process?
The bulk move moves all PT and per-VM BOs to the end of the LRU. After the CS
is done, all the BOs move back onto the idle list from the moved and relocated
lists. Only BOs from the evicted list get validated, and for those we remember
and store the BO positions.
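So the per-submission ordering is roughly (a sketch; the call names are the
ones from this series):

/*
 * amdgpu_cs_ioctl()
 *   amdgpu_vm_validate_pt_bos()  -- validates evicted BOs, clears
 *                                   bulk_moveable if anything was evicted
 *   amdgpu_cs_submit()           -- afterwards all BOs are back on vm->idle
 *   amdgpu_vm_move_to_lru_tail() -- bulk_moveable set: one bulk splice;
 *                                   otherwise: walk vm->idle and re-record
 *                                   the first/last positions
 */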
>
> > +
> > + memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
> > +
> > + spin_lock(&glob->lru_lock);
> > + list_for_each_entry(bo_base, &vm->idle, vm_status) {
> > + struct amdgpu_bo *bo = bo_base->bo;
> > +
> > + if (!bo->parent)
> > + continue;
> > +
> > + ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
> > + if (bo->shadow)
> > + ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
> > + &vm->lru_bulk_move);
> > + }
> > + spin_unlock(&glob->lru_lock);
> > +
> > + vm->bulk_moveable = true;
> > +}
> > +
> > +/**
> > * amdgpu_vm_validate_pt_bos - validate the page table BOs
> > *
> > * @adev: amdgpu device pointer
> > @@ -284,10 +325,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> > int (*validate)(void *p, struct amdgpu_bo *bo),
> > void *param)
> > {
> > - struct ttm_bo_global *glob = adev->mman.bdev.glob;
> > struct amdgpu_vm_bo_base *bo_base, *tmp;
> > int r = 0;
> >
> > + vm->bulk_moveable &= list_empty(&vm->evicted);
> > +
> > list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
> > struct amdgpu_bo *bo = bo_base->bo;
> >
> > @@ -295,12 +337,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> > r = validate(param, bo);
> > if (r)
> > break;
> > -
> > - spin_lock(&glob->lru_lock);
> > - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> > - if (bo->shadow)
> > - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> > - spin_unlock(&glob->lru_lock);
> > }
> >
> > if (bo->tbo.type != ttm_bo_type_kernel) {
> > @@ -312,20 +348,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> > }
> > }
> >
> > - spin_lock(&glob->lru_lock);
> > - list_for_each_entry(bo_base, &vm->idle, vm_status) {
> > - struct amdgpu_bo *bo = bo_base->bo;
> > -
> > - if (!bo->parent)
> > - continue;
> > -
> > - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
> > - if (bo->shadow)
> > - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
> > - }
> > - spin_unlock(&glob->lru_lock);
> > -
> > - return r;
> > + return 0;
>
> When validate() fails we break out of the loop; shouldn't we still return r instead of 0?
Nice find, this is my typo that I forgot to change back.
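For reference, the tail of amdgpu_vm_validate_pt_bos() should then presumably
look like this (a simplified sketch of the intended fix, not a posted patch):

	list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
		r = validate(param, bo_base->bo);
		if (r)
			break;
		/* ... */
	}

	/* keep a validate() error instead of unconditionally returning 0 */
	return r;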
>
> > }
> >
> > /**
> > @@ -2596,6 +2619,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> > return r;
> >
> > vm->pte_support_ats = false;
> > + vm->bulk_moveable = true;
> >
> > if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE) {
> > vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > index 67a15d4..bbdde40 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> > @@ -29,6 +29,7 @@
> > #include <linux/rbtree.h>
> > #include <drm/gpu_scheduler.h>
> > #include <drm/drm_file.h>
> > +#include <drm/ttm/ttm_bo_driver.h>
> >
> > #include "amdgpu_sync.h"
> > #include "amdgpu_ring.h"
> > @@ -226,6 +227,11 @@ struct amdgpu_vm {
> >
> > /* Some basic info about the task */
> > struct amdgpu_task_info task_info;
> > +
> > + /* Store positions of group of BOs */
> > + struct ttm_lru_bulk_move lru_bulk_move;
> > + /* mark whether can do the bulk move */
> > + bool bulk_moveable;
> > };
> >
> > struct amdgpu_vm_manager {
> > @@ -330,8 +336,11 @@ bool amdgpu_vm_need_pipeline_sync(struct amdgpu_ring *ring,
> > void amdgpu_vm_check_compute_bug(struct amdgpu_device *adev);
> >
> > void amdgpu_vm_get_task_info(struct amdgpu_device *adev, unsigned int pasid,
> > - struct amdgpu_task_info *task_info);
> > + struct amdgpu_task_info *task_info);
>
> This change looks unrelated to the bulk move.
>
Yes, that is a code style cleanup to align the parameter with the first one after the "(".
Thanks,
Ray
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
2018-08-22 8:33 ` Huang Rui
@ 2018-08-22 8:38 ` Huang Rui
2018-08-22 8:45 ` Zhang, Jerry (Junwei)
2018-08-22 8:51 ` Zhang, Jerry (Junwei)
1 sibling, 1 reply; 23+ messages in thread
From: Huang Rui @ 2018-08-22 8:38 UTC
To: Zhang, Jerry
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
Koenig, Christian
On Wed, Aug 22, 2018 at 04:33:30PM +0800, Huang Rui wrote:
> On Wed, Aug 22, 2018 at 04:07:20PM +0800, Zhang, Jerry wrote:
> > On 08/22/2018 03:52 PM, Huang Rui wrote:
> > > @@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> > >
> > > r = amdgpu_cs_submit(&parser, cs);
> > >
> > > + amdgpu_cs_vm_move_on_lru(adev, &parser);
> >
> > Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
>
> Both ok, here, I just
>
I missed this comment. My intention was to keep the vm-related handling inside
the vm functions. Anyway, either way is OK for me.
Thanks,
Ray
* Re: [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
[not found] ` <51ebd226-3290-5ea5-e272-0d566a119aca-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2018-08-22 8:43 ` Huang Rui
2018-09-02 8:12 ` [PATCH v5 0/5] drm/ttm, amdgpu: " Mike Lothian
0 siblings, 1 reply; 23+ messages in thread
From: Huang Rui @ 2018-08-22 8:43 UTC
To: Koenig, Christian
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
On Wed, Aug 22, 2018 at 04:24:02PM +0800, Christian König wrote:
> Please commit patches #1, #2 and #3; it doesn't make much sense to send
> them out yet again.
>
> Jerry's comments on patch #4 sound valid to me as well, but with those
> minor issues fixed/commented on I think we can commit it.
>
> Thanks for taking care of this,
> Christian.
OK. Thanks for your time.
Thanks,
Ray
>
> Am 22.08.2018 um 09:52 schrieb Huang Rui:
> > [snip cover letter]
>
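
Patch #1's bookkeeping (remembering the first and last BO of the moved block on
each LRU) maps onto a small pair of helper structures. A rough sketch; the
names follow the series, but the details are assumed rather than quoted from
the patches:

	/* start/end of a contiguous block of BOs on one LRU list */
	struct ttm_lru_bulk_move_pos {
		struct ttm_buffer_object *first;
		struct ttm_buffer_object *last;
	};

	/* one position per LRU the block can live on, per priority */
	struct ttm_lru_bulk_move {
		struct ttm_lru_bulk_move_pos tt[TTM_MAX_BO_PRIORITY];
		struct ttm_lru_bulk_move_pos vram[TTM_MAX_BO_PRIORITY];
		struct ttm_lru_bulk_move_pos swap[TTM_MAX_BO_PRIORITY];
	};

While validating, each ttm_bo_move_to_lru_tail(bo, bulk) call records the BO as
the new last entry of its block (and as first if the block was empty); that
saved pair is all ttm_bo_bulk_move_lru_tail() needs later.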
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
2018-08-22 8:38 ` Huang Rui
@ 2018-08-22 8:45 ` Zhang, Jerry (Junwei)
[not found] ` <5B7D22A7.4090306-5C7GfCeVMHo@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Zhang, Jerry (Junwei) @ 2018-08-22 8:45 UTC (permalink / raw)
To: Huang Rui
Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
Koenig, Christian
On 08/22/2018 04:38 PM, Huang Rui wrote:
> On Wed, Aug 22, 2018 at 04:33:30PM +0800, Huang Rui wrote:
>> On Wed, Aug 22, 2018 at 04:07:20PM +0800, Zhang, Jerry wrote:
>>> On 08/22/2018 03:52 PM, Huang Rui wrote:
>>>> [snip commit message and diffstat]
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> index 502b94f..4efdbd2 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>> @@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>>>> return 0;
>>>> }
>>>>
>>>> +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
>>>> + struct amdgpu_cs_parser *p)
>>>> +{
>>>> + struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
>>>> + struct amdgpu_vm *vm = &fpriv->vm;
>>>> +
>>>> + amdgpu_vm_move_to_lru_tail(adev, vm);
>>>> +}
>>>> +
>>>> int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>>>> {
>>>> struct amdgpu_device *adev = dev->dev_private;
>>>> @@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>>>>
>>>> r = amdgpu_cs_submit(&parser, cs);
>>>>
>>>> + amdgpu_cs_vm_move_on_lru(adev, &parser);
>>>
>>> Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
>>
>> Both ok, here, I just
>>
>
> Missed this comment. My intention was to keep the vm member handling aligned
> with the vm functions. Anyway, both are OK for me.
Thanks for the explanation, got it.
BTW, personally I'd prefer to call the vm function directly, especially in kernel space.
Regards,
Jerry
>
> Thanks,
> Ray
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
[not found] ` <5B7D22A7.4090306-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-22 8:49 ` Huang Rui
0 siblings, 0 replies; 23+ messages in thread
From: Huang Rui @ 2018-08-22 8:49 UTC (permalink / raw)
To: Zhang, Jerry (Junwei)
Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
Koenig, Christian
On Wed, Aug 22, 2018 at 04:45:27PM +0800, Zhang, Jerry (Junwei) wrote:
> On 08/22/2018 04:38 PM, Huang Rui wrote:
> >On Wed, Aug 22, 2018 at 04:33:30PM +0800, Huang Rui wrote:
> >>On Wed, Aug 22, 2018 at 04:07:20PM +0800, Zhang, Jerry wrote:
> >>>On 08/22/2018 03:52 PM, Huang Rui wrote:
> >>>>[snip commit message and diffstat]
> >>>>
> >>>>diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>>>index 502b94f..4efdbd2 100644
> >>>>--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>>>+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> >>>>@@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
> >>>> return 0;
> >>>> }
> >>>>
> >>>>+static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
> >>>>+ struct amdgpu_cs_parser *p)
> >>>>+{
> >>>>+ struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> >>>>+ struct amdgpu_vm *vm = &fpriv->vm;
> >>>>+
> >>>>+ amdgpu_vm_move_to_lru_tail(adev, vm);
> >>>>+}
> >>>>+
> >>>> int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> >>>> {
> >>>> struct amdgpu_device *adev = dev->dev_private;
> >>>>@@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> >>>>
> >>>> r = amdgpu_cs_submit(&parser, cs);
> >>>>
> >>>>+ amdgpu_cs_vm_move_on_lru(adev, &parser);
> >>>
> >>>Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
> >>
> >>Both ok, here, I just
> >>
> >
> >Missed this comment. My intention was to keep the vm member handling aligned
> >with the vm functions. Anyway, both are OK for me.
>
> Thanks for the explanation, got it.
> BTW, personally I'd prefer to call the vm function directly, especially in kernel space.
>
Nevermind. :-)
I will call amdgpu_vm_move_to_lru_tail() directly in the next version, as you
suggested.
Thanks,
Ray
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* Re: [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
2018-08-22 8:33 ` Huang Rui
2018-08-22 8:38 ` Huang Rui
@ 2018-08-22 8:51 ` Zhang, Jerry (Junwei)
1 sibling, 0 replies; 23+ messages in thread
From: Zhang, Jerry (Junwei) @ 2018-08-22 8:51 UTC (permalink / raw)
To: Huang Rui
Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
Koenig, Christian
On 08/22/2018 04:33 PM, Huang Rui wrote:
> On Wed, Aug 22, 2018 at 04:07:20PM +0800, Zhang, Jerry wrote:
>> On 08/22/2018 03:52 PM, Huang Rui wrote:
>>> [snip commit message and diffstat]
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> index 502b94f..4efdbd2 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>> @@ -1260,6 +1260,15 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>>> return 0;
>>> }
>>>
>>> +static void amdgpu_cs_vm_move_on_lru(struct amdgpu_device *adev,
>>> + struct amdgpu_cs_parser *p)
>>> +{
>>> + struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
>>> + struct amdgpu_vm *vm = &fpriv->vm;
>>> +
>>> + amdgpu_vm_move_to_lru_tail(adev, vm);
>>> +}
>>> +
>>> int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>>> {
>>> struct amdgpu_device *adev = dev->dev_private;
>>> @@ -1310,6 +1319,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
>>>
>>> r = amdgpu_cs_submit(&parser, cs);
>>>
>>> + amdgpu_cs_vm_move_on_lru(adev, &parser);
>>
>> Looks like we can call amdgpu_vm_move_to_lru_tail() directly.
>
> Both ok, here, I just
>
>>
>>> out:
>>> amdgpu_cs_parser_fini(&parser, r, reserved_buffers);
>>> return r;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index 9c84770..db1f28a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -268,6 +268,47 @@ void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
>>> }
>>>
>>> /**
>>> + * amdgpu_vm_move_to_lru_tail - move all BOs to the end of LRU
>>> + *
>>> + * @adev: amdgpu device pointer
>>> + * @vm: vm providing the BOs
>>> + *
>>> + * Move all BOs to the end of LRU and remember their positions to put them
>>> + * together.
>>> + */
>>> +void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
>>> + struct amdgpu_vm *vm)
>>> +{
>>> + struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>> + struct amdgpu_vm_bo_base *bo_base;
>>> +
>>> + if (vm->bulk_moveable) {
>>> + spin_lock(&glob->lru_lock);
>>> + ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
>>> + spin_unlock(&glob->lru_lock);
>>> + return;
>>> + }
>>
>> Question:
>> Why do we handle the bulk move in the next command submission instead of in the current CS process?
>
> The bulk move moves all PT and per-VM BOs to the end of the LRU. After the CS
> is done, all the BOs will move back onto the idle list from the moved and
> relocated lists. Only BOs from the evicted list are validated, and we remember
> and store their positions.
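
For reference, the one-operation splice that the bulk_moveable fast path in the
hunk above relies on amounts to cutting the remembered [first, last] block out
of the LRU and splicing it back at the tail. A rough sketch of the idea with
the standard <linux/list.h> helpers, reusing the first/last markers sketched
earlier in the thread; per-priority iteration, locking and edge cases are
omitted, and the actual patches may differ:

	/* pos->first/pos->last: first and last BO of the block, remembered
	 * while validating; both are still linked on the same LRU list.
	 */
	static void bulk_move_sketch(struct ttm_lru_bulk_move_pos *pos,
				     struct list_head *lru)
	{
		LIST_HEAD(entries);
		LIST_HEAD(before);

		/* entries = LRU head up to and including the last BO */
		list_cut_position(&entries, lru, &pos->last->lru);
		/* before = everything in front of the first BO */
		list_cut_position(&before, &entries, pos->first->lru.prev);
		/* put the part in front of the block back ... */
		list_splice(&before, lru);
		/* ... and append the whole block at the tail in one go */
		list_splice_tail(&entries, lru);
	}

The cost is a handful of pointer updates regardless of how many BOs are in the
block, which is what makes the bulk path cheap compared to moving every BO
individually.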
Thanks for the reply.
With the other fixes, feel free to add my R-b to this patch.
Regards,
Jerry
>
>>
>>> +
>>> + memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));
>>> +
>>> + spin_lock(&glob->lru_lock);
>>> + list_for_each_entry(bo_base, &vm->idle, vm_status) {
>>> + struct amdgpu_bo *bo = bo_base->bo;
>>> +
>>> + if (!bo->parent)
>>> + continue;
>>> +
>>> + ttm_bo_move_to_lru_tail(&bo->tbo, &vm->lru_bulk_move);
>>> + if (bo->shadow)
>>> + ttm_bo_move_to_lru_tail(&bo->shadow->tbo,
>>> + &vm->lru_bulk_move);
>>> + }
>>> + spin_unlock(&glob->lru_lock);
>>> +
>>> + vm->bulk_moveable = true;
>>> +}
>>> +
>>> +/**
>>> * amdgpu_vm_validate_pt_bos - validate the page table BOs
>>> *
>>> * @adev: amdgpu device pointer
>>> @@ -284,10 +325,11 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>> int (*validate)(void *p, struct amdgpu_bo *bo),
>>> void *param)
>>> {
>>> - struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>> struct amdgpu_vm_bo_base *bo_base, *tmp;
>>> int r = 0;
>>>
>>> + vm->bulk_moveable &= list_empty(&vm->evicted);
>>> +
>>> list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) {
>>> struct amdgpu_bo *bo = bo_base->bo;
>>>
>>> @@ -295,12 +337,6 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>> r = validate(param, bo);
>>> if (r)
>>> break;
>>> -
>>> - spin_lock(&glob->lru_lock);
>>> - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>> - if (bo->shadow)
>>> - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>> - spin_unlock(&glob->lru_lock);
>>> }
>>>
>>> if (bo->tbo.type != ttm_bo_type_kernel) {
>>> @@ -312,20 +348,7 @@ int amdgpu_vm_validate_pt_bos(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>> }
>>> }
>>>
>>> - spin_lock(&glob->lru_lock);
>>> - list_for_each_entry(bo_base, &vm->idle, vm_status) {
>>> - struct amdgpu_bo *bo = bo_base->bo;
>>> -
>>> - if (!bo->parent)
>>> - continue;
>>> -
>>> - ttm_bo_move_to_lru_tail(&bo->tbo, NULL);
>>> - if (bo->shadow)
>>> - ttm_bo_move_to_lru_tail(&bo->shadow->tbo, NULL);
>>> - }
>>> - spin_unlock(&glob->lru_lock);
>>> -
>>> - return r;
>>> + return 0;
>>
>> If validate() fails and we break out of the loop, shouldn't we still return r?
>
> Nice find, this is my typo; I forgot to change it back.
>
>>
>>> }
>>>
>>> /**
>>> @@ -2596,6 +2619,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>> return r;
>>>
>>> vm->pte_support_ats = false;
>>> + vm->bulk_moveable = true;
>>>
>>> if (vm_context == AMDGPU_VM_CONTEXT_COMPUTE) {
>>> vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode &
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index 67a15d4..bbdde40 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -29,6 +29,7 @@
>>> #include <linux/rbtree.h>
>>> #include <drm/gpu_scheduler.h>
>>> #include <drm/drm_file.h>
>>> +#include <drm/ttm/ttm_bo_driver.h>
>>>
>>> #include "amdgpu_sync.h"
>>> #include "amdgpu_ring.h"
>>> @@ -226,6 +227,11 @@ struct amdgpu_vm {
>>>
>>> /* Some basic info about the task */
>>> struct amdgpu_task_info task_info;
>>> +
>>> + /* Store positions of group of BOs */
>>> + struct ttm_lru_bulk_move lru_bulk_move;
>>> + /* mark whether can do the bulk move */
>>> + bool bulk_moveable;
>>> };
>>>
>>> struct amdgpu_vm_manager {
>>> @@ -330,8 +336,11 @@ bool amdgpu_vm_need_pipeline_sync(struct amdgpu_ring *ring,
>>> void amdgpu_vm_check_compute_bug(struct amdgpu_device *adev);
>>>
>>> void amdgpu_vm_get_task_info(struct amdgpu_device *adev, unsigned int pasid,
>>> - struct amdgpu_task_info *task_info);
>>> + struct amdgpu_task_info *task_info);
>>
>> This change looks unrelated to the bulk move
>>
>
> Yes, that is a code style cleanup to align the continuation line with the first parameter after the "(".
>
> Thanks,
> Ray
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <1534924375-5837-6-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
@ 2018-08-28 9:14 ` Michel Dänzer
[not found] ` <9528d248-f784-f5c8-28f2-12f694491cfe-otUistvHUpPR7s880joybQ@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Michel Dänzer @ 2018-08-28 9:14 UTC (permalink / raw)
To: Huang Rui; +Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
[-- Attachment #1: Type: text/plain, Size: 1643 bytes --]
Hi Ray,
On 2018-08-22 9:52 a.m., Huang Rui wrote:
> The new bulk moving functionality is ready and the overhead of moving PD/PT
> BOs to the LRU is fixed, so move them onto the LRU again.
>
> Signed-off-by: Huang Rui <ray.huang-5C7GfCeVMHo@public.gmane.org>
> Tested-by: Mike Lothian <mike-4+n8WJKc9ve9FHfhHBbuYA@public.gmane.org>
> Tested-by: Dieter Nützel <Dieter-0hun7QTegEsDD4udEopG9Q@public.gmane.org>
> Acked-by: Chunming Zhou <david1.zhou-5C7GfCeVMHo@public.gmane.org>
> Reviewed-by: Junwei Zhang <Jerry.Zhang-5C7GfCeVMHo@public.gmane.org>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index db1f28a..d195a3d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
> struct amdgpu_vm_bo_base,
> vm_status);
> bo_base->moved = false;
> - list_del_init(&bo_base->vm_status);
> + list_move(&bo_base->vm_status, &vm->idle);
>
> bo = bo_base->bo->parent;
> if (!bo)
>
Since this change, I'm getting various badness when running piglit using
radeonsi on Bonaire, see the attached dmesg excerpt.
Reverting just this change on top of current amd-staging-drm-next avoids
the problem.
Looks like some list manipulation isn't sufficiently protected against
concurrent execution?
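
One way such corruption could arise, purely as an illustration of the kind of
race being suspected here (not a confirmed root cause):

	/*
	 * CS ioctl                              delayed-delete worker
	 * --------                              ---------------------
	 * vm->bulk_moveable is still true;
	 * vm->lru_bulk_move has BO X saved
	 * as pos->first (or pos->last)
	 *                                       spin_lock(&glob->lru_lock);
	 *                                       list_del_init(&X->lru);
	 *                                       spin_unlock(&glob->lru_lock);
	 * spin_lock(&glob->lru_lock);
	 * ttm_bo_bulk_move_lru_tail() splices
	 * using the stale pointers to X
	 * spin_unlock(&glob->lru_lock);
	 */

Both sides do take the LRU lock, but nothing invalidates the saved first/last
positions in between, which would fit the list_del corruption in the log.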
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
[-- Attachment #2: kern.log --]
[-- Type: text/x-log; name="kern.log", Size: 32844 bytes --]
Aug 27 17:34:59 kaveri kernel: [ 567.429026] WARNING: CPU: 7 PID: 12214 at drivers/gpu/drm//ttm/ttm_bo.c:228 ttm_bo_move_to_lru_tail+0x28b/0x3d0 [ttm]
Aug 27 17:34:59 kaveri kernel: [ 567.429029] Modules linked in: fuse(E) lz4(E) lz4_compress(E) amdkfd(OE) amdgpu(OE) cpufreq_powersave(E) cpufreq_userspace(E) cpufreq_conservative(E) chash(OE) gpu_sched(OE) binfmt_misc(E) nls_ascii(E) nls_cp437(E) vfat(E) edac_mce_amd(E) fat(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) pcbc(E) radeon(OE) snd_hda_codec_realtek(E) snd_hda_codec_generic(E) snd_hda_codec_hdmi(E) ttm(OE) snd_hda_intel(E) wmi_bmof(E) aesni_intel(E) efi_pstore(E) aes_x86_64(E) crypto_simd(E) drm_kms_helper(OE) cryptd(E) glue_helper(E) pcspkr(E) efivars(E) snd_hda_codec(E) k10temp(E) drm(OE) snd_hda_core(E) snd_hwdep(E) snd_pcm(E) snd_timer(E) i2c_algo_bit(E) fb_sys_fops(E) r8169(E) sp5100_tco(E) syscopyarea(E) snd(E) sysfillrect(E) ccp(E) sg(E) mii(E) sysimgblt(E) soundcore(E) i2c_piix4(E)
Aug 27 17:34:59 kaveri kernel: [ 567.429118] rng_core(E) wmi(E) button(E) acpi_cpufreq(E) tcp_bbr(E) sch_fq(E) sunrpc(E) nct6775(E) hwmon_vid(E) efivarfs(E) ip_tables(E) x_tables(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) fscrypto(E) dm_mod(E) raid10(E) raid1(E) raid0(E) multipath(E) linear(E) md_mod(E) sd_mod(E) evdev(E) hid_generic(E) usbhid(E) hid(E) ahci(E) libahci(E) xhci_pci(E) libata(E) xhci_hcd(E) crc32c_intel(E) scsi_mod(E) usbcore(E) gpio_amdpt(E) gpio_generic(E)
Aug 27 17:34:59 kaveri kernel: [ 567.429182] CPU: 7 PID: 12214 Comm: shader_run:cs0 Tainted: G W OE 4.18.0-rc1+ #111
Aug 27 17:34:59 kaveri kernel: [ 567.429184] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 27 17:34:59 kaveri kernel: [ 567.429191] RIP: 0010:ttm_bo_move_to_lru_tail+0x28b/0x3d0 [ttm]
Aug 27 17:34:59 kaveri kernel: [ 567.429192] Code: c1 ea 03 80 3c 02 00 0f 85 e6 00 00 00 48 8b 83 e8 01 00 00 be ff ff ff ff 48 8d 78 60 e8 1d 08 1e c3 85 c0 0f 85 c3 fd ff ff <0f> 0b e9 bc fd ff ff 48 8d bb d0 01 00 00 48 b8 00 00 00 00 00 fc
Aug 27 17:34:59 kaveri kernel: [ 567.429285] RSP: 0018:ffff8803e6aa76a8 EFLAGS: 00010246
Aug 27 17:34:59 kaveri kernel: [ 567.429290] RAX: 0000000000000000 RBX: ffff8803d7f4aad0 RCX: 1ffff1007afe95f1
Aug 27 17:34:59 kaveri kernel: [ 567.429292] RDX: 0000000000000000 RSI: ffff8803c61b57a0 RDI: 0000000000000246
Aug 27 17:34:59 kaveri kernel: [ 567.429295] RBP: ffff8803d4c58638 R08: ffffed007cd54ebe R09: ffffed007cd54ebe
Aug 27 17:34:59 kaveri kernel: [ 567.429297] R10: 0000000000000001 R11: ffffed007cd54ebe R12: ffff8803d4c58078
Aug 27 17:34:59 kaveri kernel: [ 567.429299] R13: ffff8803d4c58000 R14: ffff8803d7f4aa80 R15: dffffc0000000000
Aug 27 17:34:59 kaveri kernel: [ 567.429301] FS: 00007ff030b2d700(0000) GS:ffff8803ee1c0000(0000) knlGS:0000000000000000
Aug 27 17:34:59 kaveri kernel: [ 567.429304] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 27 17:34:59 kaveri kernel: [ 567.429306] CR2: 00007ff00cee6a90 CR3: 00000003471e6000 CR4: 00000000003406e0
Aug 27 17:34:59 kaveri kernel: [ 567.429307] Call Trace:
Aug 27 17:34:59 kaveri kernel: [ 567.429363] amdgpu_vm_move_to_lru_tail+0x128/0x240 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429420] amdgpu_cs_ioctl+0x967/0x4ba0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429433] ? lock_acquire+0x10b/0x330
Aug 27 17:34:59 kaveri kernel: [ 567.429487] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429491] ? _raw_spin_unlock_irq+0x29/0x40
Aug 27 17:34:59 kaveri kernel: [ 567.429499] ? __lock_acquire+0x605/0x3670
Aug 27 17:34:59 kaveri kernel: [ 567.429502] ? finish_task_switch+0x18e/0x670
Aug 27 17:34:59 kaveri kernel: [ 567.429509] ? __schedule+0x80b/0x1be0
Aug 27 17:34:59 kaveri kernel: [ 567.429520] ? debug_check_no_locks_freed+0x2c0/0x2c0
Aug 27 17:34:59 kaveri kernel: [ 567.429607] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429627] drm_ioctl_kernel+0x197/0x220 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.429644] ? drm_setversion+0x800/0x800 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.429652] ? __check_object_size+0x149/0x360
Aug 27 17:34:59 kaveri kernel: [ 567.429671] drm_ioctl+0x40e/0x860 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.429727] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429744] ? drm_version+0x390/0x390 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.429756] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:34:59 kaveri kernel: [ 567.429763] ? _raw_spin_unlock_irqrestore+0x32/0x60
Aug 27 17:34:59 kaveri kernel: [ 567.429769] ? trace_hardirqs_on_caller+0x381/0x570
Aug 27 17:34:59 kaveri kernel: [ 567.429821] amdgpu_drm_ioctl+0xcc/0x1b0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.429829] do_vfs_ioctl+0x192/0xf30
Aug 27 17:34:59 kaveri kernel: [ 567.429835] ? find_held_lock+0x32/0x1c0
Aug 27 17:34:59 kaveri kernel: [ 567.429841] ? ioctl_preallocate+0x1b0/0x1b0
Aug 27 17:34:59 kaveri kernel: [ 567.429847] ? __fget+0x1c8/0x300
Aug 27 17:34:59 kaveri kernel: [ 567.429852] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:34:59 kaveri kernel: [ 567.429864] ? __fget+0x1e0/0x300
Aug 27 17:34:59 kaveri kernel: [ 567.429876] ksys_ioctl+0x70/0x80
Aug 27 17:34:59 kaveri kernel: [ 567.429882] __x64_sys_ioctl+0x6f/0xb0
Aug 27 17:34:59 kaveri kernel: [ 567.429885] ? trace_hardirqs_on_caller+0x381/0x570
Aug 27 17:34:59 kaveri kernel: [ 567.429889] do_syscall_64+0xa5/0x3f0
Aug 27 17:34:59 kaveri kernel: [ 567.429895] entry_SYSCALL_64_after_hwframe+0x49/0xbe
Aug 27 17:34:59 kaveri kernel: [ 567.429897] RIP: 0033:0x7ff037449067
Aug 27 17:34:59 kaveri kernel: [ 567.429899] Code: b3 66 90 48 8b 05 21 7e 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f1 7d 0c 00 f7 d8 64 89 01 48
Aug 27 17:34:59 kaveri kernel: [ 567.429991] RSP: 002b:00007ff030b2cbb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Aug 27 17:34:59 kaveri kernel: [ 567.429996] RAX: ffffffffffffffda RBX: 00007ff030b2cd28 RCX: 00007ff037449067
Aug 27 17:34:59 kaveri kernel: [ 567.429998] RDX: 00007ff030b2cc30 RSI: 00000000c0186444 RDI: 0000000000000006
Aug 27 17:34:59 kaveri kernel: [ 567.430000] RBP: 00007ff030b2cbe0 R08: 00007ff030b2cd80 R09: 00007ff030b2cd28
Aug 27 17:34:59 kaveri kernel: [ 567.430002] R10: 00007ff030b2cd80 R11: 0000000000000246 R12: 00007ff030b2cc30
Aug 27 17:34:59 kaveri kernel: [ 567.430004] R13: 00000000c0186444 R14: 0000000000000006 R15: 00005594f4e417d8
Aug 27 17:34:59 kaveri kernel: [ 567.430019] irq event stamp: 5909674
Aug 27 17:34:59 kaveri kernel: [ 567.430022] hardirqs last enabled at (5909673): [<ffffffff85a00a60>] restore_regs_and_return_to_kernel+0x0/0x30
Aug 27 17:34:59 kaveri kernel: [ 567.430025] hardirqs last disabled at (5909674): [<ffffffff85a011ef>] error_entry+0x7f/0x100
Aug 27 17:34:59 kaveri kernel: [ 567.430028] softirqs last enabled at (5909646): [<ffffffff85c00620>] __do_softirq+0x620/0x919
Aug 27 17:34:59 kaveri kernel: [ 567.430031] softirqs last disabled at (5909575): [<ffffffff8433443e>] irq_exit+0x19e/0x1d0
Aug 27 17:34:59 kaveri kernel: [ 567.430033] ---[ end trace 701de91db5737054 ]---
Aug 27 17:34:59 kaveri kernel: [ 567.430087] WARNING: CPU: 7 PID: 12214 at drivers/gpu/drm//ttm/ttm_bo.c:166 ttm_bo_add_to_lru+0x2ec/0x580 [ttm]
Aug 27 17:34:59 kaveri kernel: [ 567.430091] Modules linked in: fuse(E) lz4(E) lz4_compress(E) amdkfd(OE) amdgpu(OE) cpufreq_powersave(E) cpufreq_userspace(E) cpufreq_conservative(E) chash(OE) gpu_sched(OE) binfmt_misc(E) nls_ascii(E) nls_cp437(E) vfat(E) edac_mce_amd(E) fat(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) pcbc(E) radeon(OE) snd_hda_codec_realtek(E) snd_hda_codec_generic(E) snd_hda_codec_hdmi(E) ttm(OE) snd_hda_intel(E) wmi_bmof(E) aesni_intel(E) efi_pstore(E) aes_x86_64(E) crypto_simd(E) drm_kms_helper(OE) cryptd(E) glue_helper(E) pcspkr(E) efivars(E) snd_hda_codec(E) k10temp(E) drm(OE) snd_hda_core(E) snd_hwdep(E) snd_pcm(E) snd_timer(E) i2c_algo_bit(E) fb_sys_fops(E) r8169(E) sp5100_tco(E) syscopyarea(E) snd(E) sysfillrect(E) ccp(E) sg(E) mii(E) sysimgblt(E) soundcore(E) i2c_piix4(E)
Aug 27 17:34:59 kaveri kernel: [ 567.430250] rng_core(E) wmi(E) button(E) acpi_cpufreq(E) tcp_bbr(E) sch_fq(E) sunrpc(E) nct6775(E) hwmon_vid(E) efivarfs(E) ip_tables(E) x_tables(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) fscrypto(E) dm_mod(E) raid10(E) raid1(E) raid0(E) multipath(E) linear(E) md_mod(E) sd_mod(E) evdev(E) hid_generic(E) usbhid(E) hid(E) ahci(E) libahci(E) xhci_pci(E) libata(E) xhci_hcd(E) crc32c_intel(E) scsi_mod(E) usbcore(E) gpio_amdpt(E) gpio_generic(E)
Aug 27 17:34:59 kaveri kernel: [ 567.430329] CPU: 7 PID: 12214 Comm: shader_run:cs0 Tainted: G W OE 4.18.0-rc1+ #111
Aug 27 17:34:59 kaveri kernel: [ 567.430333] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 27 17:34:59 kaveri kernel: [ 567.430341] RIP: 0010:ttm_bo_add_to_lru+0x2ec/0x580 [ttm]
Aug 27 17:34:59 kaveri kernel: [ 567.430345] Code: c1 ea 03 80 3c 02 00 0f 85 ab 01 00 00 48 8b 83 e8 01 00 00 be ff ff ff ff 48 8d 78 60 e8 2c 3b 1e c3 85 c0 0f 85 87 fd ff ff <0f> 0b e9 80 fd ff ff 48 b8 00 00 00 00 00 fc ff df 49 8d 7c 24 10
Aug 27 17:34:59 kaveri kernel: [ 567.430487] RSP: 0018:ffff8803e6aa7660 EFLAGS: 00010246
Aug 27 17:34:59 kaveri kernel: [ 567.430491] RAX: 0000000000000000 RBX: ffff8803d7f4aad0 RCX: 0000000000000000
Aug 27 17:34:59 kaveri kernel: [ 567.430493] RDX: 0000000000000000 RSI: ffff8803c61b57a0 RDI: 0000000000000246
Aug 27 17:34:59 kaveri kernel: [ 567.430495] RBP: ffff8803d4c58638 R08: ffffed007cd54ec1 R09: ffffed007cd54ec1
Aug 27 17:34:59 kaveri kernel: [ 567.430497] R10: 0000000000000001 R11: ffffed007cd54ec1 R12: ffff880371902c00
Aug 27 17:34:59 kaveri kernel: [ 567.430499] R13: ffff8803d4c58000 R14: ffff8803d7f4aa80 R15: dffffc0000000000
Aug 27 17:34:59 kaveri kernel: [ 567.430502] FS: 00007ff030b2d700(0000) GS:ffff8803ee1c0000(0000) knlGS:0000000000000000
Aug 27 17:34:59 kaveri kernel: [ 567.430504] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 27 17:34:59 kaveri kernel: [ 567.430506] CR2: 00007ff00cee6a90 CR3: 00000003471e6000 CR4: 00000000003406e0
Aug 27 17:34:59 kaveri kernel: [ 567.430508] Call Trace:
Aug 27 17:34:59 kaveri kernel: [ 567.430522] ttm_bo_move_to_lru_tail+0x5e/0x3d0 [ttm]
Aug 27 17:34:59 kaveri kernel: [ 567.430580] amdgpu_vm_move_to_lru_tail+0x128/0x240 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.430639] amdgpu_cs_ioctl+0x967/0x4ba0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.430653] ? lock_acquire+0x10b/0x330
Aug 27 17:34:59 kaveri kernel: [ 567.430709] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.430714] ? _raw_spin_unlock_irq+0x29/0x40
Aug 27 17:34:59 kaveri kernel: [ 567.430725] ? __lock_acquire+0x605/0x3670
Aug 27 17:34:59 kaveri kernel: [ 567.430729] ? finish_task_switch+0x18e/0x670
Aug 27 17:34:59 kaveri kernel: [ 567.430739] ? __schedule+0x80b/0x1be0
Aug 27 17:34:59 kaveri kernel: [ 567.430753] ? debug_check_no_locks_freed+0x2c0/0x2c0
Aug 27 17:34:59 kaveri kernel: [ 567.430840] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.430858] drm_ioctl_kernel+0x197/0x220 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.430875] ? drm_setversion+0x800/0x800 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.430882] ? __check_object_size+0x149/0x360
Aug 27 17:34:59 kaveri kernel: [ 567.430901] drm_ioctl+0x40e/0x860 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.430957] ? amdgpu_cs_find_mapping+0x3c0/0x3c0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.430974] ? drm_version+0x390/0x390 [drm]
Aug 27 17:34:59 kaveri kernel: [ 567.430987] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:34:59 kaveri kernel: [ 567.430994] ? _raw_spin_unlock_irqrestore+0x32/0x60
Aug 27 17:34:59 kaveri kernel: [ 567.431000] ? trace_hardirqs_on_caller+0x381/0x570
Aug 27 17:34:59 kaveri kernel: [ 567.431053] amdgpu_drm_ioctl+0xcc/0x1b0 [amdgpu]
Aug 27 17:34:59 kaveri kernel: [ 567.431060] do_vfs_ioctl+0x192/0xf30
Aug 27 17:34:59 kaveri kernel: [ 567.431066] ? find_held_lock+0x32/0x1c0
Aug 27 17:34:59 kaveri kernel: [ 567.431070] ? ioctl_preallocate+0x1b0/0x1b0
Aug 27 17:34:59 kaveri kernel: [ 567.431076] ? __fget+0x1c8/0x300
Aug 27 17:34:59 kaveri kernel: [ 567.431082] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:34:59 kaveri kernel: [ 567.431093] ? __fget+0x1e0/0x300
Aug 27 17:34:59 kaveri kernel: [ 567.431105] ksys_ioctl+0x70/0x80
Aug 27 17:34:59 kaveri kernel: [ 567.431112] __x64_sys_ioctl+0x6f/0xb0
Aug 27 17:34:59 kaveri kernel: [ 567.431116] ? trace_hardirqs_on_caller+0x381/0x570
Aug 27 17:34:59 kaveri kernel: [ 567.431120] do_syscall_64+0xa5/0x3f0
Aug 27 17:34:59 kaveri kernel: [ 567.431126] entry_SYSCALL_64_after_hwframe+0x49/0xbe
Aug 27 17:34:59 kaveri kernel: [ 567.431128] RIP: 0033:0x7ff037449067
Aug 27 17:34:59 kaveri kernel: [ 567.431130] Code: b3 66 90 48 8b 05 21 7e 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f1 7d 0c 00 f7 d8 64 89 01 48
Aug 27 17:34:59 kaveri kernel: [ 567.431223] RSP: 002b:00007ff030b2cbb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Aug 27 17:34:59 kaveri kernel: [ 567.431227] RAX: ffffffffffffffda RBX: 00007ff030b2cd28 RCX: 00007ff037449067
Aug 27 17:34:59 kaveri kernel: [ 567.431229] RDX: 00007ff030b2cc30 RSI: 00000000c0186444 RDI: 0000000000000006
Aug 27 17:34:59 kaveri kernel: [ 567.431231] RBP: 00007ff030b2cbe0 R08: 00007ff030b2cd80 R09: 00007ff030b2cd28
Aug 27 17:34:59 kaveri kernel: [ 567.431234] R10: 00007ff030b2cd80 R11: 0000000000000246 R12: 00007ff030b2cc30
Aug 27 17:34:59 kaveri kernel: [ 567.431236] R13: 00000000c0186444 R14: 0000000000000006 R15: 00005594f4e417d8
Aug 27 17:34:59 kaveri kernel: [ 567.431249] irq event stamp: 5909680
Aug 27 17:34:59 kaveri kernel: [ 567.431253] hardirqs last enabled at (5909679): [<ffffffff85a00a60>] restore_regs_and_return_to_kernel+0x0/0x30
Aug 27 17:34:59 kaveri kernel: [ 567.431256] hardirqs last disabled at (5909680): [<ffffffff85a011ef>] error_entry+0x7f/0x100
Aug 27 17:34:59 kaveri kernel: [ 567.431259] softirqs last enabled at (5909646): [<ffffffff85c00620>] __do_softirq+0x620/0x919
Aug 27 17:34:59 kaveri kernel: [ 567.431262] softirqs last disabled at (5909575): [<ffffffff8433443e>] irq_exit+0x19e/0x1d0
Aug 27 17:34:59 kaveri kernel: [ 567.431264] ---[ end trace 701de91db5737055 ]---
Aug 27 17:36:49 kaveri kernel: [ 677.611767] list_del corruption. prev->next should be ffff8803c92400f8, but was ffff88033cd1f620
Aug 27 17:36:49 kaveri kernel: [ 677.611878] ------------[ cut here ]------------
Aug 27 17:36:49 kaveri kernel: [ 677.611881] kernel BUG at lib/list_debug.c:53!
Aug 27 17:36:49 kaveri kernel: [ 677.611894] invalid opcode: 0000 [#1] SMP KASAN NOPTI
Aug 27 17:36:49 kaveri kernel: [ 677.611901] CPU: 4 PID: 126 Comm: kworker/4:1 Tainted: G W OE 4.18.0-rc1+ #111
Aug 27 17:36:49 kaveri kernel: [ 677.611905] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 27 17:36:49 kaveri kernel: [ 677.611919] Workqueue: events ttm_bo_delayed_workqueue [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.611929] RIP: 0010:__list_del_entry_valid+0xe6/0x150
Aug 27 17:36:49 kaveri kernel: [ 677.611932] Code: 89 ea 48 c7 c7 80 b1 fe 85 e8 df 30 6e ff 0f 0b 48 c7 c7 e0 b1 fe 85 e8 d1 30 6e ff 0f 0b 48 c7 c7 40 b2 fe 85 e8 c3 30 6e ff <0f> 0b 48 c7 c7 a0 b2 fe 85 e8 b5 30 6e ff 0f 0b 48 89 df 48 89 34
Aug 27 17:36:49 kaveri kernel: [ 677.611996] RSP: 0018:ffff8803ea59fc08 EFLAGS: 00010286
Aug 27 17:36:49 kaveri kernel: [ 677.612000] RAX: 0000000000000054 RBX: ffff880371902fd8 RCX: ffffffff84d63612
Aug 27 17:36:49 kaveri kernel: [ 677.612003] RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff8803ee125a2c
Aug 27 17:36:49 kaveri kernel: [ 677.612006] RBP: ffff8803df0255f8 R08: ffffed007dc24e51 R09: ffffed007dc24e51
Aug 27 17:36:49 kaveri kernel: [ 677.612008] R10: 0000000000000001 R11: ffffed007dc24e50 R12: ffff8803c9240100
Aug 27 17:36:49 kaveri kernel: [ 677.612011] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001
Aug 27 17:36:49 kaveri kernel: [ 677.612014] FS: 0000000000000000(0000) GS:ffff8803ee100000(0000) knlGS:0000000000000000
Aug 27 17:36:49 kaveri kernel: [ 677.612017] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 27 17:36:49 kaveri kernel: [ 677.612019] CR2: 00007fafc37fdf38 CR3: 000000036a614000 CR4: 00000000003406e0
Aug 27 17:36:49 kaveri kernel: [ 677.612021] Call Trace:
Aug 27 17:36:49 kaveri kernel: [ 677.612027] ? lock_acquire+0x10b/0x330
Aug 27 17:36:49 kaveri kernel: [ 677.612035] ttm_bo_del_from_lru+0x17c/0x320 [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.612043] ttm_bo_cleanup_refs+0x14b/0x510 [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.612052] ttm_bo_delayed_delete+0x165/0x570 [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.612060] ? ttm_bo_cleanup_refs+0x510/0x510 [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.612065] ? process_one_work+0x76a/0x16c0
Aug 27 17:36:49 kaveri kernel: [ 677.612074] ttm_bo_delayed_workqueue+0x17/0x60 [ttm]
Aug 27 17:36:49 kaveri kernel: [ 677.612078] process_one_work+0x7fd/0x16c0
Aug 27 17:36:49 kaveri kernel: [ 677.612087] ? drain_workqueue+0x380/0x380
Aug 27 17:36:49 kaveri kernel: [ 677.612091] ? lock_acquire+0x10b/0x330
Aug 27 17:36:49 kaveri kernel: [ 677.612102] worker_thread+0x87/0xb50
Aug 27 17:36:49 kaveri kernel: [ 677.612111] ? process_one_work+0x16c0/0x16c0
Aug 27 17:36:49 kaveri kernel: [ 677.612116] kthread+0x2db/0x390
Aug 27 17:36:49 kaveri kernel: [ 677.612120] ? kthread_create_worker_on_cpu+0xc0/0xc0
Aug 27 17:36:49 kaveri kernel: [ 677.612127] ret_from_fork+0x27/0x50
Aug 27 17:36:49 kaveri kernel: [ 677.612137] Modules linked in: fuse(E) lz4(E) lz4_compress(E) amdkfd(OE) amdgpu(OE) cpufreq_powersave(E) cpufreq_userspace(E) cpufreq_conservative(E) chash(OE) gpu_sched(OE) binfmt_misc(E) nls_ascii(E) nls_cp437(E) vfat(E) edac_mce_amd(E) fat(E) kvm(E) irqbypass(E) crct10dif_pclmul(E) crc32_pclmul(E) ghash_clmulni_intel(E) pcbc(E) radeon(OE) snd_hda_codec_realtek(E) snd_hda_codec_generic(E) snd_hda_codec_hdmi(E) ttm(OE) snd_hda_intel(E) wmi_bmof(E) aesni_intel(E) efi_pstore(E) aes_x86_64(E) crypto_simd(E) drm_kms_helper(OE) cryptd(E) glue_helper(E) pcspkr(E) efivars(E) snd_hda_codec(E) k10temp(E) drm(OE) snd_hda_core(E) snd_hwdep(E) snd_pcm(E) snd_timer(E) i2c_algo_bit(E) fb_sys_fops(E) r8169(E) sp5100_tco(E) syscopyarea(E) snd(E) sysfillrect(E) ccp(E) sg(E) mii(E) sysimgblt(E) soundcore(E) i2c_piix4(E)
Aug 27 17:36:49 kaveri kernel: [ 677.612230] rng_core(E) wmi(E) button(E) acpi_cpufreq(E) tcp_bbr(E) sch_fq(E) sunrpc(E) nct6775(E) hwmon_vid(E) efivarfs(E) ip_tables(E) x_tables(E) ext4(E) crc32c_generic(E) crc16(E) mbcache(E) jbd2(E) fscrypto(E) dm_mod(E) raid10(E) raid1(E) raid0(E) multipath(E) linear(E) md_mod(E) sd_mod(E) evdev(E) hid_generic(E) usbhid(E) hid(E) ahci(E) libahci(E) xhci_pci(E) libata(E) xhci_hcd(E) crc32c_intel(E) scsi_mod(E) usbcore(E) gpio_amdpt(E) gpio_generic(E)
Aug 27 17:36:49 kaveri kernel: [ 677.612274] ---[ end trace 701de91db5737056 ]---
Aug 27 17:36:49 kaveri kernel: [ 677.612278] RIP: 0010:__list_del_entry_valid+0xe6/0x150
Aug 27 17:36:49 kaveri kernel: [ 677.612280] Code: 89 ea 48 c7 c7 80 b1 fe 85 e8 df 30 6e ff 0f 0b 48 c7 c7 e0 b1 fe 85 e8 d1 30 6e ff 0f 0b 48 c7 c7 40 b2 fe 85 e8 c3 30 6e ff <0f> 0b 48 c7 c7 a0 b2 fe 85 e8 b5 30 6e ff 0f 0b 48 89 df 48 89 34
Aug 27 17:36:49 kaveri kernel: [ 677.612331] RSP: 0018:ffff8803ea59fc08 EFLAGS: 00010286
Aug 27 17:36:49 kaveri kernel: [ 677.612334] RAX: 0000000000000054 RBX: ffff880371902fd8 RCX: ffffffff84d63612
Aug 27 17:36:49 kaveri kernel: [ 677.612336] RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffff8803ee125a2c
Aug 27 17:36:49 kaveri kernel: [ 677.612339] RBP: ffff8803df0255f8 R08: ffffed007dc24e51 R09: ffffed007dc24e51
Aug 27 17:36:49 kaveri kernel: [ 677.612341] R10: 0000000000000001 R11: ffffed007dc24e50 R12: ffff8803c9240100
Aug 27 17:36:49 kaveri kernel: [ 677.612343] R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001
Aug 27 17:36:49 kaveri kernel: [ 677.612346] FS: 0000000000000000(0000) GS:ffff8803ee100000(0000) knlGS:0000000000000000
Aug 27 17:36:49 kaveri kernel: [ 677.612349] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 27 17:36:49 kaveri kernel: [ 677.612351] CR2: 00007fafc37fdf38 CR3: 000000036a614000 CR4: 00000000003406e0
Aug 27 17:37:15 kaveri kernel: [ 703.632630] INFO: rcu_sched self-detected stall on CPU
Aug 27 17:37:15 kaveri kernel: [ 703.632632] INFO: rcu_sched self-detected stall on CPU
Aug 27 17:37:15 kaveri kernel: [ 703.632634] INFO: rcu_sched self-detected stall on CPU
Aug 27 17:37:15 kaveri kernel: [ 703.632645] 10-....: (6500 ticks this GP) idle=74e/1/4611686018427387906 softirq=80088/80088 fqs=2719
Aug 27 17:37:15 kaveri kernel: [ 703.632646]
Aug 27 17:37:15 kaveri kernel: [ 703.632650] 9-....: (6500 ticks this GP) idle=5a6/1/4611686018427387906 softirq=76170/76170 fqs=2719
Aug 27 17:37:15 kaveri kernel: [ 703.632653] (t=6500 jiffies g=13087 c=13086 q=2517)
Aug 27 17:37:15 kaveri kernel: [ 703.632653]
Aug 27 17:37:15 kaveri kernel: [ 703.632656] Sending NMI from CPU 10 to CPUs 9:
Aug 27 17:37:15 kaveri kernel: [ 703.633439] (t=6500 jiffies g=13087 c=13086 q=2517)
Aug 27 17:37:15 kaveri kernel: [ 703.633654] NMI backtrace for cpu 9
Aug 27 17:37:15 kaveri kernel: [ 703.633655] CPU: 9 PID: 5858 Comm: shader_runner Tainted: G D W OE 4.18.0-rc1+ #111
Aug 27 17:37:15 kaveri kernel: [ 703.633656] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 27 17:37:15 kaveri kernel: [ 703.633657] RIP: 0010:__memcpy+0x17/0x20
Aug 27 17:37:15 kaveri kernel: [ 703.633657] Code: f5 fb fe eb 9e e8 69 f5 fb fe e9 75 ff ff ff 90 90 90 90 0f 1f 44 00 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 <f3> a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 f3 a4 c3 0f 1f 80 00
Aug 27 17:37:15 kaveri kernel: [ 703.633675] RSP: 0018:ffff8803ee247b18 EFLAGS: 00000006
Aug 27 17:37:15 kaveri kernel: [ 703.633677] RAX: ffffffff8810db82 RBX: ffff8803ee247b78 RCX: 0000000000000002
Aug 27 17:37:15 kaveri kernel: [ 703.633677] RDX: 0000000000000003 RSI: ffffffff85e8913e RDI: ffffffff8810db83
Aug 27 17:37:15 kaveri kernel: [ 703.633678] RBP: ffffffff8810df40 R08: fffffbfff1021b71 R09: fffffbfff1021b71
Aug 27 17:37:15 kaveri kernel: [ 703.633679] R10: 0000000000000001 R11: fffffbfff1021b70 R12: ffffffff85e8913d
Aug 27 17:37:15 kaveri kernel: [ 703.633679] R13: dffffc0000000000 R14: ffffffff85e89140 R15: ffffffff8810db82
Aug 27 17:37:15 kaveri kernel: [ 703.633680] FS: 00007fafe6fd87c0(0000) GS:ffff8803ee240000(0000) knlGS:0000000000000000
Aug 27 17:37:15 kaveri kernel: [ 703.633681] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 27 17:37:15 kaveri kernel: [ 703.633681] CR2: 00007fafc97f9f38 CR3: 0000000337028000 CR4: 00000000003406e0
Aug 27 17:37:15 kaveri kernel: [ 703.633682] Call Trace:
Aug 27 17:37:15 kaveri kernel: [ 703.633682] <IRQ>
Aug 27 17:37:15 kaveri kernel: [ 703.633683] vsnprintf+0x1ff/0x10a0
Aug 27 17:37:15 kaveri kernel: [ 703.633683] ? pointer+0x660/0x660
Aug 27 17:37:15 kaveri kernel: [ 703.633684] ? native_queued_spin_lock_slowpath+0x179/0x7e0
Aug 27 17:37:15 kaveri kernel: [ 703.633684] vscnprintf+0x9/0x30
Aug 27 17:37:15 kaveri kernel: [ 703.633685] vprintk_emit+0xcb/0x6f0
Aug 27 17:37:15 kaveri kernel: [ 703.633685] printk+0x9c/0xc3
Aug 27 17:37:15 kaveri kernel: [ 703.633685] ? kmsg_dump_rewind_nolock+0xd9/0xd9
Aug 27 17:37:15 kaveri kernel: [ 703.633686] rcu_check_callbacks+0x1016/0x1e50
Aug 27 17:37:15 kaveri kernel: [ 703.633686] update_process_times+0x28/0x50
Aug 27 17:37:15 kaveri kernel: [ 703.633687] tick_sched_handle+0x73/0x160
Aug 27 17:37:15 kaveri kernel: [ 703.633687] tick_sched_timer+0x37/0xf0
Aug 27 17:37:15 kaveri kernel: [ 703.633688] ? tick_sched_do_timer+0x140/0x140
Aug 27 17:37:15 kaveri kernel: [ 703.633688] __hrtimer_run_queues+0x291/0xa20
Aug 27 17:37:15 kaveri kernel: [ 703.633689] ? hrtimer_cancel+0x20/0x20
Aug 27 17:37:15 kaveri kernel: [ 703.633689] ? ktime_get_update_offsets_now+0xed/0x2d0
Aug 27 17:37:15 kaveri kernel: [ 703.633690] hrtimer_interrupt+0x29a/0x770
Aug 27 17:37:15 kaveri kernel: [ 703.633690] ? rcu_nmi_enter+0x60/0x110
Aug 27 17:37:15 kaveri kernel: [ 703.633691] smp_apic_timer_interrupt+0xd5/0x490
Aug 27 17:37:15 kaveri kernel: [ 703.633691] apic_timer_interrupt+0xf/0x20
Aug 27 17:37:15 kaveri kernel: [ 703.633692] </IRQ>
Aug 27 17:37:15 kaveri kernel: [ 703.633692] RIP: 0010:native_queued_spin_lock_slowpath+0x290/0x7e0
Aug 27 17:37:15 kaveri kernel: [ 703.633693] Code: fc ff df 49 c1 ef 03 41 83 e5 07 49 01 c7 41 83 c5 03 f3 90 41 0f b6 07 41 38 c5 7c 08 84 c0 0f 85 c3 04 00 00 8b 45 08 85 c0 <74> e6 48 89 ea 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 80 3c 02
Aug 27 17:37:15 kaveri kernel: [ 703.633711] RSP: 0018:ffff8803c460f600 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Aug 27 17:37:15 kaveri kernel: [ 703.633712] RAX: 0000000000000000 RBX: ffff8803dc592b50 RCX: ffffffff844475f9
Aug 27 17:37:15 kaveri kernel: [ 703.633713] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffffffff8633d780
Aug 27 17:37:15 kaveri kernel: [ 703.633713] RBP: ffff8803ee26b700 R08: ffffed007b8b256b R09: ffffed007b8b256b
Aug 27 17:37:15 kaveri kernel: [ 703.633714] R10: 0000000000000001 R11: ffffed007b8b256a R12: 0000000000280000
Aug 27 17:37:15 kaveri kernel: [ 703.633715] R13: 0000000000000003 R14: ffff8803ee26b708 R15: ffffed007dc4d6e1
Aug 27 17:37:15 kaveri kernel: [ 703.633715] ? native_queued_spin_lock_slowpath+0x179/0x7e0
Aug 27 17:37:15 kaveri kernel: [ 703.633716] do_raw_spin_lock+0x160/0x1f0
Aug 27 17:37:15 kaveri kernel: [ 703.633716] amdgpu_bo_do_create+0xc36/0x1030 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633717] ? deref_stack_reg+0xad/0xe0
Aug 27 17:37:15 kaveri kernel: [ 703.633717] ? amdgpu_bo_placement_from_domain+0x860/0x860 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633718] ? unwind_next_frame+0xf0f/0x1820
Aug 27 17:37:15 kaveri kernel: [ 703.633718] amdgpu_bo_create+0xa3/0x920 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633719] ? amdgpu_bo_do_create+0x1030/0x1030 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633719] ? alloc_pid+0x48/0x770
Aug 27 17:37:15 kaveri kernel: [ 703.633720] ? is_bpf_text_address+0x78/0xe0
Aug 27 17:37:15 kaveri kernel: [ 703.633721] ? kernel_text_address+0x111/0x120
Aug 27 17:37:15 kaveri kernel: [ 703.633721] amdgpu_gem_object_create+0x140/0x240 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633722] ? amdgpu_gem_object_free+0xa0/0xa0 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633722] ? debug_check_no_locks_freed+0x2c0/0x2c0
Aug 27 17:37:15 kaveri kernel: [ 703.633723] ? save_stack+0x89/0xb0
Aug 27 17:37:15 kaveri kernel: [ 703.633723] ? drm_dev_enter+0x5/0xf0 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633724] amdgpu_gem_create_ioctl+0x4ef/0x800 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633724] ? amdgpu_gem_object_close+0x420/0x420 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633725] ? drm_dev_exit+0x5/0x30 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633725] ? lock_acquire+0x10b/0x330
Aug 27 17:37:15 kaveri kernel: [ 703.633726] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:37:15 kaveri kernel: [ 703.633726] ? amdgpu_gem_object_close+0x420/0x420 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633727] drm_ioctl_kernel+0x197/0x220 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633727] ? drm_setversion+0x800/0x800 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633728] ? __check_object_size+0x149/0x360
Aug 27 17:37:15 kaveri kernel: [ 703.633728] drm_ioctl+0x40e/0x860 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633729] ? amdgpu_gem_object_close+0x420/0x420 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633729] ? drm_version+0x390/0x390 [drm]
Aug 27 17:37:15 kaveri kernel: [ 703.633730] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:37:15 kaveri kernel: [ 703.633730] ? __pm_runtime_resume+0x79/0x100
Aug 27 17:37:15 kaveri kernel: [ 703.633731] ? debug_check_no_locks_freed+0x2c0/0x2c0
Aug 27 17:37:15 kaveri kernel: [ 703.633731] ? do_raw_spin_unlock+0x54/0x220
Aug 27 17:37:15 kaveri kernel: [ 703.633732] amdgpu_drm_ioctl+0xcc/0x1b0 [amdgpu]
Aug 27 17:37:15 kaveri kernel: [ 703.633732] do_vfs_ioctl+0x192/0xf30
Aug 27 17:37:15 kaveri kernel: [ 703.633733] ? cpu_cgroup_fork+0x120/0x120
Aug 27 17:37:15 kaveri kernel: [ 703.633733] ? wake_up_new_task+0x645/0xb10
Aug 27 17:37:15 kaveri kernel: [ 703.633734] ? ioctl_preallocate+0x1b0/0x1b0
Aug 27 17:37:15 kaveri kernel: [ 703.633734] ? __fget+0x1c8/0x300
Aug 27 17:37:15 kaveri kernel: [ 703.633735] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:37:15 kaveri kernel: [ 703.633735] ? __fget+0x49/0x300
Aug 27 17:37:15 kaveri kernel: [ 703.633736] ? lock_downgrade+0x5e0/0x5e0
Aug 27 17:37:15 kaveri kernel: [ 703.633736] ? __fget+0x1e0/0x300
Aug 27 17:37:15 kaveri kernel: [ 703.633737] ksys_ioctl+0x70/0x80
Aug 27 17:37:15 kaveri kernel: [ 703.633737] __x64_sys_ioctl+0x6f/0xb0
Aug 27 17:37:15 kaveri kernel: [ 703.633737] do_syscall_64+0xa5/0x3f0
Aug 27 17:37:15 kaveri kernel: [ 703.633738] entry_SYSCALL_64_after_hwframe+0x49/0xbe
Aug 27 17:37:15 kaveri kernel: [ 703.633738] RIP: 0033:0x7fafe8f46067
Aug 27 17:37:15 kaveri kernel: [ 703.633739] Code: b3 66 90 48 8b 05 21 7e 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d f1 7d 0c 00 f7 d8 64 89 01 48
Aug 27 17:37:15 kaveri kernel: [ 703.633757] RSP: 002b:00007fffc739b858 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Aug 27 17:37:15 kaveri kernel: [ 703.633758] RAX: ffffffffffffffda RBX: 00007fffc739b940 RCX: 00007fafe8f46067
Aug 27 17:37:15 kaveri kernel: [ 703.633759] RDX: 00007fffc739b8b0 RSI: 00000000c0206440 RDI: 0000000000000006
Aug 27 17:37:15 kaveri kernel: [ 703.633759] RBP: 00007fffc739b880 R08: 000055919dcce650 R09: 0000000000000004
Aug 27 17:37:15 kaveri kernel: [ 703.633760] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fffc739b8b0
Aug 27 17:37:15 kaveri kernel: [ 703.633761] R13: 00000000c0206440 R14: 0000000000000006 R15: 000055919dcce650
Aug 27 17:37:15 kaveri kernel: [ 703.633763] NMI backtrace for cpu 10
Aug 27 17:37:15 kaveri kernel: [ 703.633863] CPU: 10 PID: 5853 Comm: shader_runner Tainted: G D W OE 4.18.0-rc1+ #111
Aug 27 17:37:15 kaveri kernel: [ 703.633865] Hardware name: Micro-Star International Co., Ltd. MS-7A34/B350 TOMAHAWK (MS-7A34), BIOS 1.80 09/13/2017
Aug 27 17:37:15 kaveri kernel: [ 703.633867] Call Trace:
Aug 27 17:37:15 kaveri kernel: [ 703.633870] <IRQ>
Aug 27 17:37:15 kaveri kernel: [ 703.633875] dump_stack+0x9a/0xeb
Aug 27 17:37:15 kaveri kernel: [ 703.633880] nmi_cpu_backtrace+0x126/0x140
Aug 27 17:37:15 kaveri kernel: [ 703.633884] ? lapic_can_unplug_cpu+0xa0/0xa0
Aug 27 17:37:15 kaveri kernel: [ 703.633888] nmi_trigger_cpumask_backtrace+0xb9/0xf0
Aug 27 17:37:15 kaveri kernel: [ 703.633892] rcu_dump_cpu_stacks+0x18b/0x1d9
Aug 27 17:37:15 kaveri kernel: [ 703.633897] rcu_check_callbacks+0x1026/0x1e50
Aug 27 17:37:15 kaveri kernel: [ 703.633905] update_process_times+0x28/0x50
Aug 27 17:37:15 kaveri kernel: [ 703.633908] tick_sched_handle+0x73/0x160
Aug 27 17:37:15 kaveri kernel: [ 703.633912] tick_sched_timer+0x37/0xf0
Aug 27 17:37:15 kaveri kernel: [ 703.633915] ? tick_sched_do_timer+0x140/0x140
Aug 27 17:37:15 kaveri kernel: [ 703.633918] __hrtimer_run_queues+0x291/0xa20
Aug 27 17:37:15 kaveri kernel: [ 703.633924] ? hrtimer_cancel+0x20/0x20
Aug 27 17:37:15 kaveri kernel: [ 703.633927] ? ktime_get_update_offsets_now+0xed/0x2d0
Aug 27 17:37:15 kaveri kernel: [ 703.633932] hrtimer_interrupt+0x29a/0x770
Aug 27 17:37:15 kaveri kernel: [ 703.633937] ? rcu_nmi_enter+0x60/0x110
Aug 27 17:37:15 kaveri kernel: [ 703.633941] smp_apic_timer_interrupt+0xd5/0x490
Aug 27 17:37:15 kaveri kernel: [ 703.633945] apic_timer_interrupt+0xf/0x20
Aug 27 17:37:15 kaveri kernel: [ 703.633948] </IRQ>
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <9528d248-f784-f5c8-28f2-12f694491cfe-otUistvHUpPR7s880joybQ@public.gmane.org>
@ 2018-08-28 17:03 ` Michel Dänzer
[not found] ` <0e7db3c6-0feb-edba-fb7b-58e9d69f3859-otUistvHUpPR7s880joybQ@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Michel Dänzer @ 2018-08-28 17:03 UTC (permalink / raw)
To: Huang Rui; +Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>
> Hi Ray,
>
>
> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>> The new bulk moving functionality is ready, the overhead of moving PD/PT bos to
>> LRU is fixed. So move them on LRU again.
>>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index db1f28a..d195a3d 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
>> struct amdgpu_vm_bo_base,
>> vm_status);
>> bo_base->moved = false;
>> - list_del_init(&bo_base->vm_status);
>> + list_move(&bo_base->vm_status, &vm->idle);
>>
>> bo = bo_base->bo->parent;
>> if (!bo)
>>
>
> Since this change, I'm getting various badness when running piglit using
> radeonsi on Bonaire, see the attached dmesg excerpt.
>
> Reverting just this change on top of current amd-staging-drm-next avoids
> the problem.
>
> Looks like some list manipulation isn't sufficiently protected against
> concurrent execution?
KASAN pointed me to one issue:
https://patchwork.freedesktop.org/patch/246212/
However, this doesn't fully fix the problem.
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
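A note on the one-line change being bisected here: list_del_init() unlinks
an entry and re-initializes it to point at itself, so the entry ends up on
no list; list_move() unlinks it and re-adds it at the head of another list.
Both helpers rewrite the next/prev pointers of neighbouring entries and take
no lock themselves, so every context touching the same list must serialize
externally, which is exactly the protection Michel is asking about. A
minimal sketch with simplified stand-in types; the status_lock is
hypothetical (per the discussion further down the thread, only the moved
list had a dedicated spinlock in amdgpu at the time):

  #include <linux/list.h>
  #include <linux/spinlock.h>

  struct vm_sketch {
          spinlock_t status_lock;            /* hypothetical lock */
          struct list_head idle;
  };

  struct bo_base_sketch {
          struct list_head vm_status;
  };

  /* Before the patch: drop the BO from whatever status list it is on.
   * Afterwards vm_status points at itself, so re-adding it is safe. */
  static void drop_status(struct bo_base_sketch *bo_base)
  {
          list_del_init(&bo_base->vm_status);
  }

  /* After the patch: unlink and re-add at the head of vm->idle in one
   * step. Unless all users of vm->idle serialize on the same lock,
   * concurrent list_move() calls can tear the next/prev pointers. */
  static void park_on_idle(struct vm_sketch *vm,
                           struct bo_base_sketch *bo_base)
  {
          spin_lock(&vm->status_lock);
          list_move(&bo_base->vm_status, &vm->idle);
          spin_unlock(&vm->status_lock);
  }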
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <0e7db3c6-0feb-edba-fb7b-58e9d69f3859-otUistvHUpPR7s880joybQ@public.gmane.org>
@ 2018-08-29 7:52 ` Michel Dänzer
[not found] ` <33a2fd23-0173-7faa-2927-bebf10929c58-otUistvHUpPR7s880joybQ@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Michel Dänzer @ 2018-08-29 7:52 UTC (permalink / raw)
To: Huang Rui; +Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 2018-08-28 7:03 p.m., Michel Dänzer wrote:
> On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>>> The new bulk moving functionality is ready, the overhead of moving PD/PT bos to
>>> LRU is fixed. So move them on LRU again.
>>>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index db1f28a..d195a3d 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
>>> struct amdgpu_vm_bo_base,
>>> vm_status);
>>> bo_base->moved = false;
>>> - list_del_init(&bo_base->vm_status);
>>> + list_move(&bo_base->vm_status, &vm->idle);
>>>
>>> bo = bo_base->bo->parent;
>>> if (!bo)
>>>
>>
>> Since this change, I'm getting various badness when running piglit using
>> radeonsi on Bonaire, see the attached dmesg excerpt.
>>
>> Reverting just this change on top of current amd-staging-drm-next avoids
>> the problem.
>>
>> Looks like some list manipulation isn't sufficiently protected against
>> concurrent execution?
>
> KASAN pointed me to one issue:
> https://patchwork.freedesktop.org/patch/246212/
>
> However, this doesn't fully fix the problem.
Ray, any ideas yet for solving this? If not, let's revert this change
for now.
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <33a2fd23-0173-7faa-2927-bebf10929c58-otUistvHUpPR7s880joybQ@public.gmane.org>
@ 2018-08-29 8:57 ` Christian König
[not found] ` <37d1ce1e-52e6-8c10-0dc4-8482f37c6803-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Christian König @ 2018-08-29 8:57 UTC (permalink / raw)
To: Michel Dänzer, Huang Rui; +Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 29.08.2018 09:52, Michel Dänzer wrote:
> On 2018-08-28 7:03 p.m., Michel Dänzer wrote:
>> On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>>> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>>>> The new bulk moving functionality is ready, the overhead of moving PD/PT bos to
>>>> LRU is fixed. So move them on LRU again.
>>>>
>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>>>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>>>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> index db1f28a..d195a3d 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct amdgpu_device *adev,
>>>> struct amdgpu_vm_bo_base,
>>>> vm_status);
>>>> bo_base->moved = false;
>>>> - list_del_init(&bo_base->vm_status);
>>>> + list_move(&bo_base->vm_status, &vm->idle);
>>>>
>>>> bo = bo_base->bo->parent;
>>>> if (!bo)
>>>>
>>> Since this change, I'm getting various badness when running piglit using
>>> radeonsi on Bonaire, see the attached dmesg excerpt.
>>>
>>> Reverting just this change on top of current amd-staging-drm-next avoids
>>> the problem.
>>>
>>> Looks like some list manipulation isn't sufficiently protected against
>>> concurrent execution?
>> KASAN pointed me to one issue:
>> https://patchwork.freedesktop.org/patch/246212/
>>
>> However, this doesn't fully fix the problem.
> Ray, any ideas yet for solving this? If not, let's revert this change
> for now.
I've gone over this multiple times now as well, but can't find anything
obvious wrong either.
If we don't have any more ideas I would say revert it for now and try to
debug it further.
BTW: Any idea how to force the issue?
Christian.
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <37d1ce1e-52e6-8c10-0dc4-8482f37c6803-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2018-08-29 9:00 ` Michel Dänzer
2018-08-29 14:51 ` Michel Dänzer
1 sibling, 0 replies; 23+ messages in thread
From: Michel Dänzer @ 2018-08-29 9:00 UTC (permalink / raw)
To: christian.koenig-5C7GfCeVMHo, Huang Rui
Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 2018-08-29 10:57 a.m., Christian König wrote:
> Am 29.08.2018 um 09:52 schrieb Michel Dänzer:
>> On 2018-08-28 7:03 p.m., Michel Dänzer wrote:
>>> On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>>>> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>>>>> The new bulk moving functionality is ready, the overhead of moving
>>>>> PD/PT bos to
>>>>> LRU is fixed. So move them on LRU again.
>>>>>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>>>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>>>>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>>>>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>>>>> ---
>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index db1f28a..d195a3d 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct
>>>>> amdgpu_device *adev,
>>>>> struct amdgpu_vm_bo_base,
>>>>> vm_status);
>>>>> bo_base->moved = false;
>>>>> - list_del_init(&bo_base->vm_status);
>>>>> + list_move(&bo_base->vm_status, &vm->idle);
>>>>> bo = bo_base->bo->parent;
>>>>> if (!bo)
>>>>>
>>>> Since this change, I'm getting various badness when running piglit
>>>> using
>>>> radeonsi on Bonaire, see the attached dmesg excerpt.
>>>>
>>>> Reverting just this change on top of current amd-staging-drm-next
>>>> avoids
>>>> the problem.
>>>>
>>>> Looks like some list manipulation isn't sufficiently protected against
>>>> concurrent execution?
>>> KASAN pointed me to one issue:
>>> https://patchwork.freedesktop.org/patch/246212/
>>>
>>> However, this doesn't fully fix the problem.
>> Ray, any ideas yet for solving this? If not, let's revert this change
>> for now.
>
> I've gone over this multiple times now as well, but can't find anything
> obvious wrong either.
Thanks for looking into it.
> If we don't have any more ideas I would say revert it for now and try to
> debug it further.
Yep.
> BTW: Any idea how to force the issue?
Not specifically. It happens reliably and pretty quickly for me when
running the piglit gpu profile.
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <37d1ce1e-52e6-8c10-0dc4-8482f37c6803-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2018-08-29 9:00 ` Michel Dänzer
@ 2018-08-29 14:51 ` Michel Dänzer
[not found] ` <b040c891-fcb8-b9bb-e7ac-beffdc130093-otUistvHUpPR7s880joybQ@public.gmane.org>
1 sibling, 1 reply; 23+ messages in thread
From: Michel Dänzer @ 2018-08-29 14:51 UTC (permalink / raw)
To: christian.koenig-5C7GfCeVMHo, Huang Rui
Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 2018-08-29 10:57 a.m., Christian König wrote:
> Am 29.08.2018 um 09:52 schrieb Michel Dänzer:
>> On 2018-08-28 7:03 p.m., Michel Dänzer wrote:
>>> On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>>>> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>>>>> The new bulk moving functionality is ready, the overhead of moving
>>>>> PD/PT bos to
>>>>> LRU is fixed. So move them on LRU again.
>>>>>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>>>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>>>>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>>>>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>>>>> ---
>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> index db1f28a..d195a3d 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct
>>>>> amdgpu_device *adev,
>>>>> struct amdgpu_vm_bo_base,
>>>>> vm_status);
>>>>> bo_base->moved = false;
>>>>> - list_del_init(&bo_base->vm_status);
>>>>> + list_move(&bo_base->vm_status, &vm->idle);
>>>>> bo = bo_base->bo->parent;
>>>>> if (!bo)
>>>>>
>>>> Since this change, I'm getting various badness when running piglit
>>>> using
>>>> radeonsi on Bonaire, see the attached dmesg excerpt.
>>>>
>>>> Reverting just this change on top of current amd-staging-drm-next
>>>> avoids
>>>> the problem.
>>>>
>>>> Looks like some list manipulation isn't sufficiently protected against
>>>> concurrent execution?
>>> KASAN pointed me to one issue:
>>> https://patchwork.freedesktop.org/patch/246212/
>>>
>>> However, this doesn't fully fix the problem.
>> Ray, any ideas yet for solving this? If not, let's revert this change
>> for now.
>
> I've gone over this multiple times now as well, but can't find anything
> obvious wrong either.
After looking at the code, one question: Why does vm->moved need a
spinlock, but not vm->idle? What is protecting against concurrent access
to the latter?
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
* Re: [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again
[not found] ` <b040c891-fcb8-b9bb-e7ac-beffdc130093-otUistvHUpPR7s880joybQ@public.gmane.org>
@ 2018-08-29 15:00 ` Christian König
0 siblings, 0 replies; 23+ messages in thread
From: Christian König @ 2018-08-29 15:00 UTC (permalink / raw)
To: Michel Dänzer, christian.koenig-5C7GfCeVMHo, Huang Rui
Cc: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
On 29.08.2018 16:51, Michel Dänzer wrote:
> On 2018-08-29 10:57 a.m., Christian König wrote:
>> Am 29.08.2018 um 09:52 schrieb Michel Dänzer:
>>> On 2018-08-28 7:03 p.m., Michel Dänzer wrote:
>>>> On 2018-08-28 11:14 a.m., Michel Dänzer wrote:
>>>>> On 2018-08-22 9:52 a.m., Huang Rui wrote:
>>>>>> The new bulk moving functionality is ready, the overhead of moving
>>>>>> PD/PT bos to
>>>>>> LRU is fixed. So move them on LRU again.
>>>>>>
>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>> Tested-by: Mike Lothian <mike@fireburn.co.uk>
>>>>>> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
>>>>>> Acked-by: Chunming Zhou <david1.zhou@amd.com>
>>>>>> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 2 +-
>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> index db1f28a..d195a3d 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> @@ -1107,7 +1107,7 @@ int amdgpu_vm_update_directories(struct
>>>>>> amdgpu_device *adev,
>>>>>> struct amdgpu_vm_bo_base,
>>>>>> vm_status);
>>>>>> bo_base->moved = false;
>>>>>> - list_del_init(&bo_base->vm_status);
>>>>>> + list_move(&bo_base->vm_status, &vm->idle);
>>>>>> bo = bo_base->bo->parent;
>>>>>> if (!bo)
>>>>>>
>>>>> Since this change, I'm getting various badness when running piglit
>>>>> using
>>>>> radeonsi on Bonaire, see the attached dmesg excerpt.
>>>>>
>>>>> Reverting just this change on top of current amd-staging-drm-next
>>>>> avoids
>>>>> the problem.
>>>>>
>>>>> Looks like some list manipulation isn't sufficiently protected against
>>>>> concurrent execution?
>>>> KASAN pointed me to one issue:
>>>> https://patchwork.freedesktop.org/patch/246212/
>>>>
>>>> However, this doesn't fully fix the problem.
>>> Ray, any ideas yet for solving this? If not, let's revert this change
>>> for now.
>> I've gone over this multiple times now as well, but can't find anything
>> obvious wrong either.
> After looking at the code, one question: Why does vm->moved need a
> spinlock, but not vm->idle? What is protecting against concurrent access
> to the latter?
The moved state is used by both normal and per-VM BOs, i.e. BOs with
different reservation objects.
All other states are only used by per-VM BOs or PDs/PTs, so we only put
BOs on those lists while the reservation object of the root BO is locked.
We could probably split the moved state into two separate lists to avoid
taking that lock even more often.
Christian.
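A minimal sketch of the two protection schemes described above; the field
names follow the amdgpu/ttm code of that era (vm->moved_lock, the root PD's
reservation object) but should be read as illustrative rather than as the
exact committed code:

  /* vm->moved is shared with BOs that have their own reservation
   * objects, so a dedicated spinlock serializes the list manipulation: */
  spin_lock(&vm->moved_lock);
  list_move(&bo_base->vm_status, &vm->moved);
  spin_unlock(&vm->moved_lock);

  /* vm->idle and the other per-VM states are only touched while the
   * root PD's reservation object is held, which already excludes
   * concurrent users, hence no extra spinlock: */
  lockdep_assert_held(&vm->root.base.bo->tbo.resv->lock.base);
  list_move(&bo_base->vm_status, &vm->idle);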
* Re: [PATCH v5 0/5] drm/ttm, amdgpu: Introduce LRU bulk move functionality
2018-08-22 8:43 ` Huang Rui
@ 2018-09-02 8:12 ` Mike Lothian
[not found] ` <CAHbf0-GsMRx9uZp=FRMf947-BNocaCegiP8W3+w65tOhykOpvg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 23+ messages in thread
From: Mike Lothian @ 2018-09-02 8:12 UTC (permalink / raw)
To: Huang Rui
Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
Koenig, Christian,
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
Hi
Is there an updated series? These no longer apply for me
Thanks
Mike
On Wed, 22 Aug 2018 at 09:42 Huang Rui <ray.huang-5C7GfCeVMHo@public.gmane.org> wrote:
> On Wed, Aug 22, 2018 at 04:24:02PM +0800, Christian König wrote:
> > Please commit patches #1, #2 and #3, doesn't make much sense to send
> > them out even more often.
> >
> > Jerry's comments on patch #4 sound valid to me as well, but with those
> > minor issues fixes/commented I think we can commit it.
> >
> > Thanks for taking care of this,
> > Christian.
>
> OK. Thanks to your time.
>
> Thanks,
> Ray
>
> >
> > On 22.08.2018 09:52, Huang Rui wrote:
> > > The idea and proposal is originally from Christian, and I continue to work to
> > > deliver it.
> > >
> > > Background:
> > > amdgpu driver will move all PD/PT and PerVM BOs into idle list. Then move all of
> > > them on the end of LRU list one by one. Thus, that cause so many BOs moved to
> > > the end of the LRU, and impact performance seriously.
> > >
> > > Then Christian provided a workaround to not move PD/PT BOs on LRU with below
> > > patch:
> > > Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
> > > validating VM PTs")
> > >
> > > However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
> > > instead of one by one.
> > >
> > > Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> > > validated we move all BOs together to the end of the LRU without dropping the
> > > lock for the LRU.
> > >
> > > While doing so we note the beginning and end of this block in the LRU list.
> > >
> > > Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> > > we don't move every BO one by one, but instead cut the LRU list into pieces so
> > > that we bulk move everything to the end in just one operation.
> > >
> > > Test data:
> > > +--------------+-----------------+-----------+---------------------------------------+
> > > |              |The Talos        |Clpeak(OCL)|BusSpeedReadback(OCL)                  |
> > > |              |Principle(Vulkan)|           |                                       |
> > > +------------------------------------------------------------------------------------+
> > > |              |                 |           |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> > > | Original     |  147.7 FPS      |  76.86 us |0.307 ms(8K) 0.310 ms(16K)             |
> > > +------------------------------------------------------------------------------------+
> > > | Orignial + WA|                 |           |0.254 ms(1K) 0.241 ms(2K)              |
> > > |(don't move   |  162.1 FPS      |  42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> > > |PT BOs on LRU)|                 |           |                                       |
> > > +------------------------------------------------------------------------------------+
> > > | Bulk move    |  163.1 FPS      |  40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> > > |              |                 |           |0.214 ms(8K) 0.225 ms(16K)             |
> > > +--------------+-----------------+-----------+---------------------------------------+
> > >
> > > After test them with above three benchmarks include vulkan and opencl. We can
> > > see the visible improvement than original, and even better than original with
> > > workaround.
> > >
> > > Changes from V1 -> V2:
> > > - Fix to missed the BOs in relocated/moved that should be also moved to the end
> > >   of LRU.
> > >
> > > Changes from V2 -> V3:
> > > - Remove unused parameter and use list_for_each_entry instead of the one with
> > >   save entry.
> > >
> > > Changes from V3 -> V4:
> > > - Move the amdgpu_vm_move_to_lru_tail after command submission, at that time,
> > >   all bo will be back on idle list.
> > >
> > > Changes from V4 -> V5:
> > > - Remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instread of
> > >   validated, and move ttm_bo_bulk_move_lru_tail() also into
> > >   amdgpu_vm_move_to_lru_tail().
> > >
> > > Thanks,
> > > Ray
> > >
> > > Christian König (2):
> > >   drm/ttm: add helper structures for bulk moves on lru list
> > >   drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
> > >
> > > Huang Rui (3):
> > >   drm/ttm: add bulk move function on LRU
> > >   drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
> > >   drm/amdgpu: move PD/PT bos on LRU again
> > >
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 10 +++++
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 68 +++++++++++++++++++----------
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 ++++-
> > >  drivers/gpu/drm/ttm/ttm_bo.c           | 78 +++++++++++++++++++++++++++++++++-
> > >  include/drm/ttm/ttm_bo_api.h           | 16 ++++++-
> > >  include/drm/ttm/ttm_bo_driver.h        | 28 ++++++++++++
> > >  6 files changed, 186 insertions(+), 25 deletions(-)
> > >
> >
> _______________________________________________
> amd-gfx mailing list
> amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>
* Re: [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality
[not found] ` <CAHbf0-GsMRx9uZp=FRMf947-BNocaCegiP8W3+w65tOhykOpvg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2018-09-02 15:11 ` Koenig, Christian
0 siblings, 0 replies; 23+ messages in thread
From: Koenig, Christian @ 2018-09-02 15:11 UTC (permalink / raw)
To: Mike Lothian
Cc: Huang, Ray,
dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
That one is already committed to amd-staging-drm-next.
But I've fixed a few bugs with that just yesterday, not sure if the public copy of amd-staging-drm-next is already up to date.
Christian.
On 02.09.2018 10:12, Mike Lothian <mike-4+n8WJKc9ve9FHfhHBbuYA@public.gmane.org> wrote:
Hi
Is there an updated series? These no longer apply for me
Thanks
Mike
On Wed, 22 Aug 2018 at 09:42 Huang Rui <ray.huang-5C7GfCeVMHo@public.gmane.org> wrote:
On Wed, Aug 22, 2018 at 04:24:02PM +0800, Christian König wrote:
> Please commit patches #1, #2 and #3, doesn't make much sense to send
> them out even more often.
>
> Jerry's comments on patch #4 sound valid to me as well, but with those
> minor issues fixes/commented I think we can commit it.
>
> Thanks for taking care of this,
> Christian.
OK. Thanks to your time.
Thanks,
Ray
>
> On 22.08.2018 09:52, Huang Rui wrote:
> > The idea and proposal is originally from Christian, and I continue to work to
> > deliver it.
> >
> > Background:
> > amdgpu driver will move all PD/PT and PerVM BOs into idle list. Then move all of
> > them on the end of LRU list one by one. Thus, that cause so many BOs moved to
> > the end of the LRU, and impact performance seriously.
> >
> > Then Christian provided a workaround to not move PD/PT BOs on LRU with below
> > patch:
> > Commit 0bbf32026cf5ba41e9922b30e26e1bed1ecd38ae ("drm/amdgpu: band aid
> > validating VM PTs")
> >
> > However, the final solution should bulk move all PD/PT and PerVM BOs on the LRU
> > instead of one by one.
> >
> > Whenever amdgpu_vm_validate_pt_bos() is called and we have BOs which need to be
> > validated we move all BOs together to the end of the LRU without dropping the
> > lock for the LRU.
> >
> > While doing so we note the beginning and end of this block in the LRU list.
> >
> > Now when amdgpu_vm_validate_pt_bos() is called and we don't have anything to do,
> > we don't move every BO one by one, but instead cut the LRU list into pieces so
> > that we bulk move everything to the end in just one operation.
> >
> > Test data:
> > +--------------+-----------------+-----------+---------------------------------------+
> > | |The Talos |Clpeak(OCL)|BusSpeedReadback(OCL) |
> > | |Principle(Vulkan)| | |
> > +------------------------------------------------------------------------------------+
> > | | | |0.319 ms(1k) 0.314 ms(2K) 0.308 ms(4K) |
> > | Original | 147.7 FPS | 76.86 us |0.307 ms(8K) 0.310 ms(16K) |
> > +------------------------------------------------------------------------------------+
> > | Orignial + WA| | |0.254 ms(1K) 0.241 ms(2K) |
> > |(don't move | 162.1 FPS | 42.15 us |0.230 ms(4K) 0.223 ms(8K) 0.204 ms(16K)|
> > |PT BOs on LRU)| | | |
> > +------------------------------------------------------------------------------------+
> > | Bulk move | 163.1 FPS | 40.52 us |0.244 ms(1K) 0.252 ms(2K) 0.213 ms(4K) |
> > | | | |0.214 ms(8K) 0.225 ms(16K) |
> > +--------------+-----------------+-----------+---------------------------------------+
> >
> > After test them with above three benchmarks include vulkan and opencl. We can
> > see the visible improvement than original, and even better than original with
> > workaround.
> >
> > Changes from V1 -> V2:
> > - Fix to missed the BOs in relocated/moved that should be also moved to the end
> > of LRU.
> >
> > Changes from V2 -> V3:
> > - Remove unused parameter and use list_for_each_entry instead of the one with
> > save entry.
> >
> > Changes from V3 -> V4:
> > - Move the amdgpu_vm_move_to_lru_tail after command submission, at that time,
> > all bo will be back on idle list.
> >
> > Changes from V4 -> V5:
> > - Remove amdgpu_vm_move_to_lru_tail_by_list(), use bulk_moveable instread of
> > validated, and move ttm_bo_bulk_move_lru_tail() also into
> > amdgpu_vm_move_to_lru_tail().
> >
> > Thanks,
> > Ray
> >
> > Christian König (2):
> > drm/ttm: add helper structures for bulk moves on lru list
> > drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves
> >
> > Huang Rui (3):
> > drm/ttm: add bulk move function on LRU
> > drm/amdgpu: use bulk moves for efficient VM LRU handling (v5)
> > drm/amdgpu: move PD/PT bos on LRU again
> >
> > drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 10 +++++
> > drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 68 +++++++++++++++++++----------
> > drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 11 ++++-
> > drivers/gpu/drm/ttm/ttm_bo.c | 78 +++++++++++++++++++++++++++++++++-
> > include/drm/ttm/ttm_bo_api.h | 16 ++++++-
> > include/drm/ttm/ttm_bo_driver.h | 28 ++++++++++++
> > 6 files changed, 186 insertions(+), 25 deletions(-)
> >
>
_______________________________________________
amd-gfx mailing list
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
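As a reference for the "cut the LRU list into pieces" step the quoted cover
letter describes: the bulk move boils down to splicing an inclusive block
[first, last] of entries to the tail of a list in constant time, resembling
the list_bulk_move_tail() helper in the mainline kernel. The sketch below is
reconstructed from that description and is not the committed code; see the
actual patches for the authoritative version:

  /* Move the block of entries [first, last] (inclusive) to the tail
   * of @head in O(1), instead of moving each entry individually. */
  static inline void bulk_move_tail_sketch(struct list_head *head,
                                           struct list_head *first,
                                           struct list_head *last)
  {
          /* Unlink the whole block from its current position... */
          first->prev->next = last->next;
          last->next->prev = first->prev;

          /* ...and splice it in just before @head, i.e. at the tail. */
          head->prev->next = first;
          first->prev = head->prev;
          last->next = head;
          head->prev = last;
  }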
Thread overview: 23+ messages
2018-08-22 7:52 [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Huang Rui
2018-08-22 7:52 ` [PATCH v5 1/5] drm/ttm: add helper structures for bulk moves on lru list Huang Rui
2018-08-22 7:52 ` [PATCH v5 2/5] drm/ttm: revise ttm_bo_move_to_lru_tail to support bulk moves Huang Rui
[not found] ` <1534924375-5837-1-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
2018-08-22 7:52 ` [PATCH v5 3/5] drm/ttm: add bulk move function on LRU Huang Rui
2018-08-22 7:52 ` [PATCH v5 4/5] drm/amdgpu: use bulk moves for efficient VM LRU handling (v5) Huang Rui
[not found] ` <1534924375-5837-5-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
2018-08-22 8:07 ` Zhang, Jerry (Junwei)
[not found] ` <5B7D19B8.2060307-5C7GfCeVMHo@public.gmane.org>
2018-08-22 8:33 ` Huang Rui
2018-08-22 8:38 ` Huang Rui
2018-08-22 8:45 ` Zhang, Jerry (Junwei)
[not found] ` <5B7D22A7.4090306-5C7GfCeVMHo@public.gmane.org>
2018-08-22 8:49 ` Huang Rui
2018-08-22 8:51 ` Zhang, Jerry (Junwei)
2018-08-22 7:52 ` [PATCH v5 5/5] drm/amdgpu: move PD/PT bos on LRU again Huang Rui
[not found] ` <1534924375-5837-6-git-send-email-ray.huang-5C7GfCeVMHo@public.gmane.org>
2018-08-28 9:14 ` Michel Dänzer
[not found] ` <9528d248-f784-f5c8-28f2-12f694491cfe-otUistvHUpPR7s880joybQ@public.gmane.org>
2018-08-28 17:03 ` Michel Dänzer
[not found] ` <0e7db3c6-0feb-edba-fb7b-58e9d69f3859-otUistvHUpPR7s880joybQ@public.gmane.org>
2018-08-29 7:52 ` Michel Dänzer
[not found] ` <33a2fd23-0173-7faa-2927-bebf10929c58-otUistvHUpPR7s880joybQ@public.gmane.org>
2018-08-29 8:57 ` Christian König
[not found] ` <37d1ce1e-52e6-8c10-0dc4-8482f37c6803-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2018-08-29 9:00 ` Michel Dänzer
2018-08-29 14:51 ` Michel Dänzer
[not found] ` <b040c891-fcb8-b9bb-e7ac-beffdc130093-otUistvHUpPR7s880joybQ@public.gmane.org>
2018-08-29 15:00 ` Christian König
2018-08-22 8:24 ` [PATCH v5 0/5] drm/ttm,amdgpu: Introduce LRU bulk move functionality Christian König
[not found] ` <51ebd226-3290-5ea5-e272-0d566a119aca-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2018-08-22 8:43 ` Huang Rui
2018-09-02 8:12 ` [PATCH v5 0/5] drm/ttm, amdgpu: " Mike Lothian
[not found] ` <CAHbf0-GsMRx9uZp=FRMf947-BNocaCegiP8W3+w65tOhykOpvg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2018-09-02 15:11 ` [PATCH v5 0/5] drm/ttm,amdgpu: " Koenig, Christian