From: Zhu Lingshan <lingshan.zhu@amd.com>
To: <felix.kuehling@amd.com>, <alexander.deucher@amd.com>
Cc: <ray.huang@amd.com>, <amd-gfx@lists.freedesktop.org>,
Zhu Lingshan <lingshan.zhu@amd.com>
Subject: [PATCH V6 15/18] amdkfd: record kfd context id in amdkfd_fence
Date: Wed, 22 Oct 2025 15:30:40 +0800
Message-ID: <20251022073043.13009-16-lingshan.zhu@amd.com>
In-Reply-To: <20251022073043.13009-1-lingshan.zhu@amd.com>
Record the context id of the corresponding kfd process in
struct amdgpu_amdkfd_fence, so that a fence consumer can identify
which kfd context the fence belongs to.
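The idea behind the change can be illustrated with a minimal user-space sketch. The struct and helper names below are simplified stand-ins for the kernel definitions (the real ones live in amdgpu_amdkfd.h and include/linux/dma-fence.h), and fence_owner_context_id() is a hypothetical consumer, not a function from this patch: the fence now stores the kfd context id at creation time, and anyone holding only the embedded dma_fence can recover it via container_of().

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified, user-space stand-ins for the kernel types. */
struct dma_fence {
	uint64_t context;
	uint64_t seqno;
};

struct amdgpu_amdkfd_fence {
	struct dma_fence base;
	uint16_t context_id;	/* kfd process context id, added by this patch */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Mirrors the amended amdgpu_amdkfd_fence_create() signature: the
 * caller now passes the kfd context id, which is stored on the fence. */
static struct amdgpu_amdkfd_fence *fence_create(uint64_t context,
						uint16_t context_id)
{
	struct amdgpu_amdkfd_fence *fence = calloc(1, sizeof(*fence));

	if (!fence)
		return NULL;
	fence->base.context = context;
	fence->context_id = context_id;
	return fence;
}

/* Hypothetical consumer: given only the embedded dma_fence, recover
 * the owning kfd context id with container_of(). */
static uint16_t fence_owner_context_id(struct dma_fence *f)
{
	return container_of(f, struct amdgpu_amdkfd_fence, base)->context_id;
}
```

This is why the id is recorded on the fence itself rather than looked up elsewhere: code that only sees the generic dma_fence can still attribute it to a kfd context.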
Signed-off-by: Zhu Lingshan <lingshan.zhu@amd.com>
Reviewed-by: Felix Kuehling <felix.kuehling@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h | 4 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 4 ++--
drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 2 +-
4 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
index 28b54d7ee1f5..087e8fe2c077 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
@@ -98,6 +98,7 @@ struct amdgpu_amdkfd_fence {
spinlock_t lock;
char timeline_name[TASK_COMM_LEN];
struct svm_range_bo *svm_bo;
+ uint16_t context_id;
};
struct amdgpu_kfd_dev {
@@ -190,7 +191,8 @@ int amdgpu_queue_mask_bit_to_set_resource_bit(struct amdgpu_device *adev,
struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
struct mm_struct *mm,
- struct svm_range_bo *svm_bo);
+ struct svm_range_bo *svm_bo,
+ u16 context_id);
int amdgpu_amdkfd_drm_client_create(struct amdgpu_device *adev);
#if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
index 1ef758ac5076..4119d0a9235e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_fence.c
@@ -62,7 +62,8 @@ static atomic_t fence_seq = ATOMIC_INIT(0);
struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
struct mm_struct *mm,
- struct svm_range_bo *svm_bo)
+ struct svm_range_bo *svm_bo,
+ u16 context_id)
{
struct amdgpu_amdkfd_fence *fence;
@@ -76,6 +77,7 @@ struct amdgpu_amdkfd_fence *amdgpu_amdkfd_fence_create(u64 context,
get_task_comm(fence->timeline_name, current);
spin_lock_init(&fence->lock);
fence->svm_bo = svm_bo;
+ fence->context_id = context_id;
dma_fence_init(&fence->base, &amdkfd_fence_ops, &fence->lock,
context, atomic_inc_return(&fence_seq));
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
index 722440d62290..20f834336811 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
@@ -1405,7 +1405,7 @@ static int init_kfd_vm(struct amdgpu_vm *vm, void **process_info,
info->eviction_fence =
amdgpu_amdkfd_fence_create(dma_fence_context_alloc(1),
current->mm,
- NULL);
+ NULL, process->context_id);
if (!info->eviction_fence) {
pr_err("Failed to create eviction fence\n");
ret = -ENOMEM;
@@ -3056,7 +3056,7 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *info, struct dma_fence __rcu *
amdgpu_amdkfd_fence_create(
process_info->eviction_fence->base.context,
process_info->eviction_fence->mm,
- NULL);
+ NULL, process_info->context_id);
if (!new_fence) {
pr_err("Failed to create eviction fence\n");
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 9d72411c3379..04582aef1b41 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -585,7 +585,7 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
svm_bo->eviction_fence =
amdgpu_amdkfd_fence_create(dma_fence_context_alloc(1),
mm,
- svm_bo);
+ svm_bo, p->context_id);
mmput(mm);
INIT_WORK(&svm_bo->eviction_work, svm_range_evict_svm_bo_worker);
svm_bo->evicting = 0;
--
2.51.0