From: Nirmoy Das <nirmoy.das@linux.intel.com>
To: "Zeng, Oak" <oak.zeng@intel.com>,
"Das, Nirmoy" <nirmoy.das@intel.com>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3 2/7] drm/xe: Consolidate setting PTE_AE into one place
Date: Mon, 22 Apr 2024 10:18:15 +0200 [thread overview]
Message-ID: <b1aa42d7-c018-44f5-8e30-894392255b36@linux.intel.com> (raw)
In-Reply-To: <SA1PR11MB69915FD38304C44B99E5880C920D2@SA1PR11MB6991.namprd11.prod.outlook.com>
Hi Oak,
On 4/19/2024 8:35 PM, Zeng, Oak wrote:
>
>> -----Original Message-----
>> From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of
>> Nirmoy Das
>> Sent: Monday, April 15, 2024 10:52 AM
>> To: intel-xe@lists.freedesktop.org
>> Cc: Das, Nirmoy <nirmoy.das@intel.com>
>> Subject: [PATCH v3 2/7] drm/xe: Consolidate setting PTE_AE into one place
>>
>> Currently the decision to set PTE_AE is spread between the xe_pt
>> and xe_vm files and there is no reason to keep it that
>> way. Consolidate the logic for better maintainability.
>>
>> Atomics are not expected on userptr memory, so this patch
>> also makes sure PTE_AE is only applied when a buffer object
>> exists.
>>
>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 4 +---
>> drivers/gpu/drm/xe/xe_vm.c | 7 ++++---
>> 2 files changed, 5 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 5b7930f46cf3..7dc13a8bb44f 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -597,7 +597,6 @@ static int
>> xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>> struct xe_vm_pgtable_update *entries, u32 *num_entries)
>> {
>> - struct xe_device *xe = tile_to_xe(tile);
>> struct xe_bo *bo = xe_vma_bo(vma);
>> bool is_devmem = !xe_vma_is_userptr(vma) && bo &&
>> (xe_bo_is_vram(bo) || xe_bo_is_stolen_devmem(bo));
>> @@ -619,8 +618,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma
>> *vma,
>> struct xe_pt *pt = xe_vma_vm(vma)->pt_root[tile->id];
>> int ret;
>>
>> - if ((vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) &&
>> - (is_devmem || !IS_DGFX(xe)))
>> + if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT)
>> xe_walk.default_pte |= XE_USM_PPGTT_PTE_AE;
>>
>> if (is_devmem) {
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 2dbba55e7785..b1dcaa35b6cc 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -806,9 +806,6 @@ static struct xe_vma *xe_vma_create(struct xe_vm
>> *vm,
>> for_each_tile(tile, vm->xe, id)
>> vma->tile_mask |= 0x1 << id;
>>
>> - if (GRAPHICS_VER(vm->xe) >= 20 || vm->xe->info.platform ==
>> XE_PVC)
>> - vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
>> -
>> vma->pat_index = pat_index;
>>
>> if (bo) {
>> @@ -816,6 +813,10 @@ static struct xe_vma *xe_vma_create(struct xe_vm
>> *vm,
>>
>> xe_bo_assert_held(bo);
>>
>> + if (vm->xe->info.has_atomic_enable_pte_bit &&
>> + (xe_bo_is_vram(bo) || !IS_DGFX(vm->xe)))
> This is vma creation time. xe_bo_is_vram() works for device or host allocations, but for shared allocations we can't decide the bo placement at vma creation time, as the bo can be migrated after vma creation. So I think this should be determined somewhere right before the page table programming, at least after bo migration is done.
Thanks for raising this. In that case, I will drop this patch.
>
>
> Also regarding the IS_DGFX... I think on some dgfx platforms we can support device atomics to host memory, for example when the dgpu is connected to the host through CXL. I need to double-check this with HW.
I think CXL should be able to handle atomics even without migration. We
can look into that once we have a mechanism to detect such a platform.
Thanks,
Nirmoy
>
>
> Oak
>
>
>
>
>> + vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
>> +
>> vm_bo = drm_gpuvm_bo_obtain(vma->gpuva.vm, &bo-
>>> ttm.base);
>> if (IS_ERR(vm_bo)) {
>> xe_vma_free(vma);
>> --
>> 2.42.0
Thread overview: 36+ messages
2024-04-15 14:52 [PATCH v3 0/7] Enable device atomics with a VM bind flag Nirmoy Das
2024-04-15 14:52 ` [PATCH v3 1/7] drm/xe: Introduce has_atomic_enable_pte_bit device info Nirmoy Das
2024-04-19 16:06 ` Zeng, Oak
2024-04-15 14:52 ` [PATCH v3 2/7] drm/xe: Consolidate setting PTE_AE into one place Nirmoy Das
2024-04-16 14:33 ` Nirmoy Das
2024-04-19 18:35 ` Zeng, Oak
2024-04-22 8:18 ` Nirmoy Das [this message]
2024-04-15 14:52 ` [PATCH v3 3/7] drm/xe: Add function to check if BO has single placement Nirmoy Das
2024-04-15 14:52 ` [PATCH v3 4/7] drm/xe: Move vm bind bo validation to a helper function Nirmoy Das
2024-04-16 0:55 ` Matthew Brost
2024-04-16 13:32 ` Nirmoy Das
2024-04-19 20:14 ` Zeng, Oak
2024-04-15 14:52 ` [PATCH v3 5/7] drm/xe: Introduce has_device_atomics_on_smem device info Nirmoy Das
2024-04-19 20:24 ` Zeng, Oak
2024-04-15 14:52 ` [PATCH v3 6/7] drm/xe/uapi: Introduce VMA bind flag for device atomics Nirmoy Das
2024-04-19 7:16 ` Lionel Landwerlin
2024-04-22 8:39 ` Nirmoy Das
2024-04-19 21:04 ` Zeng, Oak
2024-04-22 10:12 ` Nirmoy Das
2024-04-22 21:39 ` Zeng, Oak
2024-04-23 12:33 ` Nirmoy Das
2024-04-15 14:52 ` [PATCH v3 7/7] drm/xe/uapi: Add a query flag for has_device_atomics_on_smem Nirmoy Das
2024-04-19 7:08 ` Lionel Landwerlin
2024-04-22 8:53 ` Nirmoy Das
2024-04-19 21:06 ` Zeng, Oak
2024-04-15 21:19 ` ✓ CI.Patch_applied: success for Enable device atomics with a VM bind flag (rev3) Patchwork
2024-04-15 21:19 ` ✓ CI.checkpatch: " Patchwork
2024-04-15 21:21 ` ✓ CI.KUnit: " Patchwork
2024-04-15 21:37 ` ✓ CI.Build: " Patchwork
2024-04-15 21:40 ` ✓ CI.Hooks: " Patchwork
2024-04-15 21:41 ` ✓ CI.checksparse: " Patchwork
2024-04-15 22:22 ` ✗ CI.BAT: failure " Patchwork
2024-04-16 13:46 ` ✓ CI.FULL: success " Patchwork
2024-04-19 7:17 ` [PATCH v3 0/7] Enable device atomics with a VM bind flag Lionel Landwerlin
2024-04-22 10:13 ` Nirmoy Das
2024-04-22 14:50 ` Souza, Jose