From: "Christian König" <christian.koenig@amd.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Zeng, Oak" <oak.zeng@intel.com>,
"Danilo Krummrich" <dakr@redhat.com>,
"Dave Airlie" <airlied@redhat.com>,
"Daniel Vetter" <daniel@ffwll.ch>,
"Felix Kuehling" <felix.kuehling@amd.com>,
"jglisse@redhat.com" <jglisse@redhat.com>
Cc: "Welty, Brian" <brian.welty@intel.com>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Bommu, Krishnaiah" <krishnaiah.bommu@intel.com>,
"Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>,
"Vishwanathapura, Niranjana" <niranjana.vishwanathapura@intel.com>,
"Brost, Matthew" <matthew.brost@intel.com>,
"Gupta, saurabhg" <saurabhg.gupta@intel.com>
Subject: Re: Making drm_gpuvm work across gpu devices
Date: Fri, 1 Mar 2024 08:01:15 +0100
Message-ID: <ef555237-4745-4765-b7e3-093460d7a8e5@amd.com>
In-Reply-To: <7eb835594110980f2e9f061512fd488bbd63fd11.camel@linux.intel.com>

Hi Thomas,

On 2024-02-29 18:12, Thomas Hellström wrote:
> Hi, Christian.
>
> On Thu, 2024-02-29 at 10:41 +0100, Christian König wrote:
>> On 2024-02-28 20:51, Zeng, Oak wrote:
>>> The mail wasn't indented/prefaced correctly; I have manually reformatted it.
>>>
>>> *From:* Christian König <christian.koenig@amd.com>
>>> *Sent:* Tuesday, February 27, 2024 1:54 AM
>>> *To:* Zeng, Oak <oak.zeng@intel.com>; Danilo Krummrich
>>> <dakr@redhat.com>; Dave Airlie <airlied@redhat.com>; Daniel Vetter
>>> <daniel@ffwll.ch>; Felix Kuehling <felix.kuehling@amd.com>;
>>> jglisse@redhat.com
>>> *Cc:* Welty, Brian <brian.welty@intel.com>;
>>> dri-devel@lists.freedesktop.org; intel-xe@lists.freedesktop.org;
>>> Bommu, Krishnaiah <krishnaiah.bommu@intel.com>; Ghimiray, Himal
>>> Prasad
>>> <himal.prasad.ghimiray@intel.com>;
>>> Thomas.Hellstrom@linux.intel.com;
>>> Vishwanathapura, Niranjana <niranjana.vishwanathapura@intel.com>;
>>> Brost, Matthew <matthew.brost@intel.com>; Gupta, saurabhg
>>> <saurabhg.gupta@intel.com>
>>> *Subject:* Re: Making drm_gpuvm work across gpu devices
>>>
>>> Hi Oak,
>>>
>>> On 2024-02-23 21:12, Zeng, Oak wrote:
>>>
>>> Hi Christian,
>>>
>>> I'm going back to this old email to ask a question.
>>>
>>>
>>> Sorry, I totally missed that one.
>>>
>>> Quote from your email:
>>>
>>> “Those ranges can then be used to implement the SVM feature
>>> required for higher level APIs and not something you need at the
>>> UAPI or even inside the low level kernel memory management.”
>>>
>>> “SVM is a high level concept of OpenCL, Cuda, ROCm etc.. This
>>> should not have any influence on the design of the kernel UAPI.”
>>>
>>> There are two categories of SVM:
>>>
>>> 1. Driver SVM allocator: this is implemented in user space, e.g.,
>>> cudaMallocManaged (CUDA), zeMemAllocShared (L0) or clSVMAlloc
>>> (OpenCL). Intel already has gem_create/vm_bind in xekmd, and our
>>> UMD implemented clSVMAlloc and zeMemAllocShared on top of
>>> gem_create/vm_bind. A range A..B of the process address space is
>>> mapped into a range C..D of the GPU address space, exactly as you
>>> said.
>>>
>>> 2. System SVM allocator: this doesn't introduce an extra driver
>>> API for memory allocation. Any valid CPU virtual address can be
>>> used directly and transparently in a GPU program without any extra
>>> driver API call. Quote from kernel Documentation/vm/hmm.rst: “Any
>>> application memory region (private anonymous, shared memory, or
>>> regular file backed memory) can be used by a device transparently”
>>> and “to share the address space by duplicating the CPU page table
>>> in the device page table so the same address points to the same
>>> physical memory for any valid main memory address in the process
>>> address space”. With the system SVM allocator, we don't need that
>>> A..B -> C..D mapping.
>>>
>>> It looks like you were talking about 1). Were you?
>>>
>>>
>>> No, even when you fully mirror the whole address space from a
>>> process into the GPU, you still need to enable this somehow with
>>> an IOCTL.
>>>
>>> And while enabling this you absolutely should specify to which
>>> part of the address space this mirroring applies and where it maps
>>> to.
>>>
>>> *[Zeng, Oak]*
>>>
>>> Let's say we have a hardware platform where both the CPU and the
>>> GPU support a 57-bit virtual address range (used here only as an
>>> example; the statement applies to any address range). How do you
>>> decide “which part of the address space this mirroring applies”
>>> to? You have to mirror the whole address space [0~2^57-1], don't
>>> you? As you designed it, the gigantic vm_bind/mirroring happens at
>>> process initialization time, and at that time you don't know which
>>> part of the address space will be used for the GPU program.
>>> Remember, for the system allocator *any* valid CPU address can be
>>> used by a GPU program. If you add an offset to [0~2^57-1], you get
>>> an address outside the 57-bit address range. Is this a valid
>>> concern?
>>>
>> Well, you can perfectly well mirror on demand. You just need
>> something similar to userfaultfd() for the GPU. This way you don't
>> need to mirror the full address space, but can rather work with
>> large chunks created on demand, let's say 1GiB or something like
>> that.
>
> What we're looking at as the current design is an augmented userptr
> (A..B -> C..D mapping) which is internally sparsely populated in
> chunks. The KMD manages the population using GPU page faults. We
> acknowledge that some parts of this mirror will not have a valid CPU
> mapping; that is, there is no vma, so a GPU page fault that resolves
> to such a mirror address will cause an error. Would you have any
> concerns / objections against such an approach?
Nope, as far as I can see that sounds like a perfectly valid design to me.
Regards,
Christian.
>
> Thanks,
> Thomas
>
>
>
Thread overview: 126+ messages
2024-01-17 22:12 [PATCH 00/23] XeKmd basic SVM support Oak Zeng
2024-01-17 22:12 ` [PATCH 01/23] drm/xe/svm: Add SVM document Oak Zeng
2024-01-17 22:12 ` [PATCH 02/23] drm/xe/svm: Add svm key data structures Oak Zeng
2024-01-17 22:12 ` [PATCH 03/23] drm/xe/svm: create xe svm during vm creation Oak Zeng
2024-01-17 22:12 ` [PATCH 04/23] drm/xe/svm: Trace svm creation Oak Zeng
2024-01-17 22:12 ` [PATCH 05/23] drm/xe/svm: add helper to retrieve svm range from address Oak Zeng
2024-01-17 22:12 ` [PATCH 06/23] drm/xe/svm: Introduce a helper to build sg table from hmm range Oak Zeng
2024-04-05 0:39 ` Jason Gunthorpe
2024-04-05 3:33 ` Zeng, Oak
2024-04-05 12:37 ` Jason Gunthorpe
2024-04-05 16:42 ` Zeng, Oak
2024-04-05 18:02 ` Jason Gunthorpe
2024-04-09 16:45 ` Zeng, Oak
2024-04-09 17:24 ` Jason Gunthorpe
2024-04-23 21:17 ` Zeng, Oak
2024-04-24 2:31 ` Matthew Brost
2024-04-24 13:57 ` Jason Gunthorpe
2024-04-24 16:35 ` Matthew Brost
2024-04-24 16:44 ` Jason Gunthorpe
2024-04-24 16:56 ` Matthew Brost
2024-04-24 17:48 ` Jason Gunthorpe
2024-04-24 13:48 ` Jason Gunthorpe
2024-04-24 23:59 ` Zeng, Oak
2024-04-25 1:05 ` Jason Gunthorpe
2024-04-26 9:55 ` Thomas Hellström
2024-04-26 12:00 ` Jason Gunthorpe
2024-04-26 14:49 ` Thomas Hellström
2024-04-26 16:35 ` Jason Gunthorpe
2024-04-29 8:25 ` Thomas Hellström
2024-04-30 17:30 ` Jason Gunthorpe
2024-04-30 18:57 ` Daniel Vetter
2024-05-01 0:09 ` Jason Gunthorpe
2024-05-02 8:04 ` Daniel Vetter
2024-05-02 9:11 ` Thomas Hellström
2024-05-02 12:46 ` Jason Gunthorpe
2024-05-02 15:01 ` Thomas Hellström
2024-05-02 19:25 ` Zeng, Oak
2024-05-03 13:37 ` Jason Gunthorpe
2024-05-03 14:43 ` Zeng, Oak
2024-05-03 16:28 ` Jason Gunthorpe
2024-05-03 20:29 ` Zeng, Oak
2024-05-04 1:03 ` Dave Airlie
2024-05-06 13:04 ` Daniel Vetter
2024-05-06 23:50 ` Matthew Brost
2024-05-07 11:56 ` Jason Gunthorpe
2024-05-06 13:33 ` Jason Gunthorpe
2024-04-09 17:33 ` Matthew Brost
2024-01-17 22:12 ` [PATCH 07/23] drm/xe/svm: Add helper for binding hmm range to gpu Oak Zeng
2024-01-17 22:12 ` [PATCH 08/23] drm/xe/svm: Add helper to invalidate svm range from GPU Oak Zeng
2024-01-17 22:12 ` [PATCH 09/23] drm/xe/svm: Remap and provide memmap backing for GPU vram Oak Zeng
2024-01-17 22:12 ` [PATCH 10/23] drm/xe/svm: Introduce svm migration function Oak Zeng
2024-01-17 22:12 ` [PATCH 11/23] drm/xe/svm: implement functions to allocate and free device memory Oak Zeng
2024-01-17 22:12 ` [PATCH 12/23] drm/xe/svm: Trace buddy block allocation and free Oak Zeng
2024-01-17 22:12 ` [PATCH 13/23] drm/xe/svm: Handle CPU page fault Oak Zeng
2024-01-17 22:12 ` [PATCH 14/23] drm/xe/svm: trace svm range migration Oak Zeng
2024-01-17 22:12 ` [PATCH 15/23] drm/xe/svm: Implement functions to register and unregister mmu notifier Oak Zeng
2024-01-17 22:12 ` [PATCH 16/23] drm/xe/svm: Implement the mmu notifier range invalidate callback Oak Zeng
2024-01-17 22:12 ` [PATCH 17/23] drm/xe/svm: clean up svm range during process exit Oak Zeng
2024-01-17 22:12 ` [PATCH 18/23] drm/xe/svm: Move a few structures to xe_gt.h Oak Zeng
2024-01-17 22:12 ` [PATCH 19/23] drm/xe/svm: migrate svm range to vram Oak Zeng
2024-01-17 22:12 ` [PATCH 20/23] drm/xe/svm: Populate svm range Oak Zeng
2024-01-17 22:12 ` [PATCH 21/23] drm/xe/svm: GPU page fault support Oak Zeng
2024-01-23 2:06 ` Welty, Brian
2024-01-23 3:09 ` Zeng, Oak
2024-01-23 3:21 ` Making drm_gpuvm work across gpu devices Zeng, Oak
2024-01-23 11:13 ` Christian König
2024-01-23 19:37 ` Zeng, Oak
2024-01-23 20:17 ` Felix Kuehling
2024-01-25 1:39 ` Zeng, Oak
2024-01-23 23:56 ` Danilo Krummrich
2024-01-24 3:57 ` Zeng, Oak
2024-01-24 4:14 ` Zeng, Oak
2024-01-24 6:48 ` Christian König
2024-01-25 22:13 ` Danilo Krummrich
2024-01-24 8:33 ` Christian König
2024-01-25 1:17 ` Zeng, Oak
2024-01-25 1:25 ` David Airlie
2024-01-25 5:25 ` Zeng, Oak
2024-01-26 10:09 ` Christian König
2024-01-26 20:13 ` Zeng, Oak
2024-01-29 10:10 ` Christian König
2024-01-29 20:09 ` Zeng, Oak
2024-01-25 11:00 ` 回复:Making " 周春明(日月)
2024-01-25 17:00 ` Zeng, Oak
2024-01-25 17:15 ` Making " Felix Kuehling
2024-01-25 18:37 ` Zeng, Oak
2024-01-26 13:23 ` Christian König
2024-01-25 16:42 ` Zeng, Oak
2024-01-25 18:32 ` Daniel Vetter
2024-01-25 21:02 ` Zeng, Oak
2024-01-26 8:21 ` Thomas Hellström
2024-01-26 12:52 ` Christian König
2024-01-27 2:21 ` Zeng, Oak
2024-01-29 10:19 ` Christian König
2024-01-30 0:21 ` Zeng, Oak
2024-01-30 8:39 ` Christian König
2024-01-30 22:29 ` Zeng, Oak
2024-01-30 23:12 ` David Airlie
2024-01-31 9:15 ` Daniel Vetter
2024-01-31 20:17 ` Zeng, Oak
2024-01-31 20:59 ` Zeng, Oak
2024-02-01 8:52 ` Christian König
2024-02-29 18:22 ` Zeng, Oak
2024-03-08 4:43 ` Zeng, Oak
2024-03-08 10:07 ` Christian König
2024-01-30 8:43 ` Thomas Hellström
2024-01-29 15:03 ` Felix Kuehling
2024-01-29 15:33 ` Christian König
2024-01-29 16:24 ` Felix Kuehling
2024-01-29 16:28 ` Christian König
2024-01-29 17:52 ` Felix Kuehling
2024-01-29 19:03 ` Christian König
2024-01-29 20:24 ` Felix Kuehling
2024-02-23 20:12 ` Zeng, Oak
2024-02-27 6:54 ` Christian König
2024-02-27 15:58 ` Zeng, Oak
2024-02-28 19:51 ` Zeng, Oak
2024-02-29 9:41 ` Christian König
2024-02-29 16:05 ` Zeng, Oak
2024-02-29 17:12 ` Thomas Hellström
2024-03-01 7:01 ` Christian König [this message]
2024-01-17 22:12 ` [PATCH 22/23] drm/xe/svm: Add DRM_XE_SVM kernel config entry Oak Zeng
2024-01-17 22:12 ` [PATCH 23/23] drm/xe/svm: Add svm memory hints interface Oak Zeng
2024-01-18 2:45 ` ✓ CI.Patch_applied: success for XeKmd basic SVM support Patchwork
2024-01-18 2:46 ` ✗ CI.checkpatch: warning " Patchwork
2024-01-18 2:46 ` ✗ CI.KUnit: failure " Patchwork