From: "Huang, Honglei1" <Honglei1.Huang@amd.com>
To: Simona Vetter <simona.vetter@ffwll.ch>
Cc: Demi Marie Obenour <demiobenour@gmail.com>,
Huang Rui <ray.huang@amd.com>,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org,
Dmitry Osipenko <dmitry.osipenko@collabora.com>,
dri-devel@lists.freedesktop.org,
David Airlie <airlied@redhat.com>,
Gerd Hoffmann <kraxel@redhat.com>,
Gurchetan Singh <gurchetansingh@chromium.org>,
Chia-I Wu <olvaffe@gmail.com>,
Akihiko Odaki <akihiko.odaki@daynix.com>,
Lingshan Zhu <Lingshan.Zhu@amd.com>
Subject: Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
Date: Thu, 6 Feb 2025 19:05:42 +0800
Message-ID: <d2aebe45-92cd-43a8-9766-61a36c4539a8@amd.com>
In-Reply-To: <Z6Hv5esGqHWU2TVe@phenom.ffwll.local>
On 2025/2/4 18:45, Simona Vetter wrote:
> On Mon, Jan 20, 2025 at 05:54:10PM +0800, Huang, Honglei1 wrote:
>> On 2024/12/27 10:02, Huang, Honglei1 wrote:
>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>>
>>>>>> A virtio-gpu userptr is based on an HMM notifier.
>>>>>> It is used to let the host access guest userspace memory
>>>>>> and to notice changes to that memory.
>>>>>> This patch series is at a very early stage:
>>>>>> userspace pages are currently pinned to ensure the host
>>>>>> device memory operations are correct.
>>>>>> The free and unmap operations for userspace memory are
>>>>>> handled by an MMU notifier; this is a simple and basic
>>>>>> SVM feature for this patch series.
>>>>>> The physical PFN update operation is split into
>>>>>> two OPs here. The evicted memory is no longer used,
>>>>>> but is remapped into the host again to achieve the same
>>>>>> effect as hmm_range_fault.
>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>
>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is no mmu
>>>>>   notifier
>>>>>
>>>>> - unpinned userptr where you entirely rely on the mmu notifier and do
>>>>>   not hold any page references or page pins at all, for full SVM
>>>>>   integration. This should use hmm_range_fault ideally, since that's
>>>>>   the version that doesn't ever grab any page reference pins.
>>>>>
>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>> doing here). In much older kernels there was some justification for them,
>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>> as clean design choices imo.
>>>>>
>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>> you?
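
For illustration, a minimal sketch of the fully pinned variant (no MMU
notifier at all). This is not code from this series; userptr_pin() and
everything around it are hypothetical names:

#include <linux/mm.h>
#include <linux/slab.h>

static int userptr_pin(unsigned long start, int npages,
                       struct page ***pages_out)
{
        struct page **pages;
        int pinned;

        pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /*
         * FOLL_LONGTERM makes GUP migrate the pages out of CMA and
         * ZONE_MOVABLE first, so the pin is safe to hold for the
         * lifetime of the object.
         */
        pinned = pin_user_pages_fast(start, npages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
        if (pinned != npages) {
                if (pinned > 0)
                        unpin_user_pages(pages, pinned);
                kvfree(pages);
                return pinned < 0 ? pinned : -EFAULT;
        }

        *pages_out = pages;
        return 0;
}

Teardown is then just unpin_user_pages(pages, npages) plus kvfree(),
with no notifier to tear down.
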
>>>> +1 on using FOLL_LONGTERM. Fully dynamic memory management has a huge cost
>>>> in complexity that pinning everything avoids. Furthermore, this avoids the
>>>> host having to take action in response to guest memory reclaim requests.
>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>> about supporting systems that require swappable GPU VRAM.
>
> I kinda ignored this one, because from my point of view it's all clear.
> But I guess better to elaborate some more.
>
>>> Hi Sima and Demi, I totally agree that the FOLL_LONGTERM flag is
>>> needed; I will add it in the next version. And for the first, pinned
>>> variant of the implementation, I think the MMU notifier is also
>>> needed. That is because the userptr feature in the UMD is generally
>>> used like this: registering a userptr is always explicitly invoked
>>> by user code, e.g. "registerMemoryToGPU(userptrAddr, ...)", but for
>>> releasing/freeing the userptr there is no explicit API, at least in
>>> the hsakmt/KFD stack.
>>> The user just calls "free(userptrAddr)", and the kernel driver then
>>> releases the userptr from its MMU notifier callback. Virtio-GPU has
>>> no way to know that the user has freed the userptr other than the
>>> MMU notifier. And in the UMD there is no way to detect that free()
>>> was invoked by the user. As far as I can see, the only option is to
>>> use an MMU notifier in the virtio-GPU driver and free the
>>> corresponding data on the host via virtio commands.
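
For reference, a sketch of the interval-notifier part described above.
All names here are illustrative; virtio_userptr_invalidate() and the
queued host-notify command are hypothetical, not code from this series:

#include <linux/mmu_notifier.h>

static bool virtio_userptr_invalidate(struct mmu_interval_notifier *mni,
                                      const struct mmu_notifier_range *range,
                                      unsigned long cur_seq)
{
        /* Non-blockable (e.g. OOM) invalidations may be refused. */
        if (!mmu_notifier_range_blockable(range))
                return false;

        mmu_interval_set_seq(mni, cur_seq);

        /*
         * Hypothetical: queue a virtio command telling the host to drop
         * its mapping of this range; the actual teardown runs outside
         * the notifier callback.
         */
        /* virtio_gpu_queue_userptr_unmap(to_userptr_obj(mni), range); */

        return true;
}

static const struct mmu_interval_notifier_ops virtio_userptr_ops = {
        .invalidate = virtio_userptr_invalidate,
};

Registration at userptr-create time would then be something like
mmu_interval_notifier_insert(&obj->notifier, current->mm, start, size,
&virtio_userptr_ops).
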
>
> Yes, that's what I meant: you need real svm/hmm here. You cannot fake
> the amdkfd/hsakmt model with pin_user_pages; it fundamentally falls apart.
> One case is overcommit, where userspace malloc()s a huge amount of virtual
> address space, registers it with the gpu, but only uses fairly little of
> it. If you pin all of that, you run out of memory or at least thrash
> performance.
>
> For the hsa/kfd model, you must use hmm+mmu_notifier, or you're breaking
> the uapi contract.
>
>>> And for the second way, using hmm_range_fault, there is a
>>> predictable issue as far as I can see, at least in the hsakmt/KFD
>>> stack: memory may migrate while the GPU/device is working. On bare
>>> metal, when memory is migrating, the KFD driver pauses the compute
>>> work of the device under mmap_write_lock, then uses hmm_range_fault
>>> to remap the migrated/evicted memory to the GPU, and then resumes
>>> the compute work, to ensure the data is correct. But in the
>>> virtio-GPU driver the migration happens in the guest kernel, and the
>>> evict MMU notifier callback fires in the guest. A virtio command can
>>> be used to notify the host, but lacking mmap_write_lock protection
>>> in the host kernel, the host will hold invalid data for a short
>>> period of time, which may lead to issues. And that is hard to fix as
>>> far as I can see.
>
> I forgot the exact details, but when I looked, amdkfd was taking too-big
> locks in the migration paths, which resulted in deadlocks. Those were
> worked around by taking an even bigger lock, the mmap_write_lock. But
> that's a design issue with amdkfd, not something fundamental.
>
> hmm only needs the mmu_notifier callbacks to work, meaning either context
> preemption and restarting, or tlb invalidation and gpu page fault
> handling. Those you can forward between guest and host with no issues, and
> with hw support like pasid/ats in iommu this already happens.
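
For completeness, a minimal sketch of that canonical hmm_range_fault
pattern (condensed from Documentation/mm/hmm.rst; the function name and
the page-table-programming step are placeholders):

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static int svm_map_range(struct mmu_interval_notifier *mni,
                         unsigned long start, unsigned long end,
                         unsigned long *pfns)
{
        struct hmm_range range = {
                .notifier      = mni,
                .start         = start,
                .end           = end,
                .hmm_pfns      = pfns,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        struct mm_struct *mm = mni->mm;
        int ret;

again:
        range.notifier_seq = mmu_interval_read_begin(mni);
        mmap_read_lock(mm);
        ret = hmm_range_fault(&range);
        mmap_read_unlock(mm);
        if (ret) {
                if (ret == -EBUSY)
                        goto again;
                return ret;
        }

        /*
         * Take the driver's page-table lock, then recheck the notifier
         * sequence; an invalidation that raced with us forces a retry.
         * No page references are held at any point.
         */
        if (mmu_interval_read_retry(mni, range.notifier_seq))
                goto again;

        /* ... program GPU page tables from pfns[] under that lock ... */
        return 0;
}
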
>
> Note that the drm_gpusvm library that's under discussion for xe had the
> same issue in version 1 of relying on mmap_write_lock to paper over design
> issues. But the recent versions should be fixed. Would be really good to
> look at all that. And then probably do a full redo of the svm support for
> amd in amdgpu.ko using all these new helper libraries, because my personal
> take is that fixing amdkfd is probably more work than just writing new
> clean code.
>
>>> Finally, I will extract some APIs into helpers as you requested, and
>>> I will refactor the whole userptr implementation to use callbacks in
>>> the page-getting path, so that either the pin method or
>>> hmm_range_fault can be chosen in this patch series.
>>>
>>> Regards,
>>> Honglei
>>
>> Hi Sima,
>>
>> I modified the code to remove all the MMU notifiers and use
>> pin_user_pages_fast only. Under this implementation the userptr is
>> fully managed by the UMD. We ran a performance test: the score
>> decreased by 30% in the OpenCL stack in the Geekbench 6 benchmark.
>> We used an AMD V2000 for the test:
>> MMU notifier + pin_user_pages:
>> near 13000 score: https://browser.geekbench.com/v6/compute/3257793
>>
>> pin_user_pages only:
>> near 10000 score: https://browser.geekbench.com/v6/compute/3496228
>>
>> The code is being cleaned up; I will send it out later.
>
> pin_user_pages is fundamentally broken for the hsa memory model; no amount
> of benchmarking different flavors of it against each other will change
> that.
>
>> And I found another thing: it seems that in the Intel i915 userptr
>> implementation, pin_user_pages is also used with an MMU notifier.
>> Code path is: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/i915/gem/i915_gem_userptr.c?h=v6.13#:~:text=ret%20%3D%20pin_user_pages_fast(obj%2D%3Euserptr.ptr%20%2B%20pinned%20*%20PAGE_SIZE%2C
>
> Yeah, i915-gem isn't great code and shouldn't be used as an example for
> anything. This is why Dave and I asked the Xe developers to create a lot
> of the infrastructure in drm libraries, to make sure we have a solid
> design here and not what i915-gem did.
>
> Also note that i915-gem userptr is for buffer based userptr, like for
> VK_KHR_external_memory if I remember correctly. This is not the hsa memory
> model at all.
>>
>> Patch set: https://lore.kernel.org/all/159353552439.22701.14005121342739071590@emeril.freedesktop.org/T/
>> https://patchwork.kernel.org/project/intel-gfx/patch/20210323155059.628690-17-maarten.lankhorst@linux.intel.com/#24064663
>>
>> And I didn't find the hmm_range_fault code path, maybe I missed it?
>
> i915-gem never supported the svm/hsa memory model in upstream, so yeah
> it's not there. For drm/xe see the work from Matt Brost that's currently
> under discussion on dri-devel for what it should all look like.
>
> Cheers, Sima
>
>>
>> Regards,
>> Honglei
>>
>
Understood, and I am really sorry for the misunderstanding about the SVM
feature. This patch set is not trying to implement SVM/HSA for
virtio-gpu; it just lets the host GPU access guest userspace memory. I
will correct the feature description in the patches. Full SVM/HSA is a
heavyweight feature and will need much more time to implement.
Since the goal of this patch set is not SVM/HSA, calling it "userptr"
or "guest blob memory" support is maybe better.
Regards,
Honglei