AMD-GFX Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: "Marek Olšák" <maraeo@gmail.com>
Cc: "Alex Deucher" <alexdeucher@gmail.com>,
	"Jesse.Zhang" <Jesse.Zhang@amd.com>,
	"Marek Olšák" <marek.olsak@amd.com>,
	amd-gfx@lists.freedesktop.org, Alexander.Deucher@amd.com
Subject: Re: [PATCH v2] drm/amdgpu: Limit BO list entry count to prevent resource exhaustion
Date: Fri, 13 Mar 2026 12:54:59 +0100	[thread overview]
Message-ID: <cb2aaf02-a537-4268-aba6-e33e32c551f2@amd.com> (raw)
In-Reply-To: <CAAxE2A5qX2e8rzZvSdUUbCvz2AN64=HU49pQptkYodOXiv32_Q@mail.gmail.com>

Yeah that matches my expectations.

And yes, we need to make sure not to completely overflow such arrays.

Christian.

On 3/12/26 21:43, Marek Olšák wrote:
> I have just gathered real data on this and the result is surprising.
> Viewperf 13, Viewperf 2020, Unigine benchmarks, and others have been
> used to gather BO list data.
> 
> The maximum number of BOs that has been observed in the CS ioctl for
> radeonsi is 283, even though the actual number of OpenGL BOs can be on
> the order of 50k. That's thanks to the slab allocator in the Mesa
> amdgpu winsys.
> 
> RADV uses VM_ALWAYS_VALID by default currently. radeonsi could also
> start using it if the kernel memory management behaves optimally.
> 
> Old Mesa drivers might use more BOs, especially RADV, which doesn't
> have a slab allocator.
> 
> Some limit may also be needed for lists of sync objects in all our
> ioctls and all arrays in general.
> 
> Marek
> 
> On Thu, Mar 12, 2026 at 1:59 PM Christian König
> <christian.koenig@amd.com> wrote:
>>
>> On 3/12/26 18:44, Alex Deucher wrote:
>>> + Marek,
>>>
>>> This was the feedback from Marek the last time this was brought up:
>>>
>>> "USHRT_MAX seems too low. Traces for workstation apps create 20-30k
>>> BOs, which is not very far from the limit. RADV doesn't suballocate
>>> BOs. Neither GL nor VK has a limit on the number of BOs that can be
>>> created. The hypothetical maximum number of BOs that can be allocated
>>> on a GPU with 32GB of addressable memory is 8 million."
>>>
>>> Does 128K sound more reasonable?
>>
>> I think so, yes. Even 64k seems reasonably large to me considering that only BOs which are not per-VM need to be in the list.
>>
>> E.g. RADV barely uses this feature as far as I know.
>>
>> Regards,
>> Christian.
>>
>>>
>>> Alex
>>> On Thu, Mar 12, 2026 at 6:13 AM Jesse.Zhang <Jesse.Zhang@amd.com> wrote:
>>>>
>>>> Userspace can pass an arbitrary number of BO list entries via the
>>>> bo_number field. Although the previous multiplication overflow check
>>>> prevents out-of-bounds allocation, a large number of entries could still
>>>> cause excessive memory allocation (up to potentially gigabytes) and
>>>> unnecessarily long list processing times.
>>>>
>>>> Introduce a hard limit of 128k entries per BO list, which is more than
>>>> sufficient for any realistic use case (e.g., a single list containing all
>>>> buffers in a large scene). This prevents memory exhaustion attacks and
>>>> ensures predictable performance.
>>>>
>>>> Return -EINVAL if the requested entry count exceeds the limit.
>>>>
>>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>>> Signed-off-by: Jesse Zhang <jesse.zhang@amd.com>
>>>> ---
>>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
>>>> index 87ec46c56a6e..3270ea50bdc7 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
>>>> @@ -36,6 +36,7 @@
>>>>
>>>>  #define AMDGPU_BO_LIST_MAX_PRIORITY    32u
>>>>  #define AMDGPU_BO_LIST_NUM_BUCKETS     (AMDGPU_BO_LIST_MAX_PRIORITY + 1)
>>>> +#define AMDGPU_BO_LIST_MAX_ENTRIES     (128 * 1024)
>>>>
>>>>  static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu)
>>>>  {
>>>> @@ -188,6 +189,9 @@ int amdgpu_bo_create_list_entry_array(struct drm_amdgpu_bo_list_in *in,
>>>>         const uint32_t bo_number = in->bo_number;
>>>>         struct drm_amdgpu_bo_list_entry *info;
>>>>
>>>> +       if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES)
>>>> +               return -EINVAL;
>>>> +
>>>>         /* copy the handle array from userspace to a kernel buffer */
>>>>         if (likely(info_size == bo_info_size)) {
>>>>                 info = vmemdup_array_user(uptr, bo_number, info_size);
>>>> --
>>>> 2.49.0
>>>>
>>


Thread overview: 6+ messages
2026-03-12 10:13 [PATCH v2] drm/amdgpu: Limit BO list entry count to prevent resource exhaustion Jesse.Zhang
2026-03-12 10:14 ` Christian König
2026-03-12 17:44 ` Alex Deucher
2026-03-12 17:48   ` Christian König
2026-03-12 20:43     ` Marek Olšák
2026-03-13 11:54       ` Christian König [this message]
