From: Oleksandr Natalenko <oleksandr@natalenko.name>
To: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
David Airlie <airlied@gmail.com>, Daniel Vetter <daniel@ffwll.ch>,
dri-devel@lists.freedesktop.org, stable@vger.kernel.org
Subject: Re: [PATCH] drm: Do not overrun array in drm_gem_get_pages()
Date: Thu, 12 Oct 2023 10:01:23 +0200
Message-ID: <2703014.mvXUDI8C0e@natalenko.name>
In-Reply-To: <20231005135648.2317298-1-willy@infradead.org>
On Thursday, 5 October 2023 15:56:47 CEST Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array. Limit the number of pages we install
> from each folio to prevent this.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@natalenko.name/
> Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
> Cc: stable@vger.kernel.org # 6.5.x
> ---
> drivers/gpu/drm/drm_gem.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6129b89bb366..44a948b80ee1 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
> struct page **pages;
> struct folio *folio;
> struct folio_batch fbatch;
> - int i, j, npages;
> + long i, j, npages;
>
> if (WARN_ON(!obj->filp))
> return ERR_PTR(-EINVAL);
> @@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>
> i = 0;
> while (i < npages) {
> + long nr;
> folio = shmem_read_folio_gfp(mapping, i,
> mapping_gfp_mask(mapping));
> if (IS_ERR(folio))
> goto fail;
> - for (j = 0; j < folio_nr_pages(folio); j++, i++)
> + nr = min(npages - i, folio_nr_pages(folio));
> + for (j = 0; j < nr; j++, i++)
> pages[i] = folio_file_page(folio, i);
>
> /* Make sure shmem keeps __GFP_DMA32 allocated pages in the
>
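For anyone skimming the thread, here is a minimal stand-alone sketch (plain user-space C with hypothetical sizes, not the kernel code itself) of the overrun the clamp above prevents: when the backing shmem object is larger than the DRM object, the unclamped inner loop would walk the whole folio and write past the end of the page array.

#include <stdio.h>

#define NPAGES		5	/* pages the DRM object actually needs */
#define FOLIO_NR_PAGES	8	/* pages in the (larger) backing folio */

static long min_long(long a, long b)
{
	return a < b ? a : b;
}

int main(void)
{
	long pages[NPAGES];
	long i = 0, j;

	while (i < NPAGES) {
		/*
		 * Without this clamp, j would run to FOLIO_NR_PAGES and
		 * index pages[5..7], past the end of the array.
		 */
		long nr = min_long(NPAGES - i, FOLIO_NR_PAGES);

		for (j = 0; j < nr; j++, i++)
			pages[i] = i;	/* stand-in for folio_file_page() */
	}

	for (i = 0; i < NPAGES; i++)
		printf("pages[%ld] = %ld\n", i, pages[i]);

	return 0;
}

With the clamp, the loop stops after filling exactly NPAGES entries, which is the same effect the patch has on drm_gem_get_pages().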
Gentle ping. It would be nice to have this picked up so that it reaches the stable kernel sooner rather than later.
Thanks.
--
Oleksandr Natalenko (post-factum)