public inbox for linux-kernel@vger.kernel.org
From: Vineet Agarwal <agarwal.vineet2006@gmail.com>
To: mamin506@gmail.com, lizhi.hou@amd.com, ogabbay@kernel.org
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Vineet Agarwal <agarwal.vineet2006@gmail.com>
Subject: [PATCH v3] drm/amdxdna: fix pinned_vm accounting and error handling in user buffer pinning
Date: Sun,  3 May 2026 11:20:24 +0530	[thread overview]
Message-ID: <20260503055258.643546-1-agarwal.vineet2006@gmail.com> (raw)
In-Reply-To: <20260502031746.621606-1-agarwal.vineet2006@gmail.com>

amdxdna_get_ubuf() charged mm->pinned_vm with the full requested page
count before pin_user_pages_fast() had pinned anything.

Since pin_user_pages_fast() can return fewer pages than requested, this
could leave the accounting wrong and the error-path cleanup inconsistent.

Additionally, the RLIMIT_MEMLOCK check was performed after pinning,
allowing excessive pin attempts before validation.

Fix this by:
  - checking RLIMIT_MEMLOCK before attempting to pin pages
  - handling partial pinning correctly and ensuring proper cleanup
  - updating mm->pinned_vm only after all pages are successfully pinned
  - removing incorrect error-path accounting and double-subtraction

Also fix missing rollback when dma_buf_export() fails, which could
leave mm->pinned_vm incremented without a corresponding release.

This ensures correct pinned-memory accounting and consistent error
handling, in line with other subsystems using GUP (get_user_pages).

Signed-off-by: Vineet Agarwal <agarwal.vineet2006@gmail.com>
---
Changes in v3:
- Fix missing pinned_vm rollback when dma_buf_export() fails

 drivers/accel/amdxdna/amdxdna_ubuf.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/drivers/accel/amdxdna/amdxdna_ubuf.c b/drivers/accel/amdxdna/amdxdna_ubuf.c
index fb999aa25318..efce6b94fb0c 100644
--- a/drivers/accel/amdxdna/amdxdna_ubuf.c
+++ b/drivers/accel/amdxdna/amdxdna_ubuf.c
@@ -129,7 +129,7 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
 				 u32 num_entries, void __user *va_entries)
 {
 	struct amdxdna_dev *xdna = to_xdna_dev(dev);
-	unsigned long lock_limit, new_pinned;
+	unsigned long lock_limit;
 	struct amdxdna_drm_va_entry *va_ent;
 	struct amdxdna_ubuf_priv *ubuf;
 	u32 npages, start = 0;
@@ -176,18 +176,17 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
 
 	ubuf->nr_pages = exp_info.size >> PAGE_SHIFT;
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-	new_pinned = atomic64_add_return(ubuf->nr_pages, &ubuf->mm->pinned_vm);
-	if (new_pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
-		XDNA_DBG(xdna, "New pin %ld, limit %ld, cap %d",
-			 new_pinned, lock_limit, capable(CAP_IPC_LOCK));
+
+	if (ubuf->nr_pages + atomic64_read(&ubuf->mm->pinned_vm) > lock_limit &&
+	    !capable(CAP_IPC_LOCK)) {
 		ret = -ENOMEM;
-		goto sub_pin_cnt;
+		goto free_ent;
 	}
 
 	ubuf->pages = kvmalloc_objs(*ubuf->pages, ubuf->nr_pages);
 	if (!ubuf->pages) {
 		ret = -ENOMEM;
-		goto sub_pin_cnt;
+		goto free_ent;
 	}
 
 	for (i = 0; i < num_entries; i++) {
@@ -196,15 +195,17 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
 		ret = pin_user_pages_fast(va_ent[i].vaddr, npages,
 					  FOLL_WRITE | FOLL_LONGTERM,
 					  &ubuf->pages[start]);
-		if (ret < 0 || ret != npages) {
+		if (ret < 0)
+			goto destroy_pages;
+		start += ret;
+		if (ret != npages) {
 			ret = -ENOMEM;
-			XDNA_ERR(xdna, "Failed to pin pages ret %d", ret);
 			goto destroy_pages;
 		}
-
-		start += ret;
 	}
 
+	atomic64_add(ubuf->nr_pages, &ubuf->mm->pinned_vm);
+
 	exp_info.ops = &amdxdna_ubuf_dmabuf_ops;
 	exp_info.priv = ubuf;
 	exp_info.flags = O_RDWR | O_CLOEXEC;
@@ -212,6 +213,7 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
 	dbuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dbuf)) {
 		ret = PTR_ERR(dbuf);
+		atomic64_sub(ubuf->nr_pages, &ubuf->mm->pinned_vm);
 		goto destroy_pages;
 	}
 	kvfree(va_ent);
@@ -222,8 +224,6 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
 	if (start)
 		unpin_user_pages(ubuf->pages, start);
 	kvfree(ubuf->pages);
-sub_pin_cnt:
-	atomic64_sub(ubuf->nr_pages, &ubuf->mm->pinned_vm);
 free_ent:
 	kvfree(va_ent);
 free_ubuf:
-- 
2.54.0


Thread overview:
2026-05-02  3:17 [PATCH] drm/amdxdna: fix pinned_vm accounting and rlimit rollback Vineet Agarwal
2026-05-03  4:49 ` [PATCH] drm/amdxdna: fix pinned_vm accounting and error handling in user buffer pinning Vineet Agarwal
2026-05-03  5:50 ` Vineet Agarwal [this message]
2026-05-04 16:05   ` [PATCH v3] " Lizhi Hou
