From: Chenghai Huang <huangchenghai2@huawei.com>
To: <gregkh@linuxfoundation.org>, <zhangfei.gao@linaro.org>,
<wangzhou1@hisilicon.com>
Cc: <linux-kernel@vger.kernel.org>, <linux-crypto@vger.kernel.org>,
<linuxarm@openeuler.org>, <fanghao11@huawei.com>,
<shenyang39@huawei.com>, <liulongfang@huawei.com>,
<qianweili@huawei.com>
Subject: [PATCH v2 3/4] uacce: implement mremap in uacce_vm_ops to return -EPERM
Date: Tue, 16 Sep 2025 22:48:10 +0800
Message-ID: <20250916144811.1799687-4-huangchenghai2@huawei.com>
In-Reply-To: <20250916144811.1799687-1-huangchenghai2@huawei.com>

From: Yang Shen <shenyang39@huawei.com>

The current uacce_vm_ops does not implement the mremap operation of
vm_operations_struct. Implement .mremap to return -EPERM so that any
attempt to remap a uacce queue region fails explicitly.

mremap must be disabled explicitly because, when a driver does not
provide a .mremap handler, the kernel falls back to the default mremap
behavior. This opens up a risky scenario: an application might first
mmap address p1, then mremap it to p2, and finally call munmap(p1)
followed by munmap(p2). Since the default mremap copies the original
VMA's vm_private_data (i.e., q) to the new VMA, both munmap operations
trigger uacce_vma_close, causing q->qfr to be released twice (qfr is
set to NULL on the first release, so the repeated release is currently
harmless, but the remap should still be rejected).

Fixes: 015d239ac014 ("uacce: add uacce driver")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Shen <shenyang39@huawei.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
---
drivers/misc/uacce/uacce.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
index 770a931ef68d..dda71492874d 100644
--- a/drivers/misc/uacce/uacce.c
+++ b/drivers/misc/uacce/uacce.c
@@ -214,8 +214,14 @@ static void uacce_vma_close(struct vm_area_struct *vma)
 	}
 }
 
+static int uacce_vma_mremap(struct vm_area_struct *area)
+{
+	return -EPERM;
+}
+
 static const struct vm_operations_struct uacce_vm_ops = {
 	.close = uacce_vma_close,
+	.mremap = uacce_vma_mremap,
 };
 
 static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
--
2.33.0

Thread overview: 12+ messages
2025-09-16 14:48 [PATCH v2 0/4] uacce: driver fixes for memory leaks and state management Chenghai Huang
2025-09-16 14:48 ` [PATCH v2 1/4] uacce: fix for cdev memory leak Chenghai Huang
2025-09-16 15:14 ` Greg KH
2025-09-17 9:56 ` huangchenghai
2025-09-17 10:18 ` Greg KH
2025-09-26 8:47 ` huangchenghai
2025-09-26 9:37 ` linwenkai (C)
2025-09-16 14:48 ` [PATCH v2 2/4] uacce: fix isolate sysfs check condition Chenghai Huang
2025-09-16 15:15 ` Greg KH
2025-09-17 9:54 ` huangchenghai
2025-09-16 14:48 ` Chenghai Huang [this message]
2025-09-16 14:48 ` [PATCH v2 4/4] uacce: ensure safe queue release with state management Chenghai Huang