From: Sayali Patil <sayalip@linux.ibm.com>
To: "Christophe Leroy (CS GROUP)" <chleroy@kernel.org>,
linuxppc-dev@lists.ozlabs.org, maddy@linux.ibm.com
Cc: aboorvad@linux.ibm.com, sshegde@linux.ibm.com,
riteshh@linux.ibm.com, hbathini@linux.ibm.com,
ming.lei@redhat.com, csander@purestorage.com, czhong@redhat.com,
venkat88@linux.ibm.com
Subject: Re: [PATCH v3 1/2] powerpc: fix KUAP warning in VMX usercopy path
Date: Wed, 4 Mar 2026 18:00:48 +0530 [thread overview]
Message-ID: <952c0abe-d126-49a7-acf7-91ac2021425b@linux.ibm.com> (raw)
In-Reply-To: <d3aa734c-fed7-4cec-9a39-e1c1d8f8c480@kernel.org>
On 04/03/26 12:19, Christophe Leroy (CS GROUP) wrote:
> Hi Sayali,
>
> Le 04/03/2026 à 06:35, Sayali Patil a écrit :
>> On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing
>> enabled,
>> KUAP warnings can be triggered from the VMX usercopy path under memory
>> stress workloads.
>>
>> KUAP requires that no subfunctions are called once userspace access has
>> been enabled. The existing VMX copy implementation violates this
>> requirement by invoking enter_vmx_usercopy() from the assembly path
>> after
>> userspace access has already been enabled. If preemption occurs
>> in this window, the AMR state may not be preserved correctly,
>> leading to unexpected userspace access state and resulting in
>> KUAP warnings.
>>
>> Fix this by restructuring the VMX usercopy flow so that VMX selection
>> and VMX state management are centralized in raw_copy_tofrom_user(),
>> which is invoked by the raw_copy_{to,from,in}_user() wrappers.
>>
>> The new flow is:
>>
>> - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
>> - raw_copy_tofrom_user() decides whether to use the VMX path
>> based on size and CPU capability
>> - Call enter_vmx_usercopy() before enabling userspace access
>> - Enable userspace access as per the copy direction
>> and perform the VMX copy
>> - Disable userspace access as per the copy direction
>> - Call exit_vmx_usercopy()
>> - Fall back to the base copy routine if the VMX copy faults
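
For readers following the thread, the fallback flow listed above can be modeled in plain userspace C. All helper names below are stubs standing in for the kernel primitives, not the real API; the point is only the ordering (VMX state is entered before the user-access window opens, and closed again before exit_vmx_usercopy()):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stub stand-ins for the kernel primitives -- illustration only. */
static bool enter_vmx_usercopy(void) { return true; }
static void exit_vmx_usercopy(void) { }
static void allow_user_access(void *to, int dir) { (void)to; (void)dir; }
static void prevent_user_access(int dir) { (void)dir; }

static bool vmx_should_fault;	/* knob to simulate a faulting VMX copy */

/* Both copy routines return the number of bytes NOT copied,
 * like the kernel copy_tofrom_user family. */
static unsigned long copy_vmx(void *to, const void *from, unsigned long n)
{
	if (vmx_should_fault)
		return n;	/* pretend the whole copy faulted */
	memcpy(to, from, n);
	return 0;
}

static unsigned long copy_base(void *to, const void *from, unsigned long n)
{
	memcpy(to, from, n);
	return 0;
}

#define VMX_COPY_THRESHOLD 3328
static bool will_use_vmx(unsigned long n) { return n > VMX_COPY_THRESHOLD; }

/* Mirrors the ordering described above: no subfunction calls while the
 * user-access window is open; on a VMX fault, retry with the base copy. */
static unsigned long raw_copy_tofrom_user(void *to, const void *from,
					  unsigned long n, int dir)
{
	unsigned long ret;

	if (will_use_vmx(n) && enter_vmx_usercopy()) {
		allow_user_access(to, dir);
		ret = copy_vmx(to, from, n);
		prevent_user_access(dir);
		exit_vmx_usercopy();

		if (ret) {	/* VMX copy faulted: fall back */
			allow_user_access(to, dir);
			ret = copy_base(to, from, n);
			prevent_user_access(dir);
		}
		return ret;
	}

	allow_user_access(to, dir);
	ret = copy_base(to, from, n);
	prevent_user_access(dir);
	return ret;
}
```
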
>>
>> With this change, the VMX assembly routines no longer perform VMX state
>> management or call helper functions; they only implement the
>> copy operations.
>> The previous feature-section based VMX selection inside
>> __copy_tofrom_user_power7() is removed, and a dedicated
>> __copy_tofrom_user_power7_vmx() entry point is introduced.
>>
>> This ensures correct KUAP ordering, avoids subfunction calls
>> while KUAP is unlocked, and eliminates the warnings while preserving
>> the VMX fast path.
>>
>> Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
>> Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>> Closes:
>> https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
>> Suggested-by: Christophe Leroy <chleroy@kernel.org>
>> Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>> Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
>
> That looks almost good, some editorial comments below.
>
> With those fixed, you can add Reviewed-by: Christophe Leroy (CS
> GROUP) <chleroy@kernel.org>
>
>> ---
>>
>> v2->v3
>> - Addressed review feedback by removing the usercopy_mode enum
>> and using the copy direction directly for KUAP permission handling.
>> - Integrated __copy_tofrom_user_vmx() functionality into
>> raw_copy_tofrom_user() in uaccess.h as a static __always_inline
>> implementation.
>> - Exported enter_vmx_usercopy() and exit_vmx_usercopy()
>> to support VMX usercopy handling from the common path.
>>
>> v2:
>> https://lore.kernel.org/all/20260228135319.238985-1-sayalip@linux.ibm.com/
>>
>> ---
>> arch/powerpc/include/asm/uaccess.h | 66 ++++++++++++++++++++++--------
>> arch/powerpc/lib/copyuser_64.S | 1 +
>> arch/powerpc/lib/copyuser_power7.S | 45 +++++++-------------
>> arch/powerpc/lib/vmx-helper.c | 2 +
>> 4 files changed, 66 insertions(+), 48 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/uaccess.h
>> b/arch/powerpc/include/asm/uaccess.h
>> index ba1d878c3f40..8fd412671025 100644
>> --- a/arch/powerpc/include/asm/uaccess.h
>> +++ b/arch/powerpc/include/asm/uaccess.h
>> @@ -15,6 +15,9 @@
>> #define TASK_SIZE_MAX TASK_SIZE_USER64
>> #endif
>> +/* Threshold above which VMX copy path is used */
>> +#define VMX_COPY_THRESHOLD 3328
>> +
>> #include <asm-generic/access_ok.h>
>> /*
>> @@ -326,40 +329,67 @@ do { \
>> extern unsigned long __copy_tofrom_user(void __user *to,
>> const void __user *from, unsigned long size);
>> -#ifdef __powerpc64__
>> -static inline unsigned long
>> -raw_copy_in_user(void __user *to, const void __user *from, unsigned
>> long n)
>> +unsigned long __copy_tofrom_user_base(void __user *to,
>> + const void __user *from, unsigned long size);
>> +
>> +unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
>> + const void __user *from, unsigned long size);
>> +
>> +
>
> Remove one line.
>
>> +static __always_inline bool will_use_vmx(unsigned long n)
>> +{
>> + return IS_ENABLED(CONFIG_ALTIVEC) &&
>> + cpu_has_feature(CPU_FTR_VMX_COPY) &&
>> + n > VMX_COPY_THRESHOLD;
>
> Avoid too many lines when possible. Nowadays up to 100 chars per line
> are allowed.
>
> Take care of the alignment of the second line: it should start at the
> same position as IS_ENABLED, meaning you have to insert 7 spaces
> instead of a tab.
>
>> +}
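
For reference, the re-aligned form would look like the following. This is modeled with stand-in macros so it builds outside the kernel tree; the IS_ENABLED/cpu_has_feature stubs are illustration only, not the real kernel definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins so this builds standalone -- illustration only. */
#define IS_ENABLED(option) 1
#define CPU_FTR_VMX_COPY 0
static bool cpu_has_feature(int feature) { (void)feature; return true; }

#define VMX_COPY_THRESHOLD 3328

/* First line fits within 100 chars; the continuation starts at the
 * same column as IS_ENABLED (a tab plus 7 spaces). */
static inline bool will_use_vmx(unsigned long n)
{
	return IS_ENABLED(CONFIG_ALTIVEC) && cpu_has_feature(CPU_FTR_VMX_COPY) &&
	       n > VMX_COPY_THRESHOLD;
}
```
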
>> +
>> +static __always_inline unsigned long raw_copy_tofrom_user(void
>> __user *to,
>> + const void __user *from, unsigned long n,
>> + unsigned long dir)
>
> Subsequent lines should start at the same position as the ( of the
> first line; therefore I'd suggest the following form instead:
>
> static __always_inline unsigned long
> raw_copy_tofrom_user(void __user *to, const void __user *from, unsigned long n, unsigned long dir)
>
>> {
>> unsigned long ret;
>> - barrier_nospec();
>> - allow_user_access(to, KUAP_READ_WRITE);
>> + if (will_use_vmx(n) && enter_vmx_usercopy()) {
>> + allow_user_access(to, dir);
>> + ret = __copy_tofrom_user_power7_vmx(to, from, n);
>> + prevent_user_access(dir);
>> + exit_vmx_usercopy();
>> +
>> + if (unlikely(ret)) {
>> + allow_user_access(to, dir);
>> + ret = __copy_tofrom_user_base(to, from, n);
>> + prevent_user_access(dir);
>> + }
>> + return ret;
>> + }
>> +
>> + allow_user_access(to, dir);
>> ret = __copy_tofrom_user(to, from, n);
>> - prevent_user_access(KUAP_READ_WRITE);
>> + prevent_user_access(dir);
>> return ret;
>> }
>> +
>> +#ifdef __powerpc64__
>
> I know it was already there before, but checkpatch is not happy about
> __powerpc64__. It should be replaced by CONFIG_PPC64.
>
>> +static inline unsigned long
>> +raw_copy_in_user(void __user *to, const void __user *from, unsigned
>> long n)
>> +{
>> + barrier_nospec();
>> + return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
>> +}
>> #endif /* __powerpc64__ */
>> static inline unsigned long raw_copy_from_user(void *to,
>> const void __user *from, unsigned long n)
>
> Same problem with alignment of second line. Prefer the form used for
> raw_copy_in_user() or raw_copy_to_user(), ie:
>
> static inline unsigned long
> raw_copy_from_user(void *to, const void __user *from, unsigned long n)
>
>> {
>> - unsigned long ret;
>> -
>> - allow_user_access(NULL, KUAP_READ);
>> - ret = __copy_tofrom_user((__force void __user *)to, from, n);
>> - prevent_user_access(KUAP_READ);
>> - return ret;
>> + return raw_copy_tofrom_user((__force void __user *)to, from,
>> + n, KUAP_READ);
>
> 100 chars are allowed per line, this should fit on a single line.
>
>> }
>> static inline unsigned long
>> raw_copy_to_user(void __user *to, const void *from, unsigned long n)
>> {
>> - unsigned long ret;
>> -
>> - allow_user_access(to, KUAP_WRITE);
>> - ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
>> - prevent_user_access(KUAP_WRITE);
>> - return ret;
>> + return raw_copy_tofrom_user(to, (__force const void __user *)from,
>> + n, KUAP_WRITE);
>
> 100 chars are allowed per line, this should fit on a single line.
>
>> }
>> unsigned long __arch_clear_user(void __user *addr, unsigned long
>> size);
>
>
> Run checkpatch before submitting patches:
>
> $ ./scripts/checkpatch.pl --strict -g HEAD~
> CHECK: Alignment should match open parenthesis
> #83: FILE: arch/powerpc/include/asm/uaccess.h:333:
> +unsigned long __copy_tofrom_user_base(void __user *to,
> + const void __user *from, unsigned long size);
>
> CHECK: Alignment should match open parenthesis
> #86: FILE: arch/powerpc/include/asm/uaccess.h:336:
> +unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
> + const void __user *from, unsigned long size);
>
> CHECK: Please don't use multiple blank lines
> #88: FILE: arch/powerpc/include/asm/uaccess.h:338:
> +
> +
>
> CHECK: Alignment should match open parenthesis
> #97: FILE: arch/powerpc/include/asm/uaccess.h:347:
> +static __always_inline unsigned long raw_copy_tofrom_user(void __user
> *to,
> + const void __user *from, unsigned long n,
>
> CHECK: architecture specific defines should be avoided
> #125: FILE: arch/powerpc/include/asm/uaccess.h:372:
> +#ifdef __powerpc64__
>
> total: 0 errors, 0 warnings, 5 checks, 212 lines checked
>
> NOTE: For some of the reported defects, checkpatch may be able to
> mechanically convert to the typical style using --fix or
> --fix-inplace.
>
> Commit 3a44f6614d88 ("powerpc: fix KUAP warning in VMX usercopy path")
> has style problems, please review.
>
> NOTE: If any of the errors are false positives, please report
> them to the maintainer, see CHECKPATCH in MAINTAINERS.
>
Thanks Christophe for the review.
I have addressed the comments and incorporated the changes in v4.
As suggested, I have added:
Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>
v4:
https://lore.kernel.org/all/20260304122201.153049-1-sayalip@linux.ibm.com/