From: Sayali Patil <sayalip@linux.ibm.com>
Date: Wed, 4 Mar 2026 18:00:48 +0530
Subject: Re: [PATCH v3 1/2] powerpc: fix KUAP warning in VMX usercopy path
To: "Christophe Leroy (CS GROUP)", linuxppc-dev@lists.ozlabs.org, maddy@linux.ibm.com
Cc: aboorvad@linux.ibm.com, sshegde@linux.ibm.com, riteshh@linux.ibm.com, hbathini@linux.ibm.com, ming.lei@redhat.com, csander@purestorage.com, czhong@redhat.com, venkat88@linux.ibm.com
Message-ID: <952c0abe-d126-49a7-acf7-91ac2021425b@linux.ibm.com>
References: <20260304053506.118632-1-sayalip@linux.ibm.com>
X-Mailing-List: linuxppc-dev@lists.ozlabs.org


On 04/03/26 12:19, Christophe Leroy (CS GROUP) wrote:
Hi Sayali,

Le 04/03/2026 à 06:35, Sayali Patil a écrit :
On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing enabled,
KUAP warnings can be triggered from the VMX usercopy path under memory
stress workloads.

KUAP requires that no subfunctions are called once userspace access has
been enabled. The existing VMX copy implementation violates this
requirement by invoking enter_vmx_usercopy() from the assembly path after
userspace access has already been enabled. If preemption occurs
in this window, the AMR state may not be preserved correctly,
leading to unexpected userspace access state and resulting in
KUAP warnings.
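[Editorial sketch: the hazard above can be modelled in plain C. The names
allow_user_access()/enter_vmx_usercopy() are real kernel symbols, but the
bodies below are simulation stubs, not kernel code; the "preemption loses
the AMR state" step is modelled by simply clearing the flag.]

```c
#include <assert.h>
#include <stdbool.h>

static bool kuap_open;	/* models the AMR "userspace access enabled" state */
static bool warned;	/* models the KUAP WARN firing */

static void allow_user_access(void)   { kuap_open = true; }
static void prevent_user_access(void) { kuap_open = false; }

/* A preemption point inside a helper: in this model the context switch
 * does not preserve the unlocked AMR state. */
static void maybe_preempt(void)      { kuap_open = false; }
static void enter_vmx_usercopy(void) { maybe_preempt(); }

/* Old ordering: the helper is called *after* the window is opened. */
static bool old_ordering(void)
{
	warned = false;
	allow_user_access();
	enter_vmx_usercopy();	/* subfunction call while KUAP is unlocked */
	if (!kuap_open)
		warned = true;	/* access state lost -> KUAP warning */
	prevent_user_access();
	return !warned;
}

/* Fixed ordering: every helper call happens before the window is opened. */
static bool new_ordering(void)
{
	enter_vmx_usercopy();	/* safe: window still closed */
	allow_user_access();
	bool ok = kuap_open;	/* no calls in the window, state is stable */
	prevent_user_access();
	return ok;
}
```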

Fix this by restructuring the VMX usercopy flow so that VMX selection
and VMX state management are centralized in raw_copy_tofrom_user(),
which is invoked by the raw_copy_{to,from,in}_user() wrappers.

The new flow is:

   - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
   - raw_copy_tofrom_user() decides whether to use the VMX path
     based on size and CPU capability
   - Call enter_vmx_usercopy() before enabling userspace access
   - Enable userspace access as per the copy direction
     and perform the VMX copy
   - Disable userspace access as per the copy direction
   - Call exit_vmx_usercopy()
   - Fall back to the base copy routine if the VMX copy faults

With this change, the VMX assembly routines no longer perform VMX state
management or call helper functions; they only implement the
copy operations.
The previous feature-section based VMX selection inside
__copy_tofrom_user_power7() is removed, and a dedicated
__copy_tofrom_user_power7_vmx() entry point is introduced.

This ensures correct KUAP ordering, avoids subfunction calls
while KUAP is unlocked, and eliminates the warnings while preserving
the VMX fast path.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Closes: https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
Suggested-by: Christophe Leroy <chleroy@kernel.org>
Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>

That looks almost good, some editorial comments below.

With those fixed, you can add Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>

---

v2->v3
   - Addressed review feedback by removing the usercopy_mode enum
     and using the copy direction directly for KUAP permission handling.
   - Integrated __copy_tofrom_user_vmx() functionality into
     raw_copy_tofrom_user() in uaccess.h as a static __always_inline
     implementation.
   - Exported enter_vmx_usercopy() and exit_vmx_usercopy()
     to support VMX usercopy handling from the common path.

v2: https://lore.kernel.org/all/20260228135319.238985-1-sayalip@linux.ibm.com/

---
  arch/powerpc/include/asm/uaccess.h | 66 ++++++++++++++++++++++--------
  arch/powerpc/lib/copyuser_64.S     |  1 +
  arch/powerpc/lib/copyuser_power7.S | 45 +++++++-------------
  arch/powerpc/lib/vmx-helper.c      |  2 +
  4 files changed, 66 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index ba1d878c3f40..8fd412671025 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -15,6 +15,9 @@
  #define TASK_SIZE_MAX        TASK_SIZE_USER64
  #endif
  +/* Threshold above which VMX copy path is used */
+#define VMX_COPY_THRESHOLD 3328
+
  #include <asm-generic/access_ok.h>
    /*
@@ -326,40 +329,67 @@ do {                                \
  extern unsigned long __copy_tofrom_user(void __user *to,
          const void __user *from, unsigned long size);
  -#ifdef __powerpc64__
-static inline unsigned long
-raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+unsigned long __copy_tofrom_user_base(void __user *to,
+        const void __user *from, unsigned long size);
+
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
+        const void __user *from, unsigned long size);
+
+

Remove one line.

+static __always_inline bool will_use_vmx(unsigned long n)
+{
+    return IS_ENABLED(CONFIG_ALTIVEC) &&
+        cpu_has_feature(CPU_FTR_VMX_COPY) &&
+        n > VMX_COPY_THRESHOLD;

Avoid too many lines when possible. Nowadays up to 100 chars per line are allowed.

Take care of the alignment of the second line: it should start at the same position as IS_ENABLED, meaning you have to insert 7 spaces instead of a tab.
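[Editorial sketch: with the config and feature checks stubbed out (the
stubs are assumptions so the snippet builds outside the kernel tree), the
aligned two-line form asked for here might look like:]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for kernel macros; not the real feature-check API. */
#define IS_ENABLED(option)	1
#define CPU_FTR_VMX_COPY	0
static bool cpu_has_feature(unsigned long f) { (void)f; return true; }

#define VMX_COPY_THRESHOLD	3328

/* Continuation aligned under IS_ENABLED (tab + 7 spaces), on two lines. */
static inline bool will_use_vmx(unsigned long n)
{
	return IS_ENABLED(CONFIG_ALTIVEC) && cpu_has_feature(CPU_FTR_VMX_COPY) &&
	       n > VMX_COPY_THRESHOLD;
}
```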

+}
+
+static __always_inline unsigned long raw_copy_tofrom_user(void __user *to,
+        const void __user *from, unsigned long n,
+        unsigned long dir)

Subsequent lines should start at the same position as the ( of the first line, therefore I'd suggest the following form instead:

static __always_inline unsigned long
raw_copy_tofrom_user(void __user *to, const void __user *from, unsigned long n, unsigned long dir)

  {
      unsigned long ret;
  -    barrier_nospec();
-    allow_user_access(to, KUAP_READ_WRITE);
+    if (will_use_vmx(n) && enter_vmx_usercopy()) {
+        allow_user_access(to, dir);
+        ret = __copy_tofrom_user_power7_vmx(to, from, n);
+        prevent_user_access(dir);
+        exit_vmx_usercopy();
+
+        if (unlikely(ret)) {
+            allow_user_access(to, dir);
+            ret = __copy_tofrom_user_base(to, from, n);
+            prevent_user_access(dir);
+        }
+        return ret;
+    }
+
+    allow_user_access(to, dir);
      ret = __copy_tofrom_user(to, from, n);
-    prevent_user_access(KUAP_READ_WRITE);
+    prevent_user_access(dir);
      return ret;
  }
+
+#ifdef __powerpc64__

I know it was already there before, but checkpatch is not happy about __powerpc64__. It should be replaced by CONFIG_PPC64.

+static inline unsigned long
+raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+{
+    barrier_nospec();
+    return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
+}
  #endif /* __powerpc64__ */
    static inline unsigned long raw_copy_from_user(void *to,
          const void __user *from, unsigned long n)

Same problem with alignment of second line. Prefer the form used for raw_copy_in_user() or raw_copy_to_user(), ie:

static inline unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)

  {
-    unsigned long ret;
-
-    allow_user_access(NULL, KUAP_READ);
-    ret = __copy_tofrom_user((__force void __user *)to, from, n);
-    prevent_user_access(KUAP_READ);
-    return ret;
+    return raw_copy_tofrom_user((__force void __user *)to, from,
+                    n, KUAP_READ);

100 chars are allowed per line, this should fit on a single line.
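[Editorial sketch: combining the two suggestions (return type on its own
line, call kept on a single line), the resulting raw_copy_from_user()
could read as below; __user, KUAP_READ, and the common helper are stubbed
here so the sketch builds standalone.]

```c
#include <assert.h>

/* Stubs so the snippet compiles outside the kernel tree. */
#define __user
#define __force
#define KUAP_READ	1

static unsigned long last_dir;
static char dst_buf[8];

/* Stub of the common helper; it just records the direction argument. */
static unsigned long raw_copy_tofrom_user(void __user *to, const void __user *from,
					  unsigned long n, unsigned long dir)
{
	(void)to; (void)from; (void)n;
	last_dir = dir;
	return 0;	/* pretend every byte was copied */
}

/* Return type on its own line, and the call on one line now that
 * 100 columns are allowed. */
static inline unsigned long
raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
	return raw_copy_tofrom_user((__force void __user *)to, from, n, KUAP_READ);
}
```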

  }
    static inline unsigned long
  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
  {
-    unsigned long ret;
-
-    allow_user_access(to, KUAP_WRITE);
-    ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
-    prevent_user_access(KUAP_WRITE);
-    return ret;
+    return raw_copy_tofrom_user(to, (__force const void __user *)from,
+                    n, KUAP_WRITE);

100 chars are allowed per line, this should fit on a single line.

  }
    unsigned long __arch_clear_user(void __user *addr, unsigned long size);


Run checkpatch before submitting patches:

$ ./scripts/checkpatch.pl --strict -g HEAD~
CHECK: Alignment should match open parenthesis
#83: FILE: arch/powerpc/include/asm/uaccess.h:333:
+unsigned long __copy_tofrom_user_base(void __user *to,
+        const void __user *from, unsigned long size);

CHECK: Alignment should match open parenthesis
#86: FILE: arch/powerpc/include/asm/uaccess.h:336:
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
+        const void __user *from, unsigned long size);

CHECK: Please don't use multiple blank lines
#88: FILE: arch/powerpc/include/asm/uaccess.h:338:
+
+

CHECK: Alignment should match open parenthesis
#97: FILE: arch/powerpc/include/asm/uaccess.h:347:
+static __always_inline unsigned long raw_copy_tofrom_user(void __user *to,
+        const void __user *from, unsigned long n,

CHECK: architecture specific defines should be avoided
#125: FILE: arch/powerpc/include/asm/uaccess.h:372:
+#ifdef __powerpc64__

total: 0 errors, 0 warnings, 5 checks, 212 lines checked

NOTE: For some of the reported defects, checkpatch may be able to
      mechanically convert to the typical style using --fix or --fix-inplace.

Commit 3a44f6614d88 ("powerpc: fix KUAP warning in VMX usercopy path") has style problems, please review.

NOTE: If any of the errors are false positives, please report
      them to the maintainer, see CHECKPATCH in MAINTAINERS. 

Thanks, Christophe, for the review.
I have addressed the comments and incorporated the changes in v4.

As suggested, I have added:
Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org>

v4: https://lore.kernel.org/all/20260304122201.153049-1-sayalip@linux.ibm.com/