From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sayali Patil <sayalip@linux.ibm.com>
Date: Tue, 3 Mar 2026 14:49:27 +0530
Subject: Re: [PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path
To: "Christophe Leroy (CS GROUP)", linuxppc-dev@lists.ozlabs.org, maddy@linux.ibm.com
Cc: aboorvad@linux.ibm.com, sshegde@linux.ibm.com, riteshh@linux.ibm.com, hbathini@linux.ibm.com, ming.lei@redhat.com, csander@purestorage.com, czhong@redhat.com, venkat88@linux.ibm.com
References: <20260228135319.238985-1-sayalip@linux.ibm.com>
List-Id: linuxppc-dev@lists.ozlabs.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8


On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
Hi Sayali,

Le 28/02/2026 à 14:53, Sayali Patil a écrit :
On powerpc with PREEMPT_FULL or PREEMPT_LAZY and function tracing enabled,
KUAP warnings can be triggered from the VMX usercopy path under memory
stress workloads.

KUAP requires that no subfunctions are called once userspace access has
been enabled. The existing VMX copy implementation violates this
requirement by invoking enter_vmx_usercopy() from the assembly path after
userspace access has already been enabled. If preemption occurs
in this window, the AMR state may not be preserved correctly,
leading to unexpected userspace access state and resulting in
KUAP warnings.

Fix this by restructuring the VMX usercopy flow so that VMX selection
and VMX state management are centralized in raw_copy_tofrom_user(),
which is invoked by the raw_copy_{to,from,in}_user() wrappers.

Introduce a usercopy_mode enum to describe the copy direction
(IN, FROM, TO) and use it to derive the required KUAP permissions.
Userspace access is now enabled and disabled through common helpers
based on the selected mode, ensuring that the correct read/write
permissions are applied consistently.

  The new flow is:

   - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
   - raw_copy_tofrom_user() decides whether to use the VMX path
     based on size and CPU capability
   - Call enter_vmx_usercopy() before enabling userspace access
   - Enable userspace access as per the usercopy mode
     and perform the VMX copy
   - Disable userspace access as per the usercopy mode
   - Call exit_vmx_usercopy()
   - Fall back to the base copy routine if the VMX copy faults

With this change, the VMX assembly routines no longer perform VMX state
management or call helper functions; they only implement the
copy operations.
The previous feature-section based VMX selection inside
__copy_tofrom_user_power7() is removed, and a dedicated
__copy_tofrom_user_power7_vmx() entry point is introduced.

This ensures correct KUAP ordering, avoids subfunction calls
while KUAP is unlocked, and eliminates the warnings while preserving
the VMX fast path.

Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel Userspace Access Protection")
Reported-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Closes: https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/
Suggested-by: Christophe Leroy <chleroy@kernel.org>
Co-developed-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
Signed-off-by: Sayali Patil <sayalip@linux.ibm.com>
---

v1->v2
   - Updated as per the review comments.
   - Centralized VMX usercopy handling in __copy_tofrom_user_vmx() in
     arch/powerpc/lib/vmx-helper.c.
   - Introduced a usercopy_mode enum to describe the copy direction
     (IN, FROM, TO) and derive the required KUAP permissions, avoiding
     duplication across the different usercopy paths.

I like the reduction of duplication you propose, but I can't see the added value of that enum. What about:

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 63d6eb8b004e..14a3219db838 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -329,12 +329,6 @@ do {                                \
 extern unsigned long __copy_tofrom_user(void __user *to,
         const void __user *from, unsigned long size);

-enum usercopy_mode {
-    USERCOPY_IN,
-    USERCOPY_FROM,
-    USERCOPY_TO,
-};
-
 unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
                 unsigned long size, enum usercopy_mode mode);

@@ -352,48 +346,18 @@ static inline bool will_use_vmx(unsigned long n)
         n > VMX_COPY_THRESHOLD;
 }

-static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
-{
-    switch (mode) {
-    case USERCOPY_IN:
-        allow_user_access(to, KUAP_READ_WRITE);
-        break;
-    case USERCOPY_FROM:
-        allow_user_access(NULL, KUAP_READ);
-        break;
-    case USERCOPY_TO:
-        allow_user_access(to, KUAP_WRITE);
-        break;
-    }
-}
-
-static inline void raw_copy_prevent(enum usercopy_mode mode)
-{
-    switch (mode) {
-    case USERCOPY_IN:
-        prevent_user_access(KUAP_READ_WRITE);
-        break;
-    case USERCOPY_FROM:
-        prevent_user_access(KUAP_READ);
-        break;
-    case USERCOPY_TO:
-        prevent_user_access(KUAP_WRITE);
-        break;
-    }
-}
-
 static inline unsigned long raw_copy_tofrom_user(void __user *to,
         const void __user *from, unsigned long n,
-        enum usercopy_mode mode)
+        unsigned long dir)
 {
     unsigned long ret;

     if (will_use_vmx(n))
         return __copy_tofrom_user_vmx(to, from,    n, mode);

-    raw_copy_allow(to, mode);
+    allow_user_access(to, dir);
     ret = __copy_tofrom_user(to, from, n);
-    raw_copy_prevent(mode);
+    prevent_user_access(dir);
     return ret;

 }
@@ -403,22 +367,20 @@ static inline unsigned long
 raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
 {
     barrier_nospec();
-    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
+    return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
 }
 #endif /* __powerpc64__ */

 static inline unsigned long raw_copy_from_user(void *to,
         const void __user *from, unsigned long n)
 {
-    return raw_copy_tofrom_user((__force void __user *)to, from,
-                    n, USERCOPY_FROM);
+    return raw_copy_tofrom_user((__force void __user *)to, from, n, KUAP_READ);
 }

 static inline unsigned long
 raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-    return raw_copy_tofrom_user(to, (__force const void __user *)from,
-                    n, USERCOPY_TO);
+    return raw_copy_tofrom_user(to, (__force const void __user *)from, n, KUAP_WRITE);
 }

 unsigned long __arch_clear_user(void __user *addr, unsigned long size);
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 35080885204b..4610f7153fd9 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -11,25 +11,25 @@
 #include <asm/switch_to.h>

 unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
-            unsigned long size, enum usercopy_mode mode)
+            unsigned long size, unsigned long dir)
 {
     unsigned long ret;

     if (!enter_vmx_usercopy()) {
-        raw_copy_allow(to, mode);
+        allow_user_access(to, dir);
         ret = __copy_tofrom_user(to, from, size);
-        raw_copy_prevent(mode);
+        prevent_user_access(dir);
         return ret;
     }

-    raw_copy_allow(to, mode);
+    allow_user_access(to, dir);
     ret = __copy_tofrom_user_power7_vmx(to, from, size);
-    raw_copy_prevent(mode);
+    prevent_user_access(dir);
     exit_vmx_usercopy();
     if (unlikely(ret)) {
-        raw_copy_allow(to, mode);
+        allow_user_access(to, dir);
         ret = __copy_tofrom_user_base(to, from, size);
-        raw_copy_prevent(mode);
+        prevent_user_access(dir);
     }

     return ret;



Christophe 


Hi Christophe,
Thanks for the review.
With the suggested change, we are hitting a compilation error.

The issue is related to how KUAP enforces the access direction.
allow_user_access() contains:

BUILD_BUG_ON(!__builtin_constant_p(dir));

which requires the access direction to be a compile-time constant.
If we pass a runtime value (for example, a plain unsigned long function
parameter), the __builtin_constant_p() check fails and triggers the
following build error.

Error:
In function 'allow_user_access', inlined from '__copy_tofrom_user_vmx' at arch/powerpc/lib/vmx-helper.c:19:3:
error: call to '__compiletime_assert_706' declared with attribute error: BUILD_BUG_ON failed: !__builtin_constant_p(dir)


The previous implementation worked because allow_user_access() was always
invoked with the constants KUAP_READ, KUAP_WRITE and KUAP_READ_WRITE directly
at each call site, which satisfied the __builtin_constant_p() requirement.
So the direction argument must stay a compile-time constant at every call to
allow_user_access() for the KUAP check to pass.

Please let me know if you would prefer a different approach.

Regards,
Sayali




v1: https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/

---
  arch/powerpc/include/asm/uaccess.h | 95 ++++++++++++++++++++++++------
  arch/powerpc/lib/copyuser_64.S     |  1 +
  arch/powerpc/lib/copyuser_power7.S | 45 +++++---------
  arch/powerpc/lib/vmx-helper.c      | 26 ++++++++
  4 files changed, 119 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index ba1d878c3f40..63d6eb8b004e 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -15,6 +15,9 @@
  #define TASK_SIZE_MAX        TASK_SIZE_USER64
  #endif

+/* Threshold above which VMX copy path is used */
+#define VMX_COPY_THRESHOLD 3328
+
  #include <asm-generic/access_ok.h>
    /*
@@ -326,40 +329,96 @@ do {                                \
  extern unsigned long __copy_tofrom_user(void __user *to,
          const void __user *from, unsigned long size);

-#ifdef __powerpc64__
-static inline unsigned long
-raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+enum usercopy_mode {
+    USERCOPY_IN,
+    USERCOPY_FROM,
+    USERCOPY_TO,
+};
+
+unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
+                unsigned long size, enum usercopy_mode mode);
+
+unsigned long __copy_tofrom_user_base(void __user *to,
+        const void __user *from, unsigned long size);
+
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
+        const void __user *from, unsigned long size);
+
+
+static inline bool will_use_vmx(unsigned long n)
+{
+    return IS_ENABLED(CONFIG_ALTIVEC) &&
+        cpu_has_feature(CPU_FTR_VMX_COPY) &&
+        n > VMX_COPY_THRESHOLD;
+}
+
+static inline void raw_copy_allow(void __user *to, enum usercopy_mode mode)
+{
+    switch (mode) {
+    case USERCOPY_IN:
+        allow_user_access(to, KUAP_READ_WRITE);
+        break;
+    case USERCOPY_FROM:
+        allow_user_access(NULL, KUAP_READ);
+        break;
+    case USERCOPY_TO:
+        allow_user_access(to, KUAP_WRITE);
+        break;
+    }
+}
+
+static inline void raw_copy_prevent(enum usercopy_mode mode)
+{
+    switch (mode) {
+    case USERCOPY_IN:
+        prevent_user_access(KUAP_READ_WRITE);
+        break;
+    case USERCOPY_FROM:
+        prevent_user_access(KUAP_READ);
+        break;
+    case USERCOPY_TO:
+        prevent_user_access(KUAP_WRITE);
+        break;
+    }
+}
+
+static inline unsigned long raw_copy_tofrom_user(void __user *to,
+        const void __user *from, unsigned long n,
+        enum usercopy_mode mode)
  {
      unsigned long ret;

-    barrier_nospec();
-    allow_user_access(to, KUAP_READ_WRITE);
+    if (will_use_vmx(n))
+        return __copy_tofrom_user_vmx(to, from, n, mode);
+
+    raw_copy_allow(to, mode);
      ret = __copy_tofrom_user(to, from, n);
-    prevent_user_access(KUAP_READ_WRITE);
+    raw_copy_prevent(mode);
      return ret;
+
+}
+
+#ifdef __powerpc64__
+static inline unsigned long
+raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
+{
+    barrier_nospec();
+    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
  }
  #endif /* __powerpc64__ */
    static inline unsigned long raw_copy_from_user(void *to,
          const void __user *from, unsigned long n)
  {
-    unsigned long ret;
-
-    allow_user_access(NULL, KUAP_READ);
-    ret = __copy_tofrom_user((__force void __user *)to, from, n);
-    prevent_user_access(KUAP_READ);
-    return ret;
+    return raw_copy_tofrom_user((__force void __user *)to, from,
+                    n, USERCOPY_FROM);
  }
    static inline unsigned long
  raw_copy_to_user(void __user *to, const void *from, unsigned long n)
  {
-    unsigned long ret;
-
-    allow_user_access(to, KUAP_WRITE);
-    ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
-    prevent_user_access(KUAP_WRITE);
-    return ret;
+    return raw_copy_tofrom_user(to, (__force const void __user *)from,
+                    n, USERCOPY_TO);
  }
    unsigned long __arch_clear_user(void __user *addr, unsigned long size);
diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 9af969d2cc0c..25a99108caff 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -562,3 +562,4 @@ exc;    std    r10,32(3)
      li    r5,4096
      b    .Ldst_aligned
  EXPORT_SYMBOL(__copy_tofrom_user)
+EXPORT_SYMBOL(__copy_tofrom_user_base)
diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index 8474c682a178..17dbcfbae25f 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -5,13 +5,9 @@
   *
   * Author: Anton Blanchard <anton@au.ibm.com>
   */
+#include <linux/export.h>
  #include <asm/ppc_asm.h>

-#ifndef SELFTEST_CASE
-/* 0 == don't use VMX, 1 == use VMX */
-#define SELFTEST_CASE    0
-#endif
-
  #ifdef __BIG_ENDIAN__
  #define LVS(VRT,RA,RB)        lvsl    VRT,RA,RB
  #define VPERM(VRT,VRA,VRB,VRC)    vperm    VRT,VRA,VRB,VRC
@@ -47,10 +43,14 @@
      ld    r15,STK_REG(R15)(r1)
      ld    r14,STK_REG(R14)(r1)
  .Ldo_err3:
-    bl    CFUNC(exit_vmx_usercopy)
+    ld      r6,STK_REG(R31)(r1)    /* original destination pointer */
+    ld      r5,STK_REG(R29)(r1)    /* original number of bytes */
+    subf    r7,r6,r3        /* #bytes copied */
+    subf    r3,r7,r5        /* #bytes not copied in r3 */
      ld    r0,STACKFRAMESIZE+16(r1)
      mtlr    r0
-    b    .Lexit
+    addi    r1,r1,STACKFRAMESIZE
+    blr
  #endif /* CONFIG_ALTIVEC */
    .Ldo_err2:
@@ -74,7 +74,6 @@
    _GLOBAL(__copy_tofrom_user_power7)
      cmpldi    r5,16
-    cmpldi    cr1,r5,3328
        std    r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
      std    r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
@@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
        blt    .Lshort_copy

-#ifdef CONFIG_ALTIVEC
-test_feature = SELFTEST_CASE
-BEGIN_FTR_SECTION
-    bgt    cr1,.Lvmx_copy
-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
-#endif
    .Lnonvmx_copy:
      /* Get the source 8B aligned */
@@ -263,23 +256,14 @@ err1;    stb    r0,0(r3)
  15:    li    r3,0
      blr

-.Lunwind_stack_nonvmx_copy:
-    addi    r1,r1,STACKFRAMESIZE
-    b    .Lnonvmx_copy
-
-.Lvmx_copy:
  #ifdef CONFIG_ALTIVEC
+_GLOBAL(__copy_tofrom_user_power7_vmx)
      mflr    r0
      std    r0,16(r1)
      stdu    r1,-STACKFRAMESIZE(r1)
-    bl    CFUNC(enter_vmx_usercopy)
-    cmpwi    cr1,r3,0
-    ld    r0,STACKFRAMESIZE+16(r1)
-    ld    r3,STK_REG(R31)(r1)
-    ld    r4,STK_REG(R30)(r1)
-    ld    r5,STK_REG(R29)(r1)
-    mtlr    r0

+    std     r3,STK_REG(R31)(r1)
+    std     r5,STK_REG(R29)(r1)
      /*
       * We prefetch both the source and destination using enhanced touch
       * instructions. We use a stream ID of 0 for the load side and
@@ -300,8 +284,6 @@ err1;    stb    r0,0(r3)
        DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)

-    beq    cr1,.Lunwind_stack_nonvmx_copy
-
      /*
       * If source and destination are not relatively aligned we use a
       * slower permute loop.
@@ -478,7 +460,8 @@ err3;    lbz    r0,0(r4)
  err3;    stb    r0,0(r3)
    15:    addi    r1,r1,STACKFRAMESIZE
-    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
+    li r3,0
+    blr
    .Lvmx_unaligned_copy:
      /* Get the destination 16B aligned */
@@ -681,5 +664,7 @@ err3;    lbz    r0,0(r4)
  err3;    stb    r0,0(r3)
    15:    addi    r1,r1,STACKFRAMESIZE
-    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
+    li r3,0
+    blr
+EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
  #endif /* CONFIG_ALTIVEC */
diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 54340912398f..35080885204b 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -10,6 +10,32 @@
  #include <linux/hardirq.h>
  #include <asm/switch_to.h>

+unsigned long __copy_tofrom_user_vmx(void __user *to, const void __user *from,
+            unsigned long size, enum usercopy_mode mode)
+{
+    unsigned long ret;
+
+    if (!enter_vmx_usercopy()) {
+        raw_copy_allow(to, mode);
+        ret = __copy_tofrom_user(to, from, size);
+        raw_copy_prevent(mode);
+        return ret;
+    }
+
+    raw_copy_allow(to, mode);
+    ret = __copy_tofrom_user_power7_vmx(to, from, size);
+    raw_copy_prevent(mode);
+    exit_vmx_usercopy();
+    if (unlikely(ret)) {
+        raw_copy_allow(to, mode);
+        ret = __copy_tofrom_user_base(to, from, size);
+        raw_copy_prevent(mode);
+    }
+
+    return ret;
+}
+EXPORT_SYMBOL(__copy_tofrom_user_vmx);
+
  int enter_vmx_usercopy(void)
  {
      if (in_interrupt())
