Subject: Re: [PATCH v3] Faster Arm64 __arch_copy_from_user and __arch_copy_to_user
From: Catalin Marinas
Date: 2026-05-15 17:38 UTC
To: Qi Xi
Cc: will, sunnanyong, wangkefeng.wang, benniu, linux-arm-kernel,
linux-kernel
Hi Qi,
On Mon, Mar 16, 2026 at 08:31:00PM +0800, Qi Xi wrote:
> Based on Ben Niu's "Faster Arm64 __arch_copy_from_user and
> __arch_copy_to_user" patch [1], this implementation further optimizes
> and simplifies user space copies by:
>
> 1. Limiting the optimization scope to copies of >=128 bytes, where the
> PAN state matters. Copies of <128 bytes use non-privileged
> instructions uniformly, simplifying the code and reducing maintenance
> cost.
At least this part makes the changes more manageable.
> 2. Adding "arm64.nopan" cmdline support using the standard idreg-override
> framework, allowing PAN to be disabled at runtime without building
> separate CONFIG_ARM64_PAN=y/n kernels as Ben Niu's version required.
> The implementation maintains separate paths for PAN-enabled (using
> unprivileged ldtr/sttr) and PAN-disabled (using standard ldp/stp), with
> runtime selection via ALTERNATIVE() at the large copy loop entry.
I think you got them the other way around. Your new code requires PAN=0
to be able to use the privileged STP/LDP. However, disabling PAN does
not mean that STTR/LDTR are no longer needed or that the kernel should
use STP/LDP for uaccess.
Even on hardware without PAN, having STTR/LDTR is still very useful (we
have used them in get/put_user() from the start and later throughout the
user copy routines once we got rid of KERNEL_DS). They prevent accesses
to kernel locations through a user-controlled address that somehow
escapes access_ok() (and we have had bugs in this area). This has
nothing to do with PAN.
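
As a rough illustration (hypothetical snippet, not from the patch),
consider a user-controlled pointer in x1 that escaped access_ok() and
points at kernel memory:

	ldtr	x0, [x1]	// unprivileged load: checked against EL0
				// permissions, so it faults on kernel-only
				// mappings whether or not PAN exists
	ldr	x0, [x1]	// privileged load: succeeds on a kernel
				// mapping regardless of PAN, leaking data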
You don't even need PAN disabled globally for your code to work; you
could just toggle PAN briefly around the routine, similar to what we
have had since commit 338d4f49d6f7 ("arm64: kernel: Add support for
Privileged Access Never"). Presumably, for large copies the cost of
toggling PAN is lost in the noise.
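
A minimal sketch of that approach, in the spirit of that commit (the
loop body and register names are illustrative, not the patch's code,
and the macro spelling is approximate):

	ALTERNATIVE(nop, SET_PSTATE_PAN(0), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
1:	ldp	A_l, A_h, [src], #16	// privileged accesses to user
	stp	A_l, A_h, [dst], #16	// memory work while PSTATE.PAN = 0
	subs	count, count, #16
	b.ge	1b
	ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)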
That said, I don't think we should reduce the security in mainline for
this optimisation. And we should definitely not gate it on arm64.nopan
(more like a less_secure_uaccess_but_faster_on_some_hardware=1).
> 3. Retaining the critical path optimization from the original patch:
> reducing the number of pointer-update instructions by batching them
> manually, processing 64 bytes per iteration with only one pair of add
> instructions.
>
> Performance improvements measured on Kunpeng 920 with PAN disabled:
[...]
> Real-world workloads:
> - RocksDB read-write mixed workload:
> Overall throughput improved by 2%.
> copy_to_user hotspot reduced from 3.3% to 2.7% of total CPU cycles.
> copy_from_user hotspot reduced from 2.25% to 0.85% of total CPU cycles.
>
> - BRPC rdma_performance (server side, baidu_std protocol over TCP):
> copy_to_user accounts for ~11.5% of total CPU cycles.
> After optimization, server CPU utilization reduced from 64% to 62%
> (2% absolute improvement, equivalent to ~17% reduction in
> copy_to_user overhead).
I agree there are some small improvements, but it would be good to
reproduce them on other hardware as well.
> + /*
> + * Interleave the load of the next 64-byte block with the store of
> + * the previously loaded 64 bytes.
> + */
> + stp_unpriv A_l, A_h, dst, #0
> + ldp_unpriv A_l, A_h, src, #0
> + stp_unpriv B_l, B_h, dst, #16
> + ldp_unpriv B_l, B_h, src, #16
> + stp_unpriv C_l, C_h, dst, #32
> + ldp_unpriv C_l, C_h, src, #32
> + stp_unpriv D_l, D_h, dst, #48
> + ldp_unpriv D_l, D_h, src, #48
> + add dst, dst, #64
> + add src, src, #64
> + subs count, count, #64
> + b.ge 1b
> + b .Llarge_done
This changes the semantics a bit, especially in the copy_from_user()
path. With your patch, we can write into the kernel buffer up to, say,
offset #32. On the ldp at offset #48 we get a fault, but dst has never
been updated, so we report nothing copied (almost: the fault path
attempts one more byte-by-byte copy).
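
One way to keep the reporting accurate is to advance the pointers as
each block is stored. A rough sketch, reusing your ldp_unpriv/stp_unpriv
macros (which I assume expand to ldtr/sttr pairs; those have no
post-index form, hence the explicit adds):

1:	ldp_unpriv	A_l, A_h, src, #0
	stp_unpriv	A_l, A_h, dst, #0
	add	src, src, #16	// dst now reflects what has actually been
	add	dst, dst, #16	// stored, so the fixup can report progress
	subs	count, count, #16
	b.ge	1b

Of course this gives back some of the pointer-update savings your patch
is after, so it may not be acceptable on the fast path.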
For copy_to_user(), we have some imp def behaviour already where a
faulting unaligned store at the end of a page may or may not write the
end of the page. Past discussions concluded that under-reporting is
acceptable for copy_to_user(). Whether that's also fine for
copy_from_user(), I'm not sure.
I think Robin tried to address these at some point. There's also a
proposal for a kunit usercopy test for boundary conditions, but we
couldn't agree on the semantics (architectures behave differently here):
https://lore.kernel.org/r/20230321122514.1743889-1-mark.rutland@arm.com/
We do run these tests internally, but it would be good to get them
upstream before any other changes to this code.
--
Catalin