From: Yang Shi <yang@os.amperecomputing.com>
To: Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Ard Biesheuvel <ardb@kernel.org>,
scott@os.amperecomputing.com, cl@gentwo.org
Cc: linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v7 0/6] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full
Date: Thu, 11 Sep 2025 15:03:31 -0700 [thread overview]
Message-ID: <92719b15-daf8-484f-b0db-72e23ae696ad@os.amperecomputing.com> (raw)
In-Reply-To: <55a79826-48e3-41c0-8dbd-b6398e7e49a6@os.amperecomputing.com>
>>> IIUC, the intent of the code is "reset direct map permissions
>>> *without* leaving a RW+X window". The TLB flush call actually
>>> flushes both the VA and the direct map together. So if that is the
>>> intent, approach #2 may leave the VA with X permission while the
>>> direct map is RW at the same time. That seems to break the intent.
>> Ahh! Thanks, it's starting to make more sense now.
>>
>> Though at first sight it seems a bit mad to me to form a tlb flush
>> range that covers all the direct map pages and all the lazy vunmap
>> regions. Is that intended to be a perf optimization or something
>> else? It's not clear from the history.
>
> I think it is mainly performance driven. I can't see why two TLB
> flushes (one for the vmap area and one for the direct map) wouldn't
> work, unless I'm missing something.
>
>>
>>
>> Could this be split into 2 operations?
>>
>> 1. unmap the aliases (+ tlbi the aliases).
>> 2. set the direct memory back to default (+ tlbi the direct map region).
>>
>> The only 2 potential problems I can think of are;
>>
>> - Performance: 2 tlbis instead of 1, but conversely we probably avoid
>> flushing a load of TLB entries that we didn't really need to.
>
> The two tlbis should work, but performance is definitely a concern.
> It may be hard to quantify how much impact the over-flush causes, but
> multiple TLBIs are definitely not preferred, particularly on some
> large-scale machines. We have experienced scalability issues with
> TLBI due to the large core count on Ampere systems.
>>
>> - Given there would now be no lock around the tlbis (currently it's
>> under vmap_purge_lock), is there a race where a new alias can appear
>> between steps 1 and 2? I don't think so, because the memory is
>> allocated to the current mapping, so how is it going to get
>> re-mapped?
>
> Yes, I agree. I don't think the race is real. The physical pages will
> not be freed until vm_reset_perms() is done. The VA may be
> reallocated, but it will be mapped to different physical pages.
>
>>
>>
>> Could this solve it?
>
> I think it could. But the potential performance impact (two TLBIs) is
> a real concern.
>
> Anyway, any vmalloc user should call set_memory_*() for RO/ROX
> mappings, and set_memory_*() will split the page table before
> vm_reset_perms() is reached, so the split there should not fail. If
> set_memory_*() is not called, that is a bug and should be fixed, as
> with the arm64 kprobes case below.
>
> Making this more robust is definitely welcome, although the warning
> from the split may mitigate this somewhat. But I don't think it
> should be a blocker for this series, IMHO.
Hi Ryan & Catalin,

Any more concerns about this? Shall we move forward with v8? We can
include the kprobes fix in v8, or I can send it separately; either is
fine with me. Hopefully we can make v6.18.
Thanks,
Yang
>
> Thanks,
> Yang
>
>>
>>
>>
>>> Thanks,
>>> Yang
>>>
>>>> The benefit of approach 1 is that it is guaranteed that different
>>>> CPUs cannot have different translations for the same VA in their
>>>> respective TLBs. With approach 2, it's possible that between steps
>>>> 1 and 2 one CPU has an RO entry while another CPU has an RW entry.
>>>> But that gets fixed once the TLB is flushed - it's not really an
>>>> issue.
>>>>
>>>> (There is probably also an obscure way to end up with 2 TLB
>>>> entries (one RO and one RW) for the same CPU, but the arm64
>>>> architecture permits that as long as it's only a permission
>>>> mismatch.)
>>>>
>>>> Anyway, approach 2 is used when changing memory permissions on
>>>> user mappings, so I don't see why we can't take the same approach
>>>> here. That would solve this whole class of issue for us.
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>>> Thanks,
>>>>> Yang
>>>>>
>>>>>> Thanks,
>>>>>> Ryan
>>>>>>
>>>>>>
>>>>>>> Tested the below patch with a bpftrace kfunc (which allocates a
>>>>>>> bpf trampoline) and kprobes. It seems to work well.
>>>>>>>
>>>>>>> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
>>>>>>> index 0c5d408afd95..c4f8c4750f1e 100644
>>>>>>> --- a/arch/arm64/kernel/probes/kprobes.c
>>>>>>> +++ b/arch/arm64/kernel/probes/kprobes.c
>>>>>>> @@ -10,6 +10,7 @@
>>>>>>>
>>>>>>>  #define pr_fmt(fmt) "kprobes: " fmt
>>>>>>>
>>>>>>> +#include <linux/execmem.h>
>>>>>>>  #include <linux/extable.h>
>>>>>>>  #include <linux/kasan.h>
>>>>>>>  #include <linux/kernel.h>
>>>>>>> @@ -41,6 +42,17 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>>>>>>>  static void __kprobes
>>>>>>>  post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
>>>>>>>
>>>>>>> +void *alloc_insn_page(void)
>>>>>>> +{
>>>>>>> +	void *page;
>>>>>>> +
>>>>>>> +	page = execmem_alloc(EXECMEM_KPROBES, PAGE_SIZE);
>>>>>>> +	if (!page)
>>>>>>> +		return NULL;
>>>>>>> +	set_memory_rox((unsigned long)page, 1);
>>>>>>> +	return page;
>>>>>>> +}
>>>>>>> +
>>>>>>>  static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
>>>>>>>  {
>>>>>>>  	kprobe_opcode_t *addr = p->ainsn.xol_insn;
>>>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> index 52ffe115a8c4..3e301bc2cd66 100644
>>>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>>>> @@ -2717,11 +2717,6 @@ void arch_free_bpf_trampoline(void *image, unsigned int size)
>>>>>>>  	bpf_prog_pack_free(image, size);
>>>>>>>  }
>>>>>>>
>>>>>>> -int arch_protect_bpf_trampoline(void *image, unsigned int size)
>>>>>>> -{
>>>>>>> -	return 0;
>>>>>>> -}
>>>>>>> -
>>>>>>>  int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *ro_image,
>>>>>>>  				void *ro_image_end, const struct btf_func_model *m,
>>>>>>>  				u32 flags, struct bpf_tramp_links *tlinks,
>>>>>>>
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Yang
>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Ryan
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Yang
>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Yang
>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Ryan
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>