From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Hari Bathini <hbathini@linux.ibm.com>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Song Liu <songliubraving@fb.com>
Subject: Re: [PATCH v2 1/4] powerpc/code-patching: introduce patch_instructions()
Date: Fri, 10 Mar 2023 18:26:31 +0000
Message-ID: <6bf2ec65-485e-afd4-13af-75ff24f2649a@csgroup.eu>
In-Reply-To: <20230309180213.180263-2-hbathini@linux.ibm.com>
On 09/03/2023 at 19:02, Hari Bathini wrote:
> patch_instruction() entails setting up pte, patching the instruction,
> clearing the pte and flushing the tlb. If multiple instructions need
> to be patched, every instruction would have to go through the above
> drill unnecessarily. Instead, introduce function patch_instructions()
> that patches multiple instructions at one go while setting up the pte,
> clearing the pte and flushing the tlb only once per page range of
> instructions. Observed ~5X improvement in speed of execution using
> patch_instructions() over patch_instruction(), when more instructions
> are to be patched.
I get a 13% degradation in the time needed to activate ftrace on a
powerpc 8xx.
Before your patch, activating ftrace takes 550k timebase ticks. After
your patch it takes 620k timebase ticks.
Christophe
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
> arch/powerpc/include/asm/code-patching.h | 1 +
> arch/powerpc/lib/code-patching.c | 151 ++++++++++++++++-------
> 2 files changed, 106 insertions(+), 46 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
> index 3f881548fb61..059fc4fe700e 100644
> --- a/arch/powerpc/include/asm/code-patching.h
> +++ b/arch/powerpc/include/asm/code-patching.h
> @@ -74,6 +74,7 @@ int create_cond_branch(ppc_inst_t *instr, const u32 *addr,
> int patch_branch(u32 *addr, unsigned long target, int flags);
> int patch_instruction(u32 *addr, ppc_inst_t instr);
> int raw_patch_instruction(u32 *addr, ppc_inst_t instr);
> +int patch_instructions(u32 *addr, u32 *code, bool fill_inst, size_t len);
>
> static inline unsigned long patch_site_addr(s32 *site)
> {
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index b00112d7ad46..33857b9b53de 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -278,77 +278,117 @@ static void unmap_patch_area(unsigned long addr)
> flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> }
>
> -static int __do_patch_instruction_mm(u32 *addr, ppc_inst_t instr)
> +static int __do_patch_instructions_mm(u32 *addr, u32 *code, bool fill_inst, size_t len)
> {
> - int err;
> - u32 *patch_addr;
> - unsigned long text_poke_addr;
> - pte_t *pte;
> - unsigned long pfn = get_patch_pfn(addr);
> - struct mm_struct *patching_mm;
> - struct mm_struct *orig_mm;
> + struct mm_struct *patching_mm, *orig_mm;
> + unsigned long text_poke_addr, pfn;
> + u32 *patch_addr, *end, *pend;
> + ppc_inst_t instr;
> spinlock_t *ptl;
> + int ilen, err;
> + pte_t *pte;
>
> patching_mm = __this_cpu_read(cpu_patching_context.mm);
> text_poke_addr = __this_cpu_read(cpu_patching_context.addr);
> - patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr));
>
> pte = get_locked_pte(patching_mm, text_poke_addr, &ptl);
> if (!pte)
> return -ENOMEM;
>
> - __set_pte_at(patching_mm, text_poke_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
> + end = (void *)addr + len;
> + do {
> + pfn = get_patch_pfn(addr);
> + __set_pte_at(patching_mm, text_poke_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
>
> - /* order PTE update before use, also serves as the hwsync */
> - asm volatile("ptesync": : :"memory");
> -
> - /* order context switch after arbitrary prior code */
> - isync();
> -
> - orig_mm = start_using_temp_mm(patching_mm);
> -
> - err = __patch_instruction(addr, instr, patch_addr);
> + /* order PTE update before use, also serves as the hwsync */
> + asm volatile("ptesync": : :"memory");
>
> - /* hwsync performed by __patch_instruction (sync) if successful */
> - if (err)
> - mb(); /* sync */
> + /* order context switch after arbitrary prior code */
> + isync();
> +
> + orig_mm = start_using_temp_mm(patching_mm);
> +
> + patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr));
> + pend = (void *)addr + PAGE_SIZE - offset_in_page(addr);
> + if (end < pend)
> + pend = end;
> +
> + while (addr < pend) {
> + instr = ppc_inst_read(code);
> + ilen = ppc_inst_len(instr);
> + err = __patch_instruction(addr, instr, patch_addr);
> + /* hwsync performed by __patch_instruction (sync) if successful */
> + if (err) {
> + mb(); /* sync */
> + break;
> + }
> +
> + patch_addr = (void *)patch_addr + ilen;
> + addr = (void *)addr + ilen;
> + if (!fill_inst)
> + code = (void *)code + ilen;
> + }
>
> - /* context synchronisation performed by __patch_instruction (isync or exception) */
> - stop_using_temp_mm(patching_mm, orig_mm);
> + /* context synchronisation performed by __patch_instruction (isync or exception) */
> + stop_using_temp_mm(patching_mm, orig_mm);
>
> - pte_clear(patching_mm, text_poke_addr, pte);
> - /*
> - * ptesync to order PTE update before TLB invalidation done
> - * by radix__local_flush_tlb_page_psize (in _tlbiel_va)
> - */
> - local_flush_tlb_page_psize(patching_mm, text_poke_addr, mmu_virtual_psize);
> + pte_clear(patching_mm, text_poke_addr, pte);
> + /*
> + * ptesync to order PTE update before TLB invalidation done
> + * by radix__local_flush_tlb_page_psize (in _tlbiel_va)
> + */
> + local_flush_tlb_page_psize(patching_mm, text_poke_addr, mmu_virtual_psize);
> + if (err)
> + break;
> + } while (addr < end);
>
> pte_unmap_unlock(pte, ptl);
>
> return err;
> }
>
> -static int __do_patch_instruction(u32 *addr, ppc_inst_t instr)
> +static int __do_patch_instructions(u32 *addr, u32 *code, bool fill_inst, size_t len)
> {
> - int err;
> - u32 *patch_addr;
> - unsigned long text_poke_addr;
> + unsigned long text_poke_addr, pfn;
> + u32 *patch_addr, *end, *pend;
> + ppc_inst_t instr;
> + int ilen, err;
> pte_t *pte;
> - unsigned long pfn = get_patch_pfn(addr);
>
> text_poke_addr = (unsigned long)__this_cpu_read(cpu_patching_context.addr) & PAGE_MASK;
> - patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr));
> -
> pte = __this_cpu_read(cpu_patching_context.pte);
> - __set_pte_at(&init_mm, text_poke_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
> - /* See ptesync comment in radix__set_pte_at() */
> - if (radix_enabled())
> - asm volatile("ptesync": : :"memory");
>
> - err = __patch_instruction(addr, instr, patch_addr);
> + end = (void *)addr + len;
> + do {
> + pfn = get_patch_pfn(addr);
> + __set_pte_at(&init_mm, text_poke_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
> + /* See ptesync comment in radix__set_pte_at() */
> + if (radix_enabled())
> + asm volatile("ptesync": : :"memory");
> +
> + patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr));
> + pend = (void *)addr + PAGE_SIZE - offset_in_page(addr);
> + if (end < pend)
> + pend = end;
> +
> + while (addr < pend) {
> + instr = ppc_inst_read(code);
> + ilen = ppc_inst_len(instr);
> + err = __patch_instruction(addr, instr, patch_addr);
> + if (err)
> + break;
> +
> + patch_addr = (void *)patch_addr + ilen;
> + addr = (void *)addr + ilen;
> + if (!fill_inst)
> + code = (void *)code + ilen;
> + }
>
> - pte_clear(&init_mm, text_poke_addr, pte);
> - flush_tlb_kernel_range(text_poke_addr, text_poke_addr + PAGE_SIZE);
> + pte_clear(&init_mm, text_poke_addr, pte);
> + flush_tlb_kernel_range(text_poke_addr, text_poke_addr + PAGE_SIZE);
> + if (err)
> + break;
> + } while (addr < end);
>
> return err;
> }
> @@ -369,15 +409,34 @@ int patch_instruction(u32 *addr, ppc_inst_t instr)
>
> local_irq_save(flags);
> if (mm_patch_enabled())
> - err = __do_patch_instruction_mm(addr, instr);
> + err = __do_patch_instructions_mm(addr, (u32 *)&instr, false, ppc_inst_len(instr));
> else
> - err = __do_patch_instruction(addr, instr);
> + err = __do_patch_instructions(addr, (u32 *)&instr, false, ppc_inst_len(instr));
> local_irq_restore(flags);
>
> return err;
> }
> NOKPROBE_SYMBOL(patch_instruction);
>
> +/*
> + * Patch 'addr' with 'len' bytes of instructions from 'code'.
> + */
> +int patch_instructions(u32 *addr, u32 *code, bool fill_inst, size_t len)
> +{
> + unsigned long flags;
> + int err;
> +
> + local_irq_save(flags);
> + if (mm_patch_enabled())
> + err = __do_patch_instructions_mm(addr, code, fill_inst, len);
> + else
> + err = __do_patch_instructions(addr, code, fill_inst, len);
> + local_irq_restore(flags);
> +
> + return err;
> +}
> +NOKPROBE_SYMBOL(patch_instructions);
> +
> int patch_branch(u32 *addr, unsigned long target, int flags)
> {
> ppc_inst_t instr;