* [PATCH v2 0/2] powerpc: Enable lazy preemption
@ 2024-11-16 19:23 Shrikanth Hegde
From: Shrikanth Hegde @ 2024-11-16 19:23 UTC (permalink / raw)
To: mpe, linuxppc-dev
Cc: sshegde, npiggin, christophe.leroy, maddy, bigeasy, ankur.a.arora,
linux-kernel
preempt=lazy has been merged into tip [1]. Let's enable it for PowerPC.

This has been very lightly tested and, as Michael suggested, it could go
through a test cycle. The two patches could be squashed if needed; I have
kept them separate for easier bisection.

Lazy preemption support for KVM on powerpc is still to be done.
Refs:
[1]: https://lore.kernel.org/lkml/20241007074609.447006177@infradead.org/
v1: https://lore.kernel.org/all/20241108101853.277808-1-sshegde@linux.ibm.com/
Changes since v1:
- Change for vmx copy as suggested by Sebastian.
- Add Reviewed-by tags
Shrikanth Hegde (2):
powerpc: Add preempt lazy support
powerpc: Large user copy aware of full:rt:lazy preemption
 arch/powerpc/Kconfig                   |  1 +
 arch/powerpc/include/asm/thread_info.h |  9 ++++++---
 arch/powerpc/kernel/interrupt.c        |  4 ++--
 arch/powerpc/lib/vmx-helper.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)
--
2.43.5
* [PATCH v2 1/2] powerpc: Add preempt lazy support
From: Shrikanth Hegde @ 2024-11-16 19:23 UTC (permalink / raw)
To: mpe, linuxppc-dev
Cc: sshegde, npiggin, christophe.leroy, maddy, bigeasy, ankur.a.arora,
    linux-kernel

Define the preempt lazy bit for powerpc. Use bit 9, which is free and
within the 16-bit range of NEED_RESCHED, so the compiler can issue a
single andi.

Since powerpc doesn't use the generic entry/exit code, add the lazy check
at exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it
for return to kernel.

Ran a few benchmarks and a db workload on Power10. Performance is close
to preempt=none/voluntary.

Since powerpc systems can have large core counts and large memory,
preempt lazy is going to be helpful in avoiding soft lockup issues.

Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 arch/powerpc/Kconfig                   | 1 +
 arch/powerpc/include/asm/thread_info.h | 9 ++++++---
 arch/powerpc/kernel/interrupt.c        | 4 ++--
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8094a01974cc..2f625aecf94b 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,6 +145,7 @@ config PPC
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PHYS_TO_DMA
 	select ARCH_HAS_PMEM_API
+	select ARCH_HAS_PREEMPT_LAZY
 	select ARCH_HAS_PTE_DEVMAP		if PPC_BOOK3S_64
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_SCALED_CPUTIME	if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 6ebca2996f18..2785c7462ebf 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -103,6 +103,7 @@ void arch_setup_new_exec(void);
 #define TIF_PATCH_PENDING	6	/* pending live patching update */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SINGLESTEP		8	/* singlestepping active */
+#define TIF_NEED_RESCHED_LAZY	9	/* Scheduler driven lazy preemption */
 #define TIF_SECCOMP		10	/* secure computing */
 #define TIF_RESTOREALL		11	/* Restore all regs (implies NOERROR) */
 #define TIF_NOERROR		12	/* Force successful syscall return */
@@ -122,6 +123,7 @@ void arch_setup_new_exec(void);
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
 #define _TIF_SIGPENDING		(1<<TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1<<TIF_NEED_RESCHED)
+#define _TIF_NEED_RESCHED_LAZY	(1<<TIF_NEED_RESCHED_LAZY)
 #define _TIF_NOTIFY_SIGNAL	(1<<TIF_NOTIFY_SIGNAL)
 #define _TIF_POLLING_NRFLAG	(1<<TIF_POLLING_NRFLAG)
 #define _TIF_32BIT		(1<<TIF_32BIT)
@@ -142,9 +144,10 @@ void arch_setup_new_exec(void);
 				 _TIF_SYSCALL_EMU)

 #define _TIF_USER_WORK_MASK	(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
-				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
-				 _TIF_RESTORE_TM | _TIF_PATCH_PENDING | \
-				 _TIF_NOTIFY_SIGNAL)
+				 _TIF_NEED_RESCHED_LAZY | _TIF_NOTIFY_RESUME | \
+				 _TIF_UPROBE | _TIF_RESTORE_TM | \
+				 _TIF_PATCH_PENDING | _TIF_NOTIFY_SIGNAL)
+
 #define _TIF_PERSYSCALL_MASK	(_TIF_RESTOREALL|_TIF_NOERROR)

 /* Bits in local_flags */
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index af62ec974b97..8f4acc55407b 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -185,7 +185,7 @@ interrupt_exit_user_prepare_main(unsigned long ret, struct pt_regs *regs)
 	ti_flags = read_thread_flags();
 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
 		local_irq_enable();
-		if (ti_flags & _TIF_NEED_RESCHED) {
+		if (ti_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)) {
 			schedule();
 		} else {
 			/*
@@ -396,7 +396,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
 	/* Returning to a kernel context with local irqs enabled. */
 	WARN_ON_ONCE(!(regs->msr & MSR_EE));
 again:
-	if (IS_ENABLED(CONFIG_PREEMPT)) {
+	if (IS_ENABLED(CONFIG_PREEMPTION)) {
 		/* Return to preemptible kernel context */
 		if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
 			if (preempt_count() == 0)
--
2.43.5
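
To make the single-instruction point in the commit message concrete, here
is a minimal standalone sketch (not from the patch; the value of
TIF_NEED_RESCHED is an assumption based on mainline powerpc). Because both
bits sit in the low 16 bits of the thread flags, the combined mask fits
the 16-bit unsigned immediate of powerpc's andi. instruction, so the
compiler can test both flags at once:

	/*
	 * Standalone sketch, not from the patch.  TIF_NEED_RESCHED = 2 is
	 * assumed (as in mainline powerpc); TIF_NEED_RESCHED_LAZY = 9 is
	 * the bit the patch picks.  The combined mask is 0x204, well
	 * within a 16-bit immediate, so the flag test compiles to a
	 * single andi. on powerpc.
	 */
	#define TIF_NEED_RESCHED	2
	#define TIF_NEED_RESCHED_LAZY	9

	static inline int need_resched_now_or_lazy(unsigned long ti_flags)
	{
		return (ti_flags & ((1UL << TIF_NEED_RESCHED) |
				    (1UL << TIF_NEED_RESCHED_LAZY))) != 0;
	}

Had the lazy bit landed above bit 15, the same test would need the mask
loaded into a register first, costing an extra instruction in a hot path.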
* Re: [PATCH v2 1/2] powerpc: Add preempt lazy support
From: Christophe Leroy @ 2024-11-26 10:53 UTC (permalink / raw)
To: Shrikanth Hegde, mpe, linuxppc-dev, Luming Yu
Cc: npiggin, maddy, bigeasy, ankur.a.arora, linux-kernel

On 16/11/2024 at 20:23, Shrikanth Hegde wrote:
> Define the preempt lazy bit for powerpc. Use bit 9, which is free and
> within the 16-bit range of NEED_RESCHED, so the compiler can issue a
> single andi.
>
> Since powerpc doesn't use the generic entry/exit code, add the lazy check
> at exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it
> for return to kernel.

FWIW, there is work in progress on using generic entry/exit for powerpc;
if you can help with testing it, that would help. See
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/F0AE0A4571CE3126+20241111031934.1579-2-luming.yu@shingroup.cn/

Christophe
* Re: [PATCH v2 1/2] powerpc: Add preempt lazy support
From: Shrikanth Hegde @ 2024-12-01 19:28 UTC (permalink / raw)
To: Christophe Leroy, Luming Yu
Cc: npiggin, maddy, bigeasy, ankur.a.arora, linux-kernel, mpe,
    linuxppc-dev, Thomas Gleixner

On 11/26/24 16:23, Christophe Leroy wrote:
>
> On 16/11/2024 at 20:23, Shrikanth Hegde wrote:
>> Define the preempt lazy bit for powerpc. Use bit 9, which is free and
>> within the 16-bit range of NEED_RESCHED, so the compiler can issue a
>> single andi.
>>
>> Since powerpc doesn't use the generic entry/exit code, add the lazy check
>> at exit to user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it
>> for return to kernel.
>
> FWIW, there is work in progress on using generic entry/exit for powerpc;
> if you can help with testing it, that would help. See
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/F0AE0A4571CE3126+20241111031934.1579-2-luming.yu@shingroup.cn/
>

I gave that series a try. After a lot of manual patching on the tip tree
and removing a multiple definition of regs_irqs_disabled, I was able to
compile and boot.

I am skimming through the series, but as far as I understand from the
comments, it needs to be redesigned, and I probably see why. I will go
through it in more detail to figure out how to do it better. I believe
the changes have to stem from interrupt_64.S.

As far as the overlap between this patch and that series is concerned, it
is in the NEED_RESCHED bits. There are at least a couple of major issues
in that patch series:

1. It only tries to move exit to user to the generic code; exit to kernel
is not moved. There is a generic irqentry_exit that handles it for
generic code, but the powerpc exit-to-kernel path is still there.

2. Even for exit to user, it ends up calling exit_to_user_mode_loop twice
for the same syscall, which seems wrong: once in
interrupt_exit_user_prepare_main and once through
syscall_exit_to_user_mode in syscall_exit_prepare.

> Christophe
>

So I guess, if and when we do eventually move, these checks would be
removed at that point along with the rest of the code.
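
For readers following this subthread, here is a rough paraphrase of the
generic exit-to-user loop in kernel/entry/common.c (simplified from
memory, details trimmed; not a verbatim copy). It shows what the
powerpc-local check in interrupt_exit_user_prepare_main() would be
replaced by after a move to generic entry, and why calling it twice per
syscall doubles the work:

	/*
	 * Rough paraphrase of the generic exit_to_user_mode_loop()
	 * (kernel/entry/common.c), heavily simplified.  Both resched
	 * bits are part of the generic work mask, just as this patch
	 * folds _TIF_NEED_RESCHED_LAZY into _TIF_USER_WORK_MASK.
	 */
	static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
						    unsigned long ti_work)
	{
		/* Loop until no exit-to-user work is pending. */
		while (ti_work & EXIT_TO_USER_MODE_WORK) {
			local_irq_enable_exit_to_user(ti_work);

			if (ti_work & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
				schedule();

			/* ... signal, notify-resume, uprobe handling trimmed ... */

			local_irq_disable_exit_to_user();
			ti_work = read_thread_flags();
		}
		return ti_work;
	}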
* Re: [PATCH v2 1/2] powerpc: Add preempt lazy support
From: Luming Yu @ 2024-12-09 7:43 UTC (permalink / raw)
To: Shrikanth Hegde
Cc: Christophe Leroy, npiggin, maddy, bigeasy, ankur.a.arora,
    linux-kernel, mpe, linuxppc-dev, Thomas Gleixner

On Mon, Dec 02, 2024 at 12:58:59AM +0530, Shrikanth Hegde wrote:
> On 11/26/24 16:23, Christophe Leroy wrote:
> > FWIW, there is work in progress on using generic entry/exit for powerpc;
> > if you can help with testing it, that would help.
>
> I gave that series a try. After a lot of manual patching on the tip tree
> and removing a multiple definition of regs_irqs_disabled, I was able to
> compile and boot.
>
> I am skimming through the series, but as far as I understand from the
> comments, it needs to be redesigned, and I probably see why. I will go
> through it in more detail to figure out how to do it better. I believe
> the changes have to stem from interrupt_64.S.

Thanks for your time on this work. When you do it, besides compile, boot,
and the ppc linux-ci GitHub workflow, please also run the selftests

	make -C tools/testing/selftests TARGETS=seccomp run_tests

to make sure all 98 seccomp unit tests pass.

> As far as the overlap between this patch and that series is concerned, it
> is in the NEED_RESCHED bits. There are at least a couple of major issues
> in that patch series:
>
> 1. It only tries to move exit to user to the generic code; exit to kernel
> is not moved. There is a generic irqentry_exit that handles it for
> generic code, but the powerpc exit-to-kernel path is still there.
>
> 2. Even for exit to user, it ends up calling exit_to_user_mode_loop twice
> for the same syscall, which seems wrong: once in
> interrupt_exit_user_prepare_main and once through
> syscall_exit_to_user_mode in syscall_exit_prepare.

I knew this, as the ppc system call uses the interrupt path. I planned to
delete the ppc-specific interrupt_exit_user_prepare_main later in this
series. Now it might be good timing to complete that, as there is a real
use case that needs it to be done. :-)
* [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Shrikanth Hegde @ 2024-11-16 19:23 UTC (permalink / raw)
To: mpe, linuxppc-dev
Cc: sshegde, npiggin, christophe.leroy, maddy, bigeasy, ankur.a.arora,
    linux-kernel

Large user copy_to/from (more than 16 bytes) uses vmx instructions to
speed things up. Once the copy is done, it makes sense to try to schedule
as soon as possible for preemptible kernels. So do this for
preempt=full/lazy and rt kernels.

Not checking for the lazy bit here, since it could lead to unnecessary
context switches.

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 arch/powerpc/lib/vmx-helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index d491da8d1838..58ed6bd613a6 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
 	 * set and we are preemptible. The hack here is to schedule a
 	 * decrementer to fire here and reschedule for us if necessary.
 	 */
-	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
+	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
 		set_dec(1);
 	return 0;
 }
--
2.43.5
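
For context on the hack this patch touches, here is a simplified
paraphrase of the surrounding code in arch/powerpc/lib/vmx-helper.c
(details trimmed; not the full file). Preemption stays disabled across
the whole VMX copy, so the exit path cannot simply call preempt_enable();
instead it arms the decrementer so the reschedule happens via the
interrupt-exit path, which saves and restores the full state (see the
AMR discussion further down the thread):

	/*
	 * Simplified paraphrase of arch/powerpc/lib/vmx-helper.c,
	 * details trimmed.  The copy runs with preemption off because
	 * the kernel VMX state cannot survive a context switch.
	 */
	int enter_vmx_usercopy(void)
	{
		if (in_interrupt())
			return 0;

		preempt_disable();
		enable_kernel_altivec();	/* allow VMX use in the kernel */
		return 1;
	}

	int exit_vmx_usercopy(void)
	{
		disable_kernel_altivec();
		preempt_enable_no_resched();	/* deliberately no resched here */
		/*
		 * Arm a near-immediate decrementer interrupt; the
		 * interrupt-exit path then reschedules safely.
		 */
		if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
			set_dec(1);
		return 0;
	}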
* Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Ankur Arora @ 2024-11-19 21:08 UTC (permalink / raw)
To: Shrikanth Hegde
Cc: mpe, linuxppc-dev, npiggin, christophe.leroy, maddy, bigeasy,
    ankur.a.arora, linux-kernel

Shrikanth Hegde <sshegde@linux.ibm.com> writes:

> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
> speed things up. Once the copy is done, it makes sense to try to schedule
> as soon as possible for preemptible kernels. So do this for
> preempt=full/lazy and rt kernels.

Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
So when power supports PREEMPT_DYNAMIC, this will need to change to
preempt_model_*() based checks.

> Not checking for the lazy bit here, since it could lead to unnecessary
> context switches.

Maybe:
Not checking for the lazy bit here, since we only want to schedule when
a context switch is imminently required.

> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>

--
ankur
* Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Sebastian Andrzej Siewior @ 2024-11-20 8:03 UTC (permalink / raw)
To: Ankur Arora
Cc: Shrikanth Hegde, mpe, linuxppc-dev, npiggin, christophe.leroy,
    maddy, linux-kernel

On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
>
> Shrikanth Hegde <sshegde@linux.ibm.com> writes:
>
> > Large user copy_to/from (more than 16 bytes) uses vmx instructions to
> > speed things up. Once the copy is done, it makes sense to try to schedule
> > as soon as possible for preemptible kernels. So do this for
> > preempt=full/lazy and rt kernels.
>
> Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
> So when power supports PREEMPT_DYNAMIC, this will need to change to
> preempt_model_*() based checks.
>
> > Not checking for the lazy bit here, since it could lead to unnecessary
> > context switches.
>
> Maybe:
> Not checking for the lazy bit here, since we only want to schedule when
> a context switch is imminently required.

Isn't this behaviour exactly what preempt_enable() would do? If the LAZY
bit is set, it is delayed until return to userland or until an explicit
schedule() is done. If the LAZY bit is turned into an actual scheduling
request, then it is acted upon.

Sebastian
* Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Shrikanth Hegde @ 2024-11-20 15:33 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, Ankur Arora
Cc: mpe, linuxppc-dev, npiggin, christophe.leroy, maddy, linux-kernel,
    vschneid, mark.rutland

On 11/20/24 13:33, Sebastian Andrzej Siewior wrote:
> On 2024-11-19 13:08:31 [-0800], Ankur Arora wrote:
>>
>> Shrikanth Hegde <sshegde@linux.ibm.com> writes:
>>

Thanks Ankur and Sebastian for taking a look.

>>> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
>>> speed things up. Once the copy is done, it makes sense to try to schedule
>>> as soon as possible for preemptible kernels. So do this for
>>> preempt=full/lazy and rt kernels.
>>
>> Note that this check will also fire for PREEMPT_DYNAMIC && preempt=none.
>> So when power supports PREEMPT_DYNAMIC, this will need to change to
>> preempt_model_*() based checks.

Yes. This and return to kernel both need to change when PowerPC supports
PREEMPT_DYNAMIC. I have a patch in the works in which I essentially check
for the preemption model, either as below or based on a static key.

-	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
+	if (preempt_model_preemptible() && need_resched())

+mark +valentin

Looking more into how PREEMPT_DYNAMIC works with static keys, I have one
query. This is more about PREEMPT_DYNAMIC than anything to do with LAZY.
I see many places use a static_key based check instead of
preempt_model_preemptible, such as dynamic_preempt_schedule. Is that
because the static_key is faster? On the other hand, using
preempt_model_preemptible could make the code simpler.

>>
>>> Not checking for the lazy bit here, since it could lead to unnecessary
>>> context switches.
>>
>> Maybe:
>> Not checking for the lazy bit here, since we only want to schedule when
>> a context switch is imminently required.
>
> Isn't this behaviour exactly what preempt_enable() would do? If the LAZY
> bit is set, it is delayed until return to userland or until an explicit
> schedule() is done. If the LAZY bit is turned into an actual scheduling
> request, then it is acted upon.
>
> Sebastian
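
On the static_key question just above, here is a rough paraphrase of the
PREEMPT_DYNAMIC wiring in kernel/sched/core.c for architectures using the
static-key flavour (simplified; details trimmed; the symbol names are
taken from mainline but the body is a sketch, not a verbatim copy). The
key is flipped once when the preemption model is chosen, so the hot path
pays only a patched branch rather than the model lookup done by
preempt_model_preemptible():

	/*
	 * Rough paraphrase of the PREEMPT_DYNAMIC static-key pattern in
	 * kernel/sched/core.c.  The branch below is patched in or out
	 * when the preemption model is switched, making the disabled
	 * case nearly free in hot paths.
	 */
	DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);

	void __sched notrace dynamic_preempt_schedule(void)
	{
		/* Patched out when the model is none/voluntary. */
		if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
			return;
		preempt_schedule();
	}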
* Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Sebastian Andrzej Siewior @ 2024-11-20 8:00 UTC (permalink / raw)
To: Shrikanth Hegde
Cc: mpe, linuxppc-dev, npiggin, christophe.leroy, maddy, ankur.a.arora,
    linux-kernel

On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
> Large user copy_to/from (more than 16 bytes) uses vmx instructions to
> speed things up. Once the copy is done, it makes sense to try to schedule
> as soon as possible for preemptible kernels. So do this for
> preempt=full/lazy and rt kernels.
>
> Not checking for the lazy bit here, since it could lead to unnecessary
> context switches.
>
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> ---
>  arch/powerpc/lib/vmx-helper.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
> index d491da8d1838..58ed6bd613a6 100644
> --- a/arch/powerpc/lib/vmx-helper.c
> +++ b/arch/powerpc/lib/vmx-helper.c
> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>  	 * set and we are preemptible. The hack here is to schedule a
>  	 * decrementer to fire here and reschedule for us if necessary.
>  	 */
> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>  		set_dec(1);

Now looking at this again, there is a comment on why preempt_enable() is
bad. An interrupt between preempt_enable_no_resched() and set_dec() is
fine because irq-exit would preempt properly? Regular preemption works
again once copy_to_user() is done? So if you copy 1GiB, you are blocked
for that 1GiB?

>  	return 0;
>  }

Sebastian
* Re: [PATCH v2 2/2] powerpc: Large user copy aware of full:rt:lazy preemption
From: Shrikanth Hegde @ 2024-11-20 18:10 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: mpe, linuxppc-dev, npiggin, christophe.leroy, maddy, ankur.a.arora,
    linux-kernel

On 11/20/24 13:30, Sebastian Andrzej Siewior wrote:
> On 2024-11-17 00:53:06 [+0530], Shrikanth Hegde wrote:
>> @@ -45,7 +45,7 @@ int exit_vmx_usercopy(void)
>>  	 * set and we are preemptible. The hack here is to schedule a
>>  	 * decrementer to fire here and reschedule for us if necessary.
>>  	 */
>> -	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
>> +	if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched())
>>  		set_dec(1);
>
> Now looking at this again, there is a comment on why preempt_enable() is
> bad. An interrupt between preempt_enable_no_resched() and set_dec() is
> fine because irq-exit would preempt properly?

I think so. AFAIU, the comment says the issue lies with the AMR register
not being saved across a context switch. interrupt_exit_kernel_prepare
saves it and restores it using kuap_kernel_restore.

> Regular preemption works again once copy_to_user() is done? So if you
> copy 1GiB, you are blocked for that 1GiB?

Yes, regular preemption would work on exit of copy_to_user. Since the
preempt_disable was done before the copy starts, I think yes, it would be
blocked until the copy is complete.

>>  	return 0;
>>  }
>
> Sebastian

Nick, mpe; please correct me if I am wrong.
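
To illustrate the point about the AMR register, here is a heavily trimmed
sketch of the kernel interrupt-exit path being discussed (a paraphrase of
the shape of interrupt_exit_kernel_prepare() in
arch/powerpc/kernel/interrupt.c, not its actual contents; the exact call
sequence is an assumption). The decrementer hack routes rescheduling
through this path precisely because the KUAP/AMR state is captured and
restored around the preemption point, which a bare preempt_enable()
inside the copy helper would not do:

	/*
	 * Sketch only: a paraphrase of how the kernel interrupt-exit
	 * path brackets the preemption point with KUAP/AMR handling
	 * (kuap_get_and_assert_locked()/kuap_kernel_restore() are the
	 * mainline helpers; the surrounding logic here is simplified).
	 */
	notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
	{
		unsigned long kuap = kuap_get_and_assert_locked();

		if (IS_ENABLED(CONFIG_PREEMPTION) &&
		    unlikely(read_thread_flags() & _TIF_NEED_RESCHED) &&
		    preempt_count() == 0)
			preempt_schedule_irq();	/* AMR state is safe here */

		kuap_kernel_restore(regs, kuap);
		return 0;
	}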
* Re: [PATCH v2 0/2] powerpc: Enable lazy preemption
From: Shrikanth Hegde @ 2024-12-10 11:22 UTC (permalink / raw)
To: mpe, maddy
Cc: npiggin, christophe.leroy, bigeasy, ankur.a.arora, linux-kernel,
    linuxppc-dev

On 11/17/24 00:53, Shrikanth Hegde wrote:
> preempt=lazy has been merged into tip [1]. Let's enable it for PowerPC.
>
> This has been very lightly tested and, as Michael suggested, it could go
> through a test cycle. The two patches could be squashed if needed; I have
> kept them separate for easier bisection.
>
> Lazy preemption support for KVM on powerpc is still to be done.

Hi mpe, maddy.

I see the lazy scheduling related changes are in the powerpc tree. If
there are no objections, can we please add support for lazy preemption so
it goes through a cycle? Let me know your thoughts.
* Re: [PATCH v2 0/2] powerpc: Enable lazy preemption
From: Madhavan Srinivasan @ 2025-01-01 9:08 UTC (permalink / raw)
To: mpe, linuxppc-dev, Shrikanth Hegde
Cc: npiggin, christophe.leroy, bigeasy, ankur.a.arora, linux-kernel

On Sun, 17 Nov 2024 00:53:04 +0530, Shrikanth Hegde wrote:
> preempt=lazy has been merged into tip [1]. Let's enable it for PowerPC.
>
> This has been very lightly tested and, as Michael suggested, it could go
> through a test cycle. The two patches could be squashed if needed; I have
> kept them separate for easier bisection.
>
> Lazy preemption support for KVM on powerpc is still to be done.
>
> [...]

Applied to powerpc/next.

[1/2] powerpc: Add preempt lazy support
      https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=next&id=00199ed6f2ca6601b2c5856fac64132303d9437a
[2/2] powerpc: Large user copy aware of full:rt:lazy preemption
      https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=next&id=eda86a41a1c7700757c9217f74b9d57431c3e5f4

Thanks