* [PATCH v5 00/10] MIPS: IPI Improvements
@ 2024-09-08 10:20 Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable Jiaxun Yang
` (9 more replies)
0 siblings, 10 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Hi all,
This series improves the general handling of MIPS IPI interrupts,
makes the number of IPIs scalable, and switches GENERIC_IRQ_IPI
users over to the generic IPI mux.
It is a prerequisite for enabling IRQ_WORK for MIPS.
It has been tested on MIPS Boston I6500, Malta CoreFPGA3 47K MT/
interAptiv MPF, Loongson-2K, Cavium CN7130 (EdgeRouter 4), and an
unannounced interAptiv UP MT platform with EIC.
I don't know the Broadcom and SGI platforms well, so changes to
those platforms are kept minimal (no functional change).
Please review.
Thanks
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
Changes in v5:
- Actual v4, v4 was sent in mistake
- Link to v4: https://lore.kernel.org/r/20240907-b4-mips-ipi-improvements-v4-0-ac288f9aff0b@flygoat.com
Changes in v4:
- irqchip commit message and code style fixes (tglx)
- Link to v3: https://lore.kernel.org/r/20240810-b4-mips-ipi-improvements-v3-0-1224fd7c4096@flygoat.com
Changes in v3:
- Fix build errors reported by kernel test bot
- Rebasing to current next
- Link to v2: https://lore.kernel.org/r/20240705-b4-mips-ipi-improvements-v2-0-2d50b56268e8@flygoat.com
Changes in v2:
- Build warning fixes
- Massage commit messages
- Link to v1: https://lore.kernel.org/r/20240616-b4-mips-ipi-improvements-v1-0-e332687f1692@flygoat.com
---
Jiaxun Yang (10):
MIPS: smp: Make IPI interrupts scalable
MIPS: smp: Manage IPI interrupts as percpu_devid interrupts
MIPS: smp: Provide platform IPI virq & domain hooks
MIPS: Move mips_smp_ipi_init call after prepare_cpus
MIPS: smp: Implement IPI stats
irqchip/irq-mips-gic: Switch to ipi_mux
MIPS: Implement get_mips_sw_int hook
MIPS: GIC: Implement get_sw_int hook
irqchip/irq-mips-cpu: Rework software IRQ handling flow
MIPS: smp-mt: Rework IPI functions
arch/mips/Kconfig | 2 +
arch/mips/cavium-octeon/smp.c | 111 ++++++-----------
arch/mips/fw/arc/init.c | 1 -
arch/mips/generic/irq.c | 15 +++
arch/mips/include/asm/ipi.h | 71 +++++++++++
arch/mips/include/asm/irq.h | 1 +
arch/mips/include/asm/irq_cpu.h | 3 +
arch/mips/include/asm/mips-gic.h | 10 ++
arch/mips/include/asm/octeon/octeon.h | 2 +
arch/mips/include/asm/smp-ops.h | 8 +-
arch/mips/include/asm/smp.h | 41 +++----
arch/mips/kernel/irq.c | 21 ++++
arch/mips/kernel/smp-bmips.c | 43 ++++---
arch/mips/kernel/smp-cps.c | 2 +
arch/mips/kernel/smp-mt.c | 70 +++++++++++
arch/mips/kernel/smp.c | 213 ++++++++++++++++++++-------------
arch/mips/loongson64/smp.c | 24 ++--
arch/mips/mm/c-octeon.c | 3 +-
arch/mips/sgi-ip27/ip27-smp.c | 15 ++-
arch/mips/sgi-ip30/ip30-smp.c | 15 ++-
arch/mips/sibyte/bcm1480/smp.c | 19 +--
arch/mips/sibyte/sb1250/smp.c | 13 +-
drivers/irqchip/Kconfig | 2 +-
drivers/irqchip/irq-mips-cpu.c | 191 +++++++++---------------------
drivers/irqchip/irq-mips-gic.c | 217 +++++++++++++---------------------
25 files changed, 590 insertions(+), 523 deletions(-)
---
base-commit: 61c01d2e181adfba02fe09764f9fca1de2be0dbe
change-id: 20240616-b4-mips-ipi-improvements-f8c86b1dc677
Best regards,
--
Jiaxun Yang <jiaxun.yang@flygoat.com>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-12 3:47 ` Florian Fainelli
2024-09-08 10:20 ` [PATCH v5 02/10] MIPS: smp: Manage IPI interrupts as percpu_devid interrupts Jiaxun Yang
` (8 subsequent siblings)
9 siblings, 1 reply; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Define enum ipi_message_type, as other architectures do, to allow
the number of IPI interrupts to be extended easily, rework the
platform IPI code to adapt to the new infrastructure, and add
BUILD_BUG_ON checks on the IPI count to ensure future extensions
won't break existing platforms.
IPI-related declarations are pulled into asm/ipi.h to avoid
including linux/interrupt.h in asm/smp.h.
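For illustration (a sketch only; my_icache_flush and my_smp_setup are
hypothetical names), a platform opts into an extra IPI type by filling
in the shared handler/name tables before the IPIs are requested, which
is the pattern the Octeon changes below follow:

  /* Sketch: platform-specific IPI registration */
  static irqreturn_t my_icache_flush(int irq, void *dev_id)
  {
          /* platform-specific work */
          return IRQ_HANDLED;
  }

  static void __init my_smp_setup(void)
  {
          ipi_handlers[IPI_ICACHE_FLUSH] = my_icache_flush;
          ipi_names[IPI_ICACHE_FLUSH] = "ICache Flush";
  }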
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/cavium-octeon/smp.c | 111 ++++++++++--------------------
arch/mips/fw/arc/init.c | 1 -
arch/mips/include/asm/ipi.h | 34 ++++++++++
arch/mips/include/asm/octeon/octeon.h | 2 +
arch/mips/include/asm/smp-ops.h | 8 +--
arch/mips/include/asm/smp.h | 41 +++++------
arch/mips/kernel/smp-bmips.c | 43 ++++++------
arch/mips/kernel/smp-cps.c | 1 +
arch/mips/kernel/smp.c | 124 +++++++++++++++++-----------------
arch/mips/loongson64/smp.c | 24 +++----
arch/mips/mm/c-octeon.c | 3 +-
arch/mips/sgi-ip27/ip27-smp.c | 15 ++--
arch/mips/sgi-ip30/ip30-smp.c | 15 ++--
arch/mips/sibyte/bcm1480/smp.c | 19 +++---
arch/mips/sibyte/sb1250/smp.c | 13 ++--
15 files changed, 221 insertions(+), 233 deletions(-)
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index 08ea2cde1eb5..229bb8f1f791 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -17,6 +17,7 @@
#include <linux/export.h>
#include <linux/kexec.h>
+#include <asm/ipi.h>
#include <asm/mmu_context.h>
#include <asm/time.h>
#include <asm/setup.h>
@@ -40,30 +41,19 @@ EXPORT_SYMBOL(octeon_bootloader_entry_addr);
extern void kernel_entry(unsigned long arg1, ...);
-static void octeon_icache_flush(void)
+static irqreturn_t octeon_icache_flush(int irq, void *dev_id)
{
asm volatile ("synci 0($0)\n");
+ return IRQ_HANDLED;
}
-static void (*octeon_message_functions[8])(void) = {
- scheduler_ipi,
- generic_smp_call_function_interrupt,
- octeon_icache_flush,
-};
-
static irqreturn_t mailbox_interrupt(int irq, void *dev_id)
{
u64 mbox_clrx = CVMX_CIU_MBOX_CLRX(cvmx_get_core_num());
- u64 action;
- int i;
+ unsigned long action;
+ int op;
- /*
- * Make sure the function array initialization remains
- * correct.
- */
- BUILD_BUG_ON(SMP_RESCHEDULE_YOURSELF != (1 << 0));
- BUILD_BUG_ON(SMP_CALL_FUNCTION != (1 << 1));
- BUILD_BUG_ON(SMP_ICACHE_FLUSH != (1 << 2));
+ BUILD_BUG_ON(IPI_MAX > 8);
/*
* Load the mailbox register to figure out what we're supposed
@@ -79,16 +69,10 @@ static irqreturn_t mailbox_interrupt(int irq, void *dev_id)
/* Clear the mailbox to clear the interrupt */
cvmx_write_csr(mbox_clrx, action);
- for (i = 0; i < ARRAY_SIZE(octeon_message_functions) && action;) {
- if (action & 1) {
- void (*fn)(void) = octeon_message_functions[i];
-
- if (fn)
- fn();
- }
- action >>= 1;
- i++;
+ for_each_set_bit(op, &action, IPI_MAX) {
+ ipi_handlers[op](0, NULL);
}
+
return IRQ_HANDLED;
}
@@ -97,23 +81,23 @@ static irqreturn_t mailbox_interrupt(int irq, void *dev_id)
* cpu. When the function has finished, increment the finished field of
* call_data.
*/
-void octeon_send_ipi_single(int cpu, unsigned int action)
+void octeon_send_ipi_single(int cpu, enum ipi_message_type op)
{
int coreid = cpu_logical_map(cpu);
/*
pr_info("SMP: Mailbox send cpu=%d, coreid=%d, action=%u\n", cpu,
coreid, action);
*/
- cvmx_write_csr(CVMX_CIU_MBOX_SETX(coreid), action);
+ cvmx_write_csr(CVMX_CIU_MBOX_SETX(coreid), op);
}
-static inline void octeon_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+static void octeon_send_ipi_mask(const struct cpumask *mask,
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- octeon_send_ipi_single(i, action);
+ octeon_send_ipi_single(i, op);
}
/*
@@ -149,6 +133,9 @@ static void __init octeon_smp_setup(void)
unsigned int num_cores = cvmx_octeon_num_cores();
#endif
+ ipi_handlers[IPI_ICACHE_FLUSH] = octeon_icache_flush;
+ ipi_names[IPI_ICACHE_FLUSH] = "Octeon ICache Flush";
+
/* The present CPUs are initially just the boot cpu (CPU 0). */
for (id = 0; id < NR_CPUS; id++) {
set_cpu_possible(id, id == 0);
@@ -427,67 +414,41 @@ static const struct plat_smp_ops octeon_smp_ops = {
#endif
};
-static irqreturn_t octeon_78xx_reched_interrupt(int irq, void *dev_id)
-{
- scheduler_ipi();
- return IRQ_HANDLED;
-}
-
-static irqreturn_t octeon_78xx_call_function_interrupt(int irq, void *dev_id)
-{
- generic_smp_call_function_interrupt();
- return IRQ_HANDLED;
-}
-
-static irqreturn_t octeon_78xx_icache_flush_interrupt(int irq, void *dev_id)
-{
- octeon_icache_flush();
- return IRQ_HANDLED;
-}
-
/*
* Callout to firmware before smp_init
*/
static void octeon_78xx_prepare_cpus(unsigned int max_cpus)
{
- if (request_irq(OCTEON_IRQ_MBOX0 + 0,
- octeon_78xx_reched_interrupt,
- IRQF_PERCPU | IRQF_NO_THREAD, "Scheduler",
- octeon_78xx_reched_interrupt)) {
- panic("Cannot request_irq for SchedulerIPI");
- }
- if (request_irq(OCTEON_IRQ_MBOX0 + 1,
- octeon_78xx_call_function_interrupt,
- IRQF_PERCPU | IRQF_NO_THREAD, "SMP-Call",
- octeon_78xx_call_function_interrupt)) {
- panic("Cannot request_irq for SMP-Call");
- }
- if (request_irq(OCTEON_IRQ_MBOX0 + 2,
- octeon_78xx_icache_flush_interrupt,
- IRQF_PERCPU | IRQF_NO_THREAD, "ICache-Flush",
- octeon_78xx_icache_flush_interrupt)) {
- panic("Cannot request_irq for ICache-Flush");
+ int i;
+
+ /*
+ * FIXME: Hardware has 10 MBOXes but only 4 virqs are reserved
+ * for CIU3 MBOX.
+ */
+ BUILD_BUG_ON(IPI_MAX > 4);
+
+ for (i = 0; i < IPI_MAX; i++) {
+ if (request_irq(OCTEON_IRQ_MBOX0 + i,
+ ipi_handlers[i],
+ IRQF_PERCPU | IRQF_NO_THREAD, "IPI",
+ ipi_handlers[i])) {
+ panic("Cannot request_irq for %s", ipi_names[i]);
+ }
}
}
-static void octeon_78xx_send_ipi_single(int cpu, unsigned int action)
+static void octeon_78xx_send_ipi_single(int cpu, enum ipi_message_type op)
{
- int i;
-
- for (i = 0; i < 8; i++) {
- if (action & 1)
- octeon_ciu3_mbox_send(cpu, i);
- action >>= 1;
- }
+ octeon_ciu3_mbox_send(cpu, op);
}
static void octeon_78xx_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+ enum ipi_message_type op)
{
unsigned int cpu;
for_each_cpu(cpu, mask)
- octeon_78xx_send_ipi_single(cpu, action);
+ octeon_78xx_send_ipi_single(cpu, op);
}
static const struct plat_smp_ops octeon_78xx_smp_ops = {
diff --git a/arch/mips/fw/arc/init.c b/arch/mips/fw/arc/init.c
index f9d1dea9b2ca..3d69d2f851bc 100644
--- a/arch/mips/fw/arc/init.c
+++ b/arch/mips/fw/arc/init.c
@@ -12,7 +12,6 @@
#include <asm/bootinfo.h>
#include <asm/sgialib.h>
-#include <asm/smp-ops.h>
#undef DEBUG_PROM_INIT
diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
new file mode 100644
index 000000000000..df7a0ac4227a
--- /dev/null
+++ b/arch/mips/include/asm/ipi.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/cpumask.h>
+#include <linux/interrupt.h>
+
+#ifndef __ASM_IPI_H
+#define __ASM_IPI_H
+
+#ifdef CONFIG_SMP
+extern const char *ipi_names[];
+extern irq_handler_t ipi_handlers[];
+
+#ifdef CONFIG_GENERIC_IRQ_IPI
+extern void mips_smp_send_ipi_single(int cpu,
+ enum ipi_message_type op);
+extern void mips_smp_send_ipi_mask(const struct cpumask *mask,
+ enum ipi_message_type op);
+
+/*
+ * This function will set up the necessary IPIs for Linux to communicate
+ * with the CPUs in mask.
+ * Return 0 on success.
+ */
+int mips_smp_ipi_allocate(const struct cpumask *mask);
+
+/*
+ * This function will free up IPIs allocated with mips_smp_ipi_allocate to the
+ * CPUs in mask, which must be a subset of the IPIs that have been configured.
+ * Return 0 on success.
+ */
+int mips_smp_ipi_free(const struct cpumask *mask);
+#endif /* CONFIG_GENERIC_IRQ_IPI */
+#endif /* CONFIG_SMP */
+#endif
diff --git a/arch/mips/include/asm/octeon/octeon.h b/arch/mips/include/asm/octeon/octeon.h
index 5c1d726c702f..534155e88107 100644
--- a/arch/mips/include/asm/octeon/octeon.h
+++ b/arch/mips/include/asm/octeon/octeon.h
@@ -49,6 +49,8 @@ extern void octeon_init_cvmcount(void);
extern void octeon_setup_delays(void);
extern void octeon_io_clk_delay(unsigned long);
+extern void octeon_send_ipi_single(int cpu, enum ipi_message_type op);
+
#define OCTEON_ARGV_MAX_ARGS 64
#define OCTEON_SERIAL_LEN 20
diff --git a/arch/mips/include/asm/smp-ops.h b/arch/mips/include/asm/smp-ops.h
index 1617b207723f..8cf4156cb301 100644
--- a/arch/mips/include/asm/smp-ops.h
+++ b/arch/mips/include/asm/smp-ops.h
@@ -20,8 +20,8 @@
struct task_struct;
struct plat_smp_ops {
- void (*send_ipi_single)(int cpu, unsigned int action);
- void (*send_ipi_mask)(const struct cpumask *mask, unsigned int action);
+ void (*send_ipi_single)(int cpu, enum ipi_message_type op);
+ void (*send_ipi_mask)(const struct cpumask *mask, enum ipi_message_type op);
void (*init_secondary)(void);
void (*smp_finish)(void);
int (*boot_secondary)(int cpu, struct task_struct *idle);
@@ -47,10 +47,6 @@ static inline void plat_smp_setup(void)
mp_ops->smp_setup();
}
-extern void mips_smp_send_ipi_single(int cpu, unsigned int action);
-extern void mips_smp_send_ipi_mask(const struct cpumask *mask,
- unsigned int action);
-
#else /* !CONFIG_SMP */
struct plat_smp_ops;
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index 2427d76f953f..0c7467f15014 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
@@ -16,8 +16,6 @@
#include <linux/threads.h>
#include <linux/cpumask.h>
-#include <asm/smp-ops.h>
-
extern int smp_num_siblings;
extern cpumask_t cpu_sibling_map[];
extern cpumask_t cpu_core_map[];
@@ -46,11 +44,6 @@ extern int __cpu_logical_map[NR_CPUS];
#define NO_PROC_ID (-1)
-#define SMP_RESCHEDULE_YOURSELF 0x1 /* XXX braindead */
-#define SMP_CALL_FUNCTION 0x2
-/* Octeon - Tell another core to flush its icache */
-#define SMP_ICACHE_FLUSH 0x4
-
/* Mask of CPUs which are currently definitely operating coherently */
extern cpumask_t cpu_coherent_mask;
@@ -62,6 +55,20 @@ extern void calculate_cpu_foreign_map(void);
asmlinkage void start_secondary(void);
+enum ipi_message_type {
+ IPI_RESCHEDULE,
+ IPI_CALL_FUNC,
+#ifdef CONFIG_CAVIUM_OCTEON_SOC
+ IPI_ICACHE_FLUSH,
+#endif
+#ifdef CONFIG_MACH_LOONGSON64
+ IPI_ASK_C0COUNT,
+#endif
+ IPI_MAX
+};
+
+#include <asm/smp-ops.h>
+
/*
* this function sends a 'reschedule' IPI to another CPU.
* it goes straight through and wastes no time serializing
@@ -71,7 +78,7 @@ static inline void arch_smp_send_reschedule(int cpu)
{
extern const struct plat_smp_ops *mp_ops; /* private */
- mp_ops->send_ipi_single(cpu, SMP_RESCHEDULE_YOURSELF);
+ mp_ops->send_ipi_single(cpu, IPI_RESCHEDULE);
}
#ifdef CONFIG_HOTPLUG_CPU
@@ -108,32 +115,18 @@ static inline void *kexec_nonboot_cpu_func(void)
}
#endif
-/*
- * This function will set up the necessary IPIs for Linux to communicate
- * with the CPUs in mask.
- * Return 0 on success.
- */
-int mips_smp_ipi_allocate(const struct cpumask *mask);
-
-/*
- * This function will free up IPIs allocated with mips_smp_ipi_allocate to the
- * CPUs in mask, which must be a subset of the IPIs that have been configured.
- * Return 0 on success.
- */
-int mips_smp_ipi_free(const struct cpumask *mask);
-
static inline void arch_send_call_function_single_ipi(int cpu)
{
extern const struct plat_smp_ops *mp_ops; /* private */
- mp_ops->send_ipi_single(cpu, SMP_CALL_FUNCTION);
+ mp_ops->send_ipi_single(cpu, IPI_CALL_FUNC);
}
static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
{
extern const struct plat_smp_ops *mp_ops; /* private */
- mp_ops->send_ipi_mask(mask, SMP_CALL_FUNCTION);
+ mp_ops->send_ipi_mask(mask, IPI_CALL_FUNC);
}
#endif /* __ASM_SMP_H */
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index 35b8d810833c..fa9ccefa8392 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -32,6 +32,7 @@
#include <asm/processor.h>
#include <asm/bootinfo.h>
#include <asm/cacheflush.h>
+#include <asm/ipi.h>
#include <asm/tlbflush.h>
#include <asm/mipsregs.h>
#include <asm/bmips.h>
@@ -282,36 +283,31 @@ static void bmips_smp_finish(void)
* BMIPS5000 raceless IPIs
*
* Each CPU has two inbound SW IRQs which are independent of all other CPUs.
- * IPI0 is used for SMP_RESCHEDULE_YOURSELF
- * IPI1 is used for SMP_CALL_FUNCTION
*/
-static void bmips5000_send_ipi_single(int cpu, unsigned int action)
+static void bmips5000_send_ipi_single(int cpu, enum ipi_message_type op)
{
- write_c0_brcm_action(ACTION_SET_IPI(cpu, action == SMP_CALL_FUNCTION));
+ write_c0_brcm_action(ACTION_SET_IPI(cpu, op));
}
static irqreturn_t bmips5000_ipi_interrupt(int irq, void *dev_id)
{
int action = irq - IPI0_IRQ;
- write_c0_brcm_action(ACTION_CLR_IPI(smp_processor_id(), action));
+ BUILD_BUG_ON(IPI_MAX > 2);
- if (action == 0)
- scheduler_ipi();
- else
- generic_smp_call_function_interrupt();
+ write_c0_brcm_action(ACTION_CLR_IPI(smp_processor_id(), action));
- return IRQ_HANDLED;
+ return ipi_handlers[action](0, NULL);
}
static void bmips5000_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- bmips5000_send_ipi_single(i, action);
+ bmips5000_send_ipi_single(i, op);
}
/*
@@ -325,23 +321,26 @@ static void bmips5000_send_ipi_mask(const struct cpumask *mask,
*/
static DEFINE_SPINLOCK(ipi_lock);
-static DEFINE_PER_CPU(int, ipi_action_mask);
+static DEFINE_PER_CPU(unsigned long, ipi_action_mask);
-static void bmips43xx_send_ipi_single(int cpu, unsigned int action)
+static void bmips43xx_send_ipi_single(int cpu, enum ipi_message_type op)
{
unsigned long flags;
spin_lock_irqsave(&ipi_lock, flags);
set_c0_cause(cpu ? C_SW1 : C_SW0);
- per_cpu(ipi_action_mask, cpu) |= action;
+ per_cpu(ipi_action_mask, cpu) |= BIT(op);
irq_enable_hazard();
spin_unlock_irqrestore(&ipi_lock, flags);
}
static irqreturn_t bmips43xx_ipi_interrupt(int irq, void *dev_id)
{
- unsigned long flags;
- int action, cpu = irq - IPI0_IRQ;
+ unsigned long flags, action;
+ int cpu = irq - IPI0_IRQ;
+ int op;
+
+ BUILD_BUG_ON(IPI_MAX > BITS_PER_LONG);
spin_lock_irqsave(&ipi_lock, flags);
action = __this_cpu_read(ipi_action_mask);
@@ -349,21 +348,19 @@ static irqreturn_t bmips43xx_ipi_interrupt(int irq, void *dev_id)
clear_c0_cause(cpu ? C_SW1 : C_SW0);
spin_unlock_irqrestore(&ipi_lock, flags);
- if (action & SMP_RESCHEDULE_YOURSELF)
- scheduler_ipi();
- if (action & SMP_CALL_FUNCTION)
- generic_smp_call_function_interrupt();
+ for_each_set_bit(op, &action, IPI_MAX)
+ ipi_handlers[op](0, NULL);
return IRQ_HANDLED;
}
static void bmips43xx_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- bmips43xx_send_ipi_single(i, action);
+ bmips43xx_send_ipi_single(i, op);
}
#ifdef CONFIG_HOTPLUG_CPU
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index 395622c37325..b7bcbc4770f2 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -16,6 +16,7 @@
#include <linux/irq.h>
#include <asm/bcache.h>
+#include <asm/ipi.h>
#include <asm/mips-cps.h>
#include <asm/mips_mt.h>
#include <asm/mipsregs.h>
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 0362fc5df7b0..62be2ca9f990 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -31,6 +31,7 @@
#include <asm/ginvt.h>
#include <asm/processor.h>
#include <asm/idle.h>
+#include <asm/ipi.h>
#include <asm/r4k-timer.h>
#include <asm/mips-cps.h>
#include <asm/mmu_context.h>
@@ -92,11 +93,6 @@ static int __init early_smt(char *s)
}
early_param("smt", early_smt);
-#ifdef CONFIG_GENERIC_IRQ_IPI
-static struct irq_desc *call_desc;
-static struct irq_desc *sched_desc;
-#endif
-
static inline void set_cpu_sibling_map(int cpu)
{
int i;
@@ -164,13 +160,42 @@ void register_smp_ops(const struct plat_smp_ops *ops)
mp_ops = ops;
}
+static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id)
+{
+ scheduler_ipi();
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
+{
+ generic_smp_call_function_interrupt();
+
+ return IRQ_HANDLED;
+}
+
+
+const char *ipi_names[IPI_MAX] __read_mostly = {
+ [IPI_RESCHEDULE] = "Rescheduling interrupts",
+ [IPI_CALL_FUNC] = "Function call interrupts",
+};
+
+irq_handler_t ipi_handlers[IPI_MAX] __read_mostly = {
+ [IPI_RESCHEDULE] = ipi_resched_interrupt,
+ [IPI_CALL_FUNC] = ipi_call_interrupt,
+};
+
#ifdef CONFIG_GENERIC_IRQ_IPI
-void mips_smp_send_ipi_single(int cpu, unsigned int action)
+static int ipi_virqs[IPI_MAX] __ro_after_init;
+static struct irq_desc *ipi_desc[IPI_MAX] __read_mostly;
+
+void mips_smp_send_ipi_single(int cpu, enum ipi_message_type op)
{
- mips_smp_send_ipi_mask(cpumask_of(cpu), action);
+ mips_smp_send_ipi_mask(cpumask_of(cpu), op);
}
-void mips_smp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
+void mips_smp_send_ipi_mask(const struct cpumask *mask,
+ enum ipi_message_type op)
{
unsigned long flags;
unsigned int core;
@@ -178,18 +203,7 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
local_irq_save(flags);
- switch (action) {
- case SMP_CALL_FUNCTION:
- __ipi_send_mask(call_desc, mask);
- break;
-
- case SMP_RESCHEDULE_YOURSELF:
- __ipi_send_mask(sched_desc, mask);
- break;
-
- default:
- BUG();
- }
+ __ipi_send_mask(ipi_desc[op], mask);
if (mips_cpc_present()) {
for_each_cpu(cpu, mask) {
@@ -211,21 +225,6 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
local_irq_restore(flags);
}
-
-static irqreturn_t ipi_resched_interrupt(int irq, void *dev_id)
-{
- scheduler_ipi();
-
- return IRQ_HANDLED;
-}
-
-static irqreturn_t ipi_call_interrupt(int irq, void *dev_id)
-{
- generic_smp_call_function_interrupt();
-
- return IRQ_HANDLED;
-}
-
static void smp_ipi_init_one(unsigned int virq, const char *name,
irq_handler_t handler)
{
@@ -236,11 +235,9 @@ static void smp_ipi_init_one(unsigned int virq, const char *name,
BUG_ON(ret);
}
-static unsigned int call_virq, sched_virq;
-
int mips_smp_ipi_allocate(const struct cpumask *mask)
{
- int virq;
+ int virq, i;
struct irq_domain *ipidomain;
struct device_node *node;
@@ -267,33 +264,30 @@ int mips_smp_ipi_allocate(const struct cpumask *mask)
* setup, if we're running with only a single CPU.
*/
if (!ipidomain) {
- BUG_ON(num_present_cpus() > 1);
+ WARN_ON(num_present_cpus() > 1);
return 0;
}
- virq = irq_reserve_ipi(ipidomain, mask);
- BUG_ON(!virq);
- if (!call_virq)
- call_virq = virq;
-
- virq = irq_reserve_ipi(ipidomain, mask);
- BUG_ON(!virq);
- if (!sched_virq)
- sched_virq = virq;
+ for (i = 0; i < IPI_MAX; i++) {
+ virq = irq_reserve_ipi(ipidomain, mask);
+ WARN_ON(!virq);
+ ipi_virqs[i] = virq;
+ }
if (irq_domain_is_ipi_per_cpu(ipidomain)) {
int cpu;
for_each_cpu(cpu, mask) {
- smp_ipi_init_one(call_virq + cpu, "IPI call",
- ipi_call_interrupt);
- smp_ipi_init_one(sched_virq + cpu, "IPI resched",
- ipi_resched_interrupt);
+ for (i = 0; i < IPI_MAX; i++) {
+ smp_ipi_init_one(ipi_virqs[i] + cpu, ipi_names[i],
+ ipi_handlers[i]);
+ }
}
} else {
- smp_ipi_init_one(call_virq, "IPI call", ipi_call_interrupt);
- smp_ipi_init_one(sched_virq, "IPI resched",
- ipi_resched_interrupt);
+ for (i = 0; i < IPI_MAX; i++) {
+ smp_ipi_init_one(ipi_virqs[i], ipi_names[i],
+ ipi_handlers[i]);
+ }
}
return 0;
@@ -301,6 +295,7 @@ int mips_smp_ipi_allocate(const struct cpumask *mask)
int mips_smp_ipi_free(const struct cpumask *mask)
{
+ int i;
struct irq_domain *ipidomain;
struct device_node *node;
@@ -321,25 +316,32 @@ int mips_smp_ipi_free(const struct cpumask *mask)
int cpu;
for_each_cpu(cpu, mask) {
- free_irq(call_virq + cpu, NULL);
- free_irq(sched_virq + cpu, NULL);
+ for (i = 0; i < IPI_MAX; i++)
+ free_irq(ipi_virqs[i] + cpu, NULL);
}
}
- irq_destroy_ipi(call_virq, mask);
- irq_destroy_ipi(sched_virq, mask);
+
+ for (i = 0; i < IPI_MAX; i++)
+ irq_destroy_ipi(ipi_virqs[i], mask);
+
return 0;
}
static int __init mips_smp_ipi_init(void)
{
+ int i;
+
if (num_possible_cpus() == 1)
return 0;
mips_smp_ipi_allocate(cpu_possible_mask);
- call_desc = irq_to_desc(call_virq);
- sched_desc = irq_to_desc(sched_virq);
+ for (i = 0; i < IPI_MAX; i++) {
+ ipi_desc[i] = irq_to_desc(ipi_virqs[i]);
+ if (!ipi_desc[i])
+ return -ENODEV;
+ }
return 0;
}
diff --git a/arch/mips/loongson64/smp.c b/arch/mips/loongson64/smp.c
index 147acd972a07..68db4d8625bf 100644
--- a/arch/mips/loongson64/smp.c
+++ b/arch/mips/loongson64/smp.c
@@ -13,8 +13,8 @@
#include <linux/smp.h>
#include <linux/cpufreq.h>
#include <linux/kexec.h>
+#include <asm/ipi.h>
#include <asm/processor.h>
-#include <asm/smp.h>
#include <asm/time.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
@@ -367,35 +367,29 @@ static void ipi_mailbox_buf_init(void)
/*
* Simple enough, just poke the appropriate ipi register
*/
-static void loongson3_send_ipi_single(int cpu, unsigned int action)
+static void loongson3_send_ipi_single(int cpu, enum ipi_message_type op)
{
- ipi_write_action(cpu_logical_map(cpu), (u32)action);
+ ipi_write_action(cpu_logical_map(cpu), BIT(op));
}
static void
-loongson3_send_ipi_mask(const struct cpumask *mask, unsigned int action)
+loongson3_send_ipi_mask(const struct cpumask *mask, enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- ipi_write_action(cpu_logical_map(i), (u32)action);
+ ipi_write_action(cpu_logical_map(i), BIT(op));
}
static irqreturn_t loongson3_ipi_interrupt(int irq, void *dev_id)
{
- int cpu = smp_processor_id();
- unsigned int action;
+ int op, cpu = smp_processor_id();
+ unsigned long action;
action = ipi_read_clear(cpu);
- if (action & SMP_RESCHEDULE_YOURSELF)
- scheduler_ipi();
-
- if (action & SMP_CALL_FUNCTION) {
- irq_enter();
- generic_smp_call_function_interrupt();
- irq_exit();
- }
+ for_each_set_bit(op, &action, IPI_MAX)
+ ipi_handlers[op](0, NULL);
return IRQ_HANDLED;
}
diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
index b7393b61cfa7..5eef34720b87 100644
--- a/arch/mips/mm/c-octeon.c
+++ b/arch/mips/mm/c-octeon.c
@@ -62,7 +62,6 @@ static void local_octeon_flush_icache_range(unsigned long start,
*/
static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
{
- extern void octeon_send_ipi_single(int cpu, unsigned int action);
#ifdef CONFIG_SMP
int cpu;
cpumask_t mask;
@@ -85,7 +84,7 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
cpumask_clear_cpu(cpu, &mask);
#ifdef CONFIG_CAVIUM_OCTEON_SOC
for_each_cpu(cpu, &mask)
- octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
+ octeon_send_ipi_single(cpu, IPI_ICACHE_FLUSH);
#else
smp_call_function_many(&mask, (smp_call_func_t)octeon_local_flush_icache,
NULL, 1);
diff --git a/arch/mips/sgi-ip27/ip27-smp.c b/arch/mips/sgi-ip27/ip27-smp.c
index 62733e049570..0e01484535e0 100644
--- a/arch/mips/sgi-ip27/ip27-smp.c
+++ b/arch/mips/sgi-ip27/ip27-smp.c
@@ -96,15 +96,17 @@ static __init void intr_clear_all(nasid_t nasid)
REMOTE_HUB_CLR_INTR(nasid, i);
}
-static void ip27_send_ipi_single(int destid, unsigned int action)
+static void ip27_send_ipi_single(int destid, enum ipi_message_type op)
{
int irq;
- switch (action) {
- case SMP_RESCHEDULE_YOURSELF:
+ BUILD_BUG_ON(IPI_MAX > 2);
+
+ switch (op) {
+ case IPI_RESCHEDULE:
irq = CPU_RESCHED_A_IRQ;
break;
- case SMP_CALL_FUNCTION:
+ case IPI_CALL_FUNC:
irq = CPU_CALL_A_IRQ;
break;
default:
@@ -120,12 +122,13 @@ static void ip27_send_ipi_single(int destid, unsigned int action)
REMOTE_HUB_SEND_INTR(cpu_to_node(destid), irq);
}
-static void ip27_send_ipi_mask(const struct cpumask *mask, unsigned int action)
+static void ip27_send_ipi_mask(const struct cpumask *mask,
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- ip27_send_ipi_single(i, action);
+ ip27_send_ipi_single(i, op);
}
static void ip27_init_cpu(void)
diff --git a/arch/mips/sgi-ip30/ip30-smp.c b/arch/mips/sgi-ip30/ip30-smp.c
index 4bfe654602b1..1c674c0cf419 100644
--- a/arch/mips/sgi-ip30/ip30-smp.c
+++ b/arch/mips/sgi-ip30/ip30-smp.c
@@ -43,15 +43,17 @@ struct mpconf {
u32 idleflag;
};
-static void ip30_smp_send_ipi_single(int cpu, u32 action)
+static void ip30_smp_send_ipi_single(int cpu, enum ipi_message_type op)
{
int irq;
- switch (action) {
- case SMP_RESCHEDULE_YOURSELF:
+ BUILD_BUG_ON(IPI_MAX > 2);
+
+ switch (op) {
+ case IPI_RESCHEDULE:
irq = HEART_L2_INT_RESCHED_CPU_0;
break;
- case SMP_CALL_FUNCTION:
+ case IPI_CALL_FUNC:
irq = HEART_L2_INT_CALL_CPU_0;
break;
default:
@@ -64,12 +66,13 @@ static void ip30_smp_send_ipi_single(int cpu, u32 action)
heart_write(BIT_ULL(irq), &heart_regs->set_isr);
}
-static void ip30_smp_send_ipi_mask(const struct cpumask *mask, u32 action)
+static void ip30_smp_send_ipi_mask(const struct cpumask *mask,
+ enum ipi_message_type op)
{
u32 i;
for_each_cpu(i, mask)
- ip30_smp_send_ipi_single(i, action);
+ ip30_smp_send_ipi_single(i, op);
}
static void __init ip30_smp_setup(void)
diff --git a/arch/mips/sibyte/bcm1480/smp.c b/arch/mips/sibyte/bcm1480/smp.c
index 5861e50255bf..040230e3f4a0 100644
--- a/arch/mips/sibyte/bcm1480/smp.c
+++ b/arch/mips/sibyte/bcm1480/smp.c
@@ -12,6 +12,7 @@
#include <asm/mmu_context.h>
#include <asm/io.h>
+#include <asm/ipi.h>
#include <asm/fw/cfe/cfe_api.h>
#include <asm/sibyte/sb1250.h>
#include <asm/sibyte/bcm1480_regs.h>
@@ -64,18 +65,18 @@ void bcm1480_smp_init(void)
* Simple enough; everything is set up, so just poke the appropriate mailbox
* register, and we should be set
*/
-static void bcm1480_send_ipi_single(int cpu, unsigned int action)
+static void bcm1480_send_ipi_single(int cpu, enum ipi_message_type op)
{
- __raw_writeq((((u64)action)<< 48), mailbox_0_set_regs[cpu]);
+ __raw_writeq((((u64)BIT_ULL(op)) << 48), mailbox_0_set_regs[cpu]);
}
static void bcm1480_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- bcm1480_send_ipi_single(i, action);
+ bcm1480_send_ipi_single(i, op);
}
/*
@@ -159,19 +160,21 @@ void bcm1480_mailbox_interrupt(void)
{
int cpu = smp_processor_id();
int irq = K_BCM1480_INT_MBOX_0_0;
- unsigned int action;
+ u64 action;
+
+ BUILD_BUG_ON(IPI_MAX > 2);
kstat_incr_irq_this_cpu(irq);
/* Load the mailbox register to figure out what we're supposed to do */
action = (__raw_readq(mailbox_0_regs[cpu]) >> 48) & 0xffff;
/* Clear the mailbox to clear the interrupt */
- __raw_writeq(((u64)action)<<48, mailbox_0_clear_regs[cpu]);
+ __raw_writeq(((u64)action) << 48, mailbox_0_clear_regs[cpu]);
- if (action & SMP_RESCHEDULE_YOURSELF)
+ if (action & BIT_ULL(IPI_RESCHEDULE))
scheduler_ipi();
- if (action & SMP_CALL_FUNCTION) {
+ if (action & BIT_ULL(IPI_CALL_FUNC)) {
irq_enter();
generic_smp_call_function_interrupt();
irq_exit();
diff --git a/arch/mips/sibyte/sb1250/smp.c b/arch/mips/sibyte/sb1250/smp.c
index 7a794234e3d7..dc2c889fa0b6 100644
--- a/arch/mips/sibyte/sb1250/smp.c
+++ b/arch/mips/sibyte/sb1250/smp.c
@@ -12,6 +12,7 @@
#include <asm/mmu_context.h>
#include <asm/io.h>
+#include <asm/ipi.h>
#include <asm/fw/cfe/cfe_api.h>
#include <asm/sibyte/sb1250.h>
#include <asm/sibyte/sb1250_regs.h>
@@ -53,18 +54,18 @@ void sb1250_smp_init(void)
* Simple enough; everything is set up, so just poke the appropriate mailbox
* register, and we should be set
*/
-static void sb1250_send_ipi_single(int cpu, unsigned int action)
+static void sb1250_send_ipi_single(int cpu, enum ipi_message_type op)
{
- __raw_writeq((((u64)action) << 48), mailbox_set_regs[cpu]);
+ __raw_writeq((((u64)BIT_ULL(op)) << 48), mailbox_set_regs[cpu]);
}
static inline void sb1250_send_ipi_mask(const struct cpumask *mask,
- unsigned int action)
+ enum ipi_message_type op)
{
unsigned int i;
for_each_cpu(i, mask)
- sb1250_send_ipi_single(i, action);
+ sb1250_send_ipi_single(i, op);
}
/*
@@ -157,10 +158,10 @@ void sb1250_mailbox_interrupt(void)
/* Clear the mailbox to clear the interrupt */
____raw_writeq(((u64)action) << 48, mailbox_clear_regs[cpu]);
- if (action & SMP_RESCHEDULE_YOURSELF)
+ if (action & BIT(IPI_RESCHEDULE))
scheduler_ipi();
- if (action & SMP_CALL_FUNCTION) {
+ if (action & BIT(IPI_CALL_FUNC)) {
irq_enter();
generic_smp_call_function_interrupt();
irq_exit();
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 02/10] MIPS: smp: Manage IPI interrupts as percpu_devid interrupts
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 03/10] MIPS: smp: Provide platform IPI virq & domain hooks Jiaxun Yang
` (7 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
IPI interrupts need to be enabled when a new CPU comes up.
Manage them as percpu_devid interrupts and invoke the enable/disable
functions at the appropriate points during CPU bring-up and tear-down,
similar to what RISC-V and Arm do.
This is required by generic IPI-Mux and some IPI drivers.
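As wired up in this patch, the calls land in the CPU online/offline
paths, roughly (excerpted sketch of the changes below):

  /* Secondary CPU bring-up, arch/mips/kernel/smp.c */
  mips_smp_ipi_enable();          /* enable_percpu_irq() on each IPI virq */
  set_cpu_online(cpu, true);

  /* CPU offline, arch/mips/kernel/smp-cps.c */
  mips_smp_ipi_disable();         /* disable_percpu_irq() on each IPI virq */
  irq_migrate_all_off_this_cpu();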
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/include/asm/ipi.h | 11 +++++++++++
arch/mips/kernel/smp-cps.c | 1 +
arch/mips/kernel/smp.c | 26 ++++++++++++++++++++++++--
3 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
index df7a0ac4227a..88b507339f51 100644
--- a/arch/mips/include/asm/ipi.h
+++ b/arch/mips/include/asm/ipi.h
@@ -29,6 +29,17 @@ int mips_smp_ipi_allocate(const struct cpumask *mask);
* Return 0 on success.
*/
int mips_smp_ipi_free(const struct cpumask *mask);
+
+void mips_smp_ipi_enable(void);
+void mips_smp_ipi_disable(void);
+#else
+static inline void mips_smp_ipi_enable(void)
+{
+}
+
+static inline void mips_smp_ipi_disable(void)
+{
+}
#endif /* CONFIG_GENERIC_IRQ_IPI */
#endif /* CONFIG_SMP */
#endif
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index b7bcbc4770f2..6845884086f4 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -555,6 +555,7 @@ static int cps_cpu_disable(void)
smp_mb__after_atomic();
set_cpu_online(cpu, false);
calculate_cpu_foreign_map();
+ mips_smp_ipi_disable();
irq_migrate_all_off_this_cpu();
return 0;
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 62be2ca9f990..9918bf341ffd 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -186,6 +186,7 @@ irq_handler_t ipi_handlers[IPI_MAX] __read_mostly = {
};
#ifdef CONFIG_GENERIC_IRQ_IPI
+static DEFINE_PER_CPU_READ_MOSTLY(int, ipi_dummy_dev);
static int ipi_virqs[IPI_MAX] __ro_after_init;
static struct irq_desc *ipi_desc[IPI_MAX] __read_mostly;
@@ -225,13 +226,29 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask,
local_irq_restore(flags);
}
+void mips_smp_ipi_enable(void)
+{
+ int i;
+
+ for (i = 0; i < IPI_MAX; i++)
+ enable_percpu_irq(ipi_virqs[i], IRQ_TYPE_NONE);
+}
+
+void mips_smp_ipi_disable(void)
+{
+ int i;
+
+ for (i = 0; i < IPI_MAX; i++)
+ disable_percpu_irq(ipi_virqs[i]);
+}
+
static void smp_ipi_init_one(unsigned int virq, const char *name,
irq_handler_t handler)
{
int ret;
- irq_set_handler(virq, handle_percpu_irq);
- ret = request_irq(virq, handler, IRQF_PERCPU, name, NULL);
+ irq_set_percpu_devid(virq);
+ ret = request_percpu_irq(virq, handler, "IPI", &ipi_dummy_dev);
BUG_ON(ret);
}
@@ -343,6 +360,9 @@ static int __init mips_smp_ipi_init(void)
return -ENODEV;
}
+ /* Enable IPI for Boot CPU */
+ mips_smp_ipi_enable();
+
return 0;
}
early_initcall(mips_smp_ipi_init);
@@ -383,6 +403,8 @@ asmlinkage void start_secondary(void)
synchronise_count_slave(cpu);
+ mips_smp_ipi_enable();
+
/* The CPU is running and counters synchronised, now mark it online */
set_cpu_online(cpu, true);
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 03/10] MIPS: smp: Provide platform IPI virq & domain hooks
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 02/10] MIPS: smp: Manage IPI interrupts as percpu_devid interrupts Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 04/10] MIPS: Move mips_smp_ipi_init call after prepare_cpus Jiaxun Yang
` (6 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Provide platform virq & domain hooks to allow platform
interrupt controllers or SMP code to override IPI interrupt
allocation.
This is required by ipi-mux; the API is aligned with RISC-V
and Arm.
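A driver that owns the IPI virqs (as the GIC driver does later in
this series) would use the hooks roughly as follows (sketch; how the
driver allocates ipi_virq and my_domain is driver-specific):

  /* In the irqchip driver's init path */
  mips_smp_ipi_set_virq_range(ipi_virq, IPI_MAX);
  mips_smp_ipi_set_irqdomain(my_domain);  /* record the owning domain */

  /* Generic code then skips its own reservation: */
  if (!mips_smp_ipi_have_virq_range())
          mips_smp_ipi_allocate(cpu_possible_mask);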
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/include/asm/ipi.h | 17 ++++++++++
arch/mips/kernel/smp.c | 78 +++++++++++++++++++++++++--------------------
2 files changed, 61 insertions(+), 34 deletions(-)
diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
index 88b507339f51..7cac0f4ccf37 100644
--- a/arch/mips/include/asm/ipi.h
+++ b/arch/mips/include/asm/ipi.h
@@ -2,6 +2,7 @@
#include <linux/cpumask.h>
#include <linux/interrupt.h>
+#include <linux/irqdomain.h>
#ifndef __ASM_IPI_H
#define __ASM_IPI_H
@@ -32,6 +33,9 @@ int mips_smp_ipi_free(const struct cpumask *mask);
void mips_smp_ipi_enable(void);
void mips_smp_ipi_disable(void);
+extern bool mips_smp_ipi_have_virq_range(void);
+void mips_smp_ipi_set_irqdomain(struct irq_domain *d);
+extern void mips_smp_ipi_set_virq_range(int virq, int nr);
#else
static inline void mips_smp_ipi_enable(void)
{
@@ -41,5 +45,18 @@ static inline void mips_smp_ipi_disable(void)
{
}
#endif /* CONFIG_GENERIC_IRQ_IPI */
+#else
+static inline void mips_smp_ipi_set_virq_range(int virq, int nr)
+{
+}
+
+static inline void mips_smp_ipi_set_irqdomain(struct irq_domain *d)
+{
+}
+
+static inline bool mips_smp_ipi_have_virq_range(void)
+{
+ return false;
+}
#endif /* CONFIG_SMP */
#endif
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 9918bf341ffd..d3c7486fee3d 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -189,6 +189,7 @@ irq_handler_t ipi_handlers[IPI_MAX] __read_mostly = {
static DEFINE_PER_CPU_READ_MOSTLY(int, ipi_dummy_dev);
static int ipi_virqs[IPI_MAX] __ro_after_init;
static struct irq_desc *ipi_desc[IPI_MAX] __read_mostly;
+static struct irq_domain *ipidomain;
void mips_smp_send_ipi_single(int cpu, enum ipi_message_type op)
{
@@ -255,11 +256,12 @@ static void smp_ipi_init_one(unsigned int virq, const char *name,
int mips_smp_ipi_allocate(const struct cpumask *mask)
{
int virq, i;
- struct irq_domain *ipidomain;
struct device_node *node;
- node = of_irq_find_parent(of_root);
- ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
+ if (!ipidomain) {
+ node = of_irq_find_parent(of_root);
+ ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
+ }
/*
* Some platforms have half DT setup. So if we found irq node but
@@ -291,43 +293,15 @@ int mips_smp_ipi_allocate(const struct cpumask *mask)
ipi_virqs[i] = virq;
}
- if (irq_domain_is_ipi_per_cpu(ipidomain)) {
- int cpu;
-
- for_each_cpu(cpu, mask) {
- for (i = 0; i < IPI_MAX; i++) {
- smp_ipi_init_one(ipi_virqs[i] + cpu, ipi_names[i],
- ipi_handlers[i]);
- }
- }
- } else {
- for (i = 0; i < IPI_MAX; i++) {
- smp_ipi_init_one(ipi_virqs[i], ipi_names[i],
- ipi_handlers[i]);
- }
- }
-
return 0;
}
int mips_smp_ipi_free(const struct cpumask *mask)
{
int i;
- struct irq_domain *ipidomain;
- struct device_node *node;
-
- node = of_irq_find_parent(of_root);
- ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
-
- /*
- * Some platforms have half DT setup. So if we found irq node but
- * didn't find an ipidomain, try to search for one that is not in the
- * DT.
- */
- if (node && !ipidomain)
- ipidomain = irq_find_matching_host(NULL, DOMAIN_BUS_IPI);
- BUG_ON(!ipidomain);
+ if (!ipidomain)
+ return -ENODEV;
if (irq_domain_is_ipi_per_cpu(ipidomain)) {
int cpu;
@@ -344,6 +318,25 @@ int mips_smp_ipi_free(const struct cpumask *mask)
return 0;
}
+void mips_smp_ipi_set_virq_range(int virq, int nr)
+{
+ int i;
+
+ WARN_ON(nr < IPI_MAX);
+
+ for (i = 0; i < IPI_MAX; i++)
+ ipi_virqs[i] = virq + i;
+}
+
+void mips_smp_ipi_set_irqdomain(struct irq_domain *d)
+{
+ ipidomain = d;
+}
+
+bool mips_smp_ipi_have_virq_range(void)
+{
+ return ipi_virqs[0];
+}
static int __init mips_smp_ipi_init(void)
{
@@ -352,7 +345,24 @@ static int __init mips_smp_ipi_init(void)
if (num_possible_cpus() == 1)
return 0;
- mips_smp_ipi_allocate(cpu_possible_mask);
+ if (!mips_smp_ipi_have_virq_range())
+ mips_smp_ipi_allocate(cpu_possible_mask);
+
+ if (ipidomain && irq_domain_is_ipi_per_cpu(ipidomain)) {
+ int cpu;
+
+ for_each_possible_cpu(cpu) {
+ for (i = 0; i < IPI_MAX; i++) {
+ smp_ipi_init_one(ipi_virqs[i] + cpu, ipi_names[i],
+ ipi_handlers[i]);
+ }
+ }
+ } else {
+ for (i = 0; i < IPI_MAX; i++) {
+ smp_ipi_init_one(ipi_virqs[i], ipi_names[i],
+ ipi_handlers[i]);
+ }
+ }
for (i = 0; i < IPI_MAX; i++) {
ipi_desc[i] = irq_to_desc(ipi_virqs[i]);
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 04/10] MIPS: Move mips_smp_ipi_init call after prepare_cpus
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (2 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 03/10] MIPS: smp: Provide platform IPI virq & domain hooks Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 05/10] MIPS: smp: Implement IPI stats Jiaxun Yang
` (5 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
This gives platform code a genuine chance to set up the IPI IRQs
in prepare_cpus.
On most systems the IPIs are registered by irqchip drivers fairly
early, but when the IPI IRQ is tightly coupled with the platform's
SMP implementation it makes sense to do it here.
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/kernel/smp.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index d3c7486fee3d..81ae65f21f73 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -375,7 +375,6 @@ static int __init mips_smp_ipi_init(void)
return 0;
}
-early_initcall(mips_smp_ipi_init);
#endif
/*
@@ -466,6 +465,13 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
set_cpu_sibling_map(0);
set_cpu_core_map(0);
calculate_cpu_foreign_map();
+#ifdef CONFIG_GENERIC_IRQ_IPI
+ if (mips_smp_ipi_init()) {
+ pr_err("Failed to initialize IPI - disabling SMP");
+ init_cpu_present(cpumask_of(0));
+ return;
+ }
+#endif
#ifndef CONFIG_HOTPLUG_CPU
init_cpu_present(cpu_possible_mask);
#endif
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 05/10] MIPS: smp: Implement IPI stats
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (3 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 04/10] MIPS: Move mips_smp_ipi_init call after prepare_cpus Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 06/10] irqchip/irq-mips-gic: Switch to ipi_mux Jiaxun Yang
` (4 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Show IPI statistics in arch_show_interrupts to help users
analyze IPI performance.
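With this in place, /proc/interrupts grows one row per IPI; with the
format string used below the output looks roughly like this
(illustrative counts):

  IPI0:       1234       5678    Rescheduling interrupts
  IPI1:        321        654    Function call interrupts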
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/include/asm/ipi.h | 9 +++++++++
arch/mips/kernel/irq.c | 4 ++++
arch/mips/kernel/smp.c | 13 +++++++++++++
3 files changed, 26 insertions(+)
diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
index 7cac0f4ccf37..7d310012962f 100644
--- a/arch/mips/include/asm/ipi.h
+++ b/arch/mips/include/asm/ipi.h
@@ -36,6 +36,7 @@ void mips_smp_ipi_disable(void);
extern bool mips_smp_ipi_have_virq_range(void);
void mips_smp_ipi_set_irqdomain(struct irq_domain *d);
extern void mips_smp_ipi_set_virq_range(int virq, int nr);
+extern void mips_smp_show_ipi_stats(struct seq_file *p, int prec);
#else
static inline void mips_smp_ipi_enable(void)
{
@@ -44,6 +45,10 @@ static inline void mips_smp_ipi_enable(void)
static inline void mips_smp_ipi_disable(void)
{
}
+
+static inline void mips_smp_show_ipi_stats(struct seq_file *p, int prec)
+{
+}
#endif /* CONFIG_GENERIC_IRQ_IPI */
#else
static inline void mips_smp_ipi_set_virq_range(int virq, int nr)
@@ -58,5 +63,9 @@ static inline bool mips_smp_ipi_have_virq_range(void)
{
return false;
}
+
+static inline void mips_smp_show_ipi_stats(struct seq_file *p, int prec)
+{
+}
#endif /* CONFIG_SMP */
#endif
diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
index 5e11582fe308..c3ea8d80e0cb 100644
--- a/arch/mips/kernel/irq.c
+++ b/arch/mips/kernel/irq.c
@@ -26,6 +26,8 @@
#include <linux/atomic.h>
#include <linux/uaccess.h>
+#include <asm/ipi.h>
+
void *irq_stack[NR_CPUS];
/*
@@ -42,6 +44,8 @@ atomic_t irq_err_count;
int arch_show_interrupts(struct seq_file *p, int prec)
{
seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
+ mips_smp_show_ipi_stats(p, prec);
+
return 0;
}
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 81ae65f21f73..aa02ca2e0fcf 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -243,6 +243,19 @@ void mips_smp_ipi_disable(void)
disable_percpu_irq(ipi_virqs[i]);
}
+void mips_smp_show_ipi_stats(struct seq_file *p, int prec)
+{
+ unsigned int cpu, i;
+
+ for (i = 0; i < IPI_MAX; i++) {
+ seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
+ prec >= 4 ? " " : "");
+ for_each_online_cpu(cpu)
+ seq_printf(p, "%10u ", irq_desc_kstat_cpu(ipi_desc[i], cpu));
+ seq_printf(p, " %s\n", ipi_names[i]);
+ }
+}
+
static void smp_ipi_init_one(unsigned int virq, const char *name,
irq_handler_t handler)
{
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 06/10] irqchip/irq-mips-gic: Switch to ipi_mux
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (4 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 05/10] MIPS: smp: Implement IPI stats Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 07/10] MIPS: Implement get_mips_sw_int hook Jiaxun Yang
` (3 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Use ipi_mux to implement IPI interrupts instead of allocating
a separate vector for each individual IPI message.
This reduces the number of reserved GIC shared vectors
from 3 per core to 1 per core, which relieves the scarcity
of GIC shared vectors on MSI-enabled systems.
It also allows the number of IPIs to be expanded easily.
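The two ends of the mux are small (as implemented below): the send
hook pokes the target CPU's reserved shared vector via GIC_WEDGE, and
the per-CPU vector is a chained handler that lets the generic mux
demultiplex the pending messages:

  static void gic_ipi_send(unsigned int cpu)
  {
          write_gic_wedge(GIC_WEDGE_RW | cpu_ipi_intr[cpu]);
  }

  static void gic_handle_ipi_irq(struct irq_desc *desc)
  {
          struct irq_chip *chip = irq_desc_get_chip(desc);

          chained_irq_enter(chip, desc);
          ipi_mux_process();
          chained_irq_exit(chip, desc);
  }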
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
v4:
- Improve commit message
- Wrap up gic_is_reserved
- Style fixes
---
drivers/irqchip/Kconfig | 1 +
drivers/irqchip/irq-mips-gic.c | 202 ++++++++++++++---------------------------
2 files changed, 69 insertions(+), 134 deletions(-)
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index d078bdc48c38..763070be0088 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -346,6 +346,7 @@ config KEYSTONE_IRQ
config MIPS_GIC
bool
select GENERIC_IRQ_IPI if SMP
+ select GENERIC_IRQ_IPI_MUX if SMP
select IRQ_DOMAIN_HIERARCHY
select MIPS_CM
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 76253e864f23..4c36e10ee2d3 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -17,12 +17,14 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/smp.h>
+#include <asm/ipi.h>
#include <asm/mips-cps.h>
#include <asm/setup.h>
#include <asm/traps.h>
@@ -58,7 +60,7 @@ static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller;
#ifdef CONFIG_GENERIC_IRQ_IPI
static DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS);
-static DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS);
+static int cpu_ipi_intr[NR_CPUS] __read_mostly;
#endif /* CONFIG_GENERIC_IRQ_IPI */
static struct gic_all_vpes_chip_data {
@@ -66,6 +68,15 @@ static struct gic_all_vpes_chip_data {
bool mask;
} gic_all_vpes_chip_data[GIC_NUM_LOCAL_INTRS];
+#ifdef CONFIG_GENERIC_IRQ_IPI
+static inline bool gic_is_reserved(unsigned int intr)
+{
+ return test_bit(intr, ipi_resrv);
+}
+#else
+static inline bool gic_is_reserved(unsigned int intr) { return false; }
+#endif
+
static void gic_clear_pcpu_masks(unsigned int intr)
{
unsigned int i;
@@ -108,13 +119,6 @@ static void gic_bind_eic_interrupt(int irq, int set)
write_gic_vl_eic_shadow_set(irq, set);
}
-static void gic_send_ipi(struct irq_data *d, unsigned int cpu)
-{
- irq_hw_number_t hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(d));
-
- write_gic_wedge(GIC_WEDGE_RW | hwirq);
-}
-
int gic_get_c0_compare_int(void)
{
if (!gic_local_irq_is_routable(GIC_LOCAL_INT_TIMER))
@@ -181,6 +185,10 @@ static void gic_mask_irq(struct irq_data *d)
unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
write_gic_rmask(intr);
+
+ if (gic_is_reserved(intr))
+ return;
+
gic_clear_pcpu_masks(intr);
}
@@ -191,6 +199,9 @@ static void gic_unmask_irq(struct irq_data *d)
write_gic_smask(intr);
+ if (gic_is_reserved(intr))
+ return;
+
gic_clear_pcpu_masks(intr);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
set_bit(intr, per_cpu_ptr(pcpu_masks, cpu));
@@ -263,6 +274,9 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
unsigned long flags;
unsigned int cpu;
+ if (gic_is_reserved(irq))
+ return -EINVAL;
+
cpu = cpumask_first_and(cpumask, cpu_online_mask);
if (cpu >= NR_CPUS)
return -EINVAL;
@@ -304,7 +318,6 @@ static struct irq_chip gic_edge_irq_controller = {
#ifdef CONFIG_SMP
.irq_set_affinity = gic_set_affinity,
#endif
- .ipi_send_single = gic_send_ipi,
};
static void gic_handle_local_int(bool chained)
@@ -475,12 +488,6 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
u32 map;
if (hwirq >= GIC_SHARED_HWIRQ_BASE) {
-#ifdef CONFIG_GENERIC_IRQ_IPI
- /* verify that shared irqs don't conflict with an IPI irq */
- if (test_bit(GIC_HWIRQ_TO_SHARED(hwirq), ipi_resrv))
- return -EBUSY;
-#endif /* CONFIG_GENERIC_IRQ_IPI */
-
err = irq_domain_set_hwirq_and_chip(d, virq, hwirq,
&gic_level_irq_controller,
NULL);
@@ -570,146 +577,73 @@ static const struct irq_domain_ops gic_irq_domain_ops = {
};
#ifdef CONFIG_GENERIC_IRQ_IPI
-
-static int gic_ipi_domain_xlate(struct irq_domain *d, struct device_node *ctrlr,
- const u32 *intspec, unsigned int intsize,
- irq_hw_number_t *out_hwirq,
- unsigned int *out_type)
+static void gic_handle_ipi_irq(struct irq_desc *desc)
{
- /*
- * There's nothing to translate here. hwirq is dynamically allocated and
- * the irq type is always edge triggered.
- * */
- *out_hwirq = 0;
- *out_type = IRQ_TYPE_EDGE_RISING;
+ struct irq_chip *chip = irq_desc_get_chip(desc);
- return 0;
+ chained_irq_enter(chip, desc);
+ ipi_mux_process();
+ chained_irq_exit(chip, desc);
}
-static int gic_ipi_domain_alloc(struct irq_domain *d, unsigned int virq,
- unsigned int nr_irqs, void *arg)
+static void gic_ipi_send(unsigned int cpu)
{
- struct cpumask *ipimask = arg;
- irq_hw_number_t hwirq, base_hwirq;
- int cpu, ret, i;
-
- base_hwirq = find_first_bit(ipi_available, gic_shared_intrs);
- if (base_hwirq == gic_shared_intrs)
- return -ENOMEM;
-
- /* check that we have enough space */
- for (i = base_hwirq; i < nr_irqs; i++) {
- if (!test_bit(i, ipi_available))
- return -EBUSY;
- }
- bitmap_clear(ipi_available, base_hwirq, nr_irqs);
-
- /* map the hwirq for each cpu consecutively */
- i = 0;
- for_each_cpu(cpu, ipimask) {
- hwirq = GIC_SHARED_TO_HWIRQ(base_hwirq + i);
-
- ret = irq_domain_set_hwirq_and_chip(d, virq + i, hwirq,
- &gic_edge_irq_controller,
- NULL);
- if (ret)
- goto error;
-
- ret = irq_domain_set_hwirq_and_chip(d->parent, virq + i, hwirq,
- &gic_edge_irq_controller,
- NULL);
- if (ret)
- goto error;
-
- ret = irq_set_irq_type(virq + i, IRQ_TYPE_EDGE_RISING);
- if (ret)
- goto error;
-
- ret = gic_shared_irq_domain_map(d, virq + i, hwirq, cpu);
- if (ret)
- goto error;
-
- i++;
- }
-
- return 0;
-error:
- bitmap_set(ipi_available, base_hwirq, nr_irqs);
- return ret;
+ write_gic_wedge(GIC_WEDGE_RW | cpu_ipi_intr[cpu]);
}
-static void gic_ipi_domain_free(struct irq_domain *d, unsigned int virq,
- unsigned int nr_irqs)
+static int gic_ipi_mux_init(struct device_node *node, struct irq_domain *d)
{
- irq_hw_number_t base_hwirq;
- struct irq_data *data;
+ unsigned int i, v[2], num_ipis;
+ int ipi_virq, cpu = 0;
- data = irq_get_irq_data(virq);
- if (!data)
- return;
+ if (node && !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
+ bitmap_set(ipi_resrv, v[0], v[1]);
+ } else {
+ /*
+ * Reserve 1 interrupt per possible CPU/VP for use as IPIs
+ */
+ num_ipis = num_possible_cpus();
+ bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
+ }
- base_hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(data));
- bitmap_set(ipi_available, base_hwirq, nr_irqs);
-}
+ ipi_virq = ipi_mux_create(IPI_MAX, gic_ipi_send);
-static int gic_ipi_domain_match(struct irq_domain *d, struct device_node *node,
- enum irq_domain_bus_token bus_token)
-{
- bool is_ipi;
+ WARN_ON(bitmap_weight(ipi_resrv, GIC_MAX_INTRS) < num_possible_cpus());
- switch (bus_token) {
- case DOMAIN_BUS_IPI:
- is_ipi = d->bus_token == bus_token;
- return (!node || to_of_node(d->fwnode) == node) && is_ipi;
- break;
- default:
- return 0;
- }
-}
+ for_each_set_bit(i, ipi_resrv, GIC_MAX_INTRS) {
+ struct irq_fwspec fwspec;
+ int virq;
-static const struct irq_domain_ops gic_ipi_domain_ops = {
- .xlate = gic_ipi_domain_xlate,
- .alloc = gic_ipi_domain_alloc,
- .free = gic_ipi_domain_free,
- .match = gic_ipi_domain_match,
-};
+ fwspec.fwnode = of_node_to_fwnode(node);
+ fwspec.param_count = 3;
+ fwspec.param[0] = GIC_SHARED;
+ fwspec.param[1] = i;
+ fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
-static int gic_register_ipi_domain(struct device_node *node)
-{
- struct irq_domain *gic_ipi_domain;
- unsigned int v[2], num_ipis;
+ virq = irq_create_fwspec_mapping(&fwspec);
+ if (!virq)
+ return -EINVAL;
- gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain,
- IRQ_DOMAIN_FLAG_IPI_PER_CPU,
- GIC_NUM_LOCAL_INTRS + gic_shared_intrs,
- node, &gic_ipi_domain_ops, NULL);
- if (!gic_ipi_domain) {
- pr_err("Failed to add IPI domain");
- return -ENXIO;
- }
+ gic_shared_irq_domain_map(d, virq, GIC_SHARED_TO_HWIRQ(i), cpu);
+ irq_set_chained_handler(virq, gic_handle_ipi_irq);
+ gic_clear_pcpu_masks(i);
+ set_bit(i, per_cpu_ptr(pcpu_masks, cpu));
- irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI);
+ cpu_ipi_intr[cpu] = i;
- if (node &&
- !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) {
- bitmap_set(ipi_resrv, v[0], v[1]);
- } else {
- /*
- * Reserve 2 interrupts per possible CPU/VP for use as IPIs,
- * meeting the requirements of arch/mips SMP.
- */
- num_ipis = 2 * num_possible_cpus();
- bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
+ cpu++;
+ if (cpu >= num_possible_cpus())
+ break;
}
- bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS);
+ mips_smp_ipi_set_virq_range(ipi_virq, IPI_MAX);
return 0;
}
#else /* !CONFIG_GENERIC_IRQ_IPI */
-static inline int gic_register_ipi_domain(struct device_node *node)
+static inline int gic_ipi_mux_init(struct device_node *node, struct irq_domain *d)
{
return 0;
}
@@ -809,10 +743,6 @@ static int __init gic_of_init(struct device_node *node,
return -ENXIO;
}
- ret = gic_register_ipi_domain(node);
- if (ret)
- return ret;
-
board_bind_eic_interrupt = &gic_bind_eic_interrupt;
/* Setup defaults */
@@ -822,6 +752,10 @@ static int __init gic_of_init(struct device_node *node,
write_gic_rmask(i);
}
+ ret = gic_ipi_mux_init(node, gic_irq_domain);
+ if (ret)
+ return ret;
+
return cpuhp_setup_state(CPUHP_AP_IRQ_MIPS_GIC_STARTING,
"irqchip/mips/gic:starting",
gic_cpu_startup, NULL);
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 07/10] MIPS: Implement get_mips_sw_int hook
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (5 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 06/10] irqchip/irq-mips-gic: Switch to ipi_mux Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 08/10] MIPS: GIC: Implement get_sw_int hook Jiaxun Yang
` (2 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
For MIPS CPUs with VEIC, the SW0 and SW1 interrupts are also routed
through external interrupt sources.
We need such a hook to allow architecture code to obtain the interrupt
source from platform EIC controllers.
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/include/asm/irq.h | 1 +
arch/mips/include/asm/irq_cpu.h | 3 +++
arch/mips/kernel/irq.c | 17 +++++++++++++++++
drivers/irqchip/irq-mips-cpu.c | 11 +++++++++++
4 files changed, 32 insertions(+)
diff --git a/arch/mips/include/asm/irq.h b/arch/mips/include/asm/irq.h
index 3a848e7e69f7..6edad40ef663 100644
--- a/arch/mips/include/asm/irq.h
+++ b/arch/mips/include/asm/irq.h
@@ -51,6 +51,7 @@ static inline int irq_canonicalize(int irq)
#else
#define irq_canonicalize(irq) (irq) /* Sane hardware, sane code ... */
#endif
+int get_mips_sw_int(int hwint);
asmlinkage void plat_irq_dispatch(void);
diff --git a/arch/mips/include/asm/irq_cpu.h b/arch/mips/include/asm/irq_cpu.h
index 83d7331ab215..50a99ba2d503 100644
--- a/arch/mips/include/asm/irq_cpu.h
+++ b/arch/mips/include/asm/irq_cpu.h
@@ -9,7 +9,10 @@
#ifndef _ASM_IRQ_CPU_H
#define _ASM_IRQ_CPU_H
+#include <linux/irqdomain.h>
+
extern void mips_cpu_irq_init(void);
+extern int mips_cpu_get_sw_int(int hwint);
#ifdef CONFIG_IRQ_DOMAIN
struct device_node;
diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
index c3ea8d80e0cb..c79504b12134 100644
--- a/arch/mips/kernel/irq.c
+++ b/arch/mips/kernel/irq.c
@@ -26,10 +26,27 @@
#include <linux/atomic.h>
#include <linux/uaccess.h>
+#include <asm/irq_cpu.h>
#include <asm/ipi.h>
void *irq_stack[NR_CPUS];
+int __weak get_mips_sw_int(int hwint)
+{
+ /* Only SW0 and SW1 */
+ WARN_ON(hwint > 1);
+
+ /* SW int is routed to external source */
+ if (cpu_has_veic)
+ return 0;
+
+#ifdef CONFIG_IRQ_MIPS_CPU
+ return mips_cpu_get_sw_int(hwint);
+#endif
+
+ return MIPS_CPU_IRQ_BASE + hwint;
+}
+
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
* each architecture has to answer this themselves.
diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c
index 0c7ae71a0af0..7b3501485d95 100644
--- a/drivers/irqchip/irq-mips-cpu.c
+++ b/drivers/irqchip/irq-mips-cpu.c
@@ -254,6 +254,17 @@ static inline void mips_cpu_register_ipi_domain(struct device_node *of_node) {}
#endif /* !CONFIG_GENERIC_IRQ_IPI */
+int mips_cpu_get_sw_int(int hwint)
+{
+ /* Only 0 and 1 for SW INT */
+ WARN_ON(hwint > 1);
+
+ if (!irq_domain)
+ return 0;
+
+ return irq_create_mapping(irq_domain, hwint);
+}
+
static void __init __mips_cpu_irq_init(struct device_node *of_node)
{
/* Mask interrupts. */
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
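A usage sketch for the new hook follows; the demo_* names are hypothetical and not part of this series. A consumer resolves the SW0 virq through get_mips_sw_int() and requests it like any other interrupt instead of hard-coding MIPS_CPU_IRQ_BASE. Patch 10 of this series performs the same lookup but installs a chained handler, since SW0 is demultiplexed into several logical IPIs there.

#include <linux/interrupt.h>
#include <asm/irq.h>

static irqreturn_t demo_sw0_handler(int irq, void *dev_id)
{
	/* handle the SW0 software interrupt */
	return IRQ_HANDLED;
}

static int __init demo_sw0_setup(void)
{
	/* resolve the SW0 virq via the hook added by this patch */
	int virq = get_mips_sw_int(0);

	if (virq <= 0)
		return -ENODEV;

	return request_irq(virq, demo_sw0_handler, 0, "demo-sw0", NULL);
}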
* [PATCH v5 08/10] MIPS: GIC: Implement get_sw_int hook
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (6 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 07/10] MIPS: Implement get_mips_sw_int hook Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 09/10] irqchip/irq-mips-cpu: Rework software IRQ handling flow Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 10/10] MIPS: smp-mt: Rework IPI functions Jiaxun Yang
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
SW0 and SW1 interrupts are routed through the GIC in EIC mode.
Implement the get_sw_int hook for the GIC and the generic platform to
create IRQ mappings for SW0 and SW1 in that mode.
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/generic/irq.c | 15 +++++++++++++++
arch/mips/include/asm/mips-gic.h | 10 ++++++++++
drivers/irqchip/irq-mips-gic.c | 15 +++++++++++++++
3 files changed, 40 insertions(+)
diff --git a/arch/mips/generic/irq.c b/arch/mips/generic/irq.c
index 933119262943..bc3599a76014 100644
--- a/arch/mips/generic/irq.c
+++ b/arch/mips/generic/irq.c
@@ -11,6 +11,7 @@
#include <linux/types.h>
#include <asm/irq.h>
+#include <asm/irq_cpu.h>
#include <asm/mips-cps.h>
#include <asm/time.h>
@@ -59,3 +60,17 @@ unsigned int get_c0_compare_int(void)
return mips_cpu_timer_irq;
}
+
+int get_mips_sw_int(int hwint)
+{
+ int mips_sw_int_irq;
+
+ if (mips_gic_present())
+ mips_sw_int_irq = gic_get_sw_int(hwint);
+ else if (cpu_has_veic)
+ panic("Unimplemented!");
+ else
+ mips_sw_int_irq = mips_cpu_get_sw_int(hwint);
+
+ return mips_sw_int_irq;
+}
diff --git a/arch/mips/include/asm/mips-gic.h b/arch/mips/include/asm/mips-gic.h
index fd9da5e3beaa..3e9d1b252500 100644
--- a/arch/mips/include/asm/mips-gic.h
+++ b/arch/mips/include/asm/mips-gic.h
@@ -388,4 +388,14 @@ extern int gic_get_c0_perfcount_int(void);
*/
extern int gic_get_c0_fdc_int(void);
+/**
+ * gic_get_sw_int() - Return software interrupt virq
+ *
+ * Determine the virq number to use for SWINT0 or SWINT1 interrupts,
+ * which may be routed via the GIC.
+ *
+ * Returns the virq number or a negative error number.
+ */
+extern int gic_get_sw_int(int hwirq);
+
#endif /* __MIPS_ASM_MIPS_CPS_H__ */
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 4c36e10ee2d3..7fa567677c00 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -152,6 +152,21 @@ int gic_get_c0_fdc_int(void)
GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_FDC));
}
+int gic_get_sw_int(int hwint)
+{
+ int local_irq;
+
+ WARN_ON(hwint > 1);
+
+ local_irq = GIC_LOCAL_INT_SWINT0 + hwint;
+
+ if (!gic_local_irq_is_routable(local_irq))
+ return MIPS_CPU_IRQ_BASE + hwint;
+
+ return irq_create_mapping(gic_irq_domain,
+ GIC_LOCAL_TO_HWIRQ(local_irq));
+}
+
static void gic_handle_shared_int(bool chained)
{
unsigned int intr;
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v5 09/10] irqchip/irq-mips-cpu: Rework software IRQ handling flow
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (7 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 08/10] MIPS: GIC: Implement get_sw_int hook Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 10/10] MIPS: smp-mt: Rework IPI functions Jiaxun Yang
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Remove the unnecessary irq_mask_ack, irq_eoi, irq_disable and
irq_enable irq_chip hooks for software interrupts; they are simple
edge-triggered IRQs and there is no need for those duplicated
callbacks.
Don't mask the IRQ in mips_mt_cpu_irq_ack; masking is not expected in
the edge IRQ handling flow.
Create an irq_chip for regular (non-MT) mode software interrupts, as
they are edge-triggered IRQs and thus need startup/ack callbacks as
well.
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
v4:
- Reword commit message
- Style fixes
---
drivers/irqchip/irq-mips-cpu.c | 68 +++++++++++++++++++++++++++++-------------
1 file changed, 48 insertions(+), 20 deletions(-)
diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c
index 7b3501485d95..b2318e915d88 100644
--- a/drivers/irqchip/irq-mips-cpu.c
+++ b/drivers/irqchip/irq-mips-cpu.c
@@ -49,7 +49,7 @@ static inline void mask_mips_irq(struct irq_data *d)
irq_disable_hazard();
}
-static struct irq_chip mips_cpu_irq_controller = {
+static const struct irq_chip mips_cpu_irq_controller = {
.name = "MIPS",
.irq_ack = mask_mips_irq,
.irq_mask = mask_mips_irq,
@@ -60,11 +60,33 @@ static struct irq_chip mips_cpu_irq_controller = {
.irq_enable = unmask_mips_irq,
};
+static unsigned int mips_sw_irq_startup(struct irq_data *d)
+{
+ clear_c0_cause(C_SW0 << d->hwirq);
+ back_to_back_c0_hazard();
+ unmask_mips_irq(d);
+ return 0;
+}
+
+static void mips_sw_irq_ack(struct irq_data *d)
+{
+ clear_c0_cause(C_SW0 << d->hwirq);
+ back_to_back_c0_hazard();
+}
+
+static const struct irq_chip mips_cpu_sw_irq_controller = {
+ .name = "MIPS",
+ .irq_startup = mips_sw_irq_startup,
+ .irq_ack = mips_sw_irq_ack,
+ .irq_mask = mask_mips_irq,
+ .irq_unmask = unmask_mips_irq,
+};
+
+#ifdef CONFIG_MIPS_MT
/*
* Basically the same as above but taking care of all the MT stuff
*/
-
-static unsigned int mips_mt_cpu_irq_startup(struct irq_data *d)
+static unsigned int mips_mt_sw_irq_startup(struct irq_data *d)
{
unsigned int vpflags = dvpe();
@@ -76,14 +98,14 @@ static unsigned int mips_mt_cpu_irq_startup(struct irq_data *d)
/*
* While we ack the interrupt interrupts are disabled and thus we don't need
- * to deal with concurrency issues. Same for mips_cpu_irq_end.
+ * to deal with concurrency issues.
*/
-static void mips_mt_cpu_irq_ack(struct irq_data *d)
+static void mips_mt_sw_irq_ack(struct irq_data *d)
{
unsigned int vpflags = dvpe();
+
clear_c0_cause(C_SW0 << d->hwirq);
evpe(vpflags);
- mask_mips_irq(d);
}
#ifdef CONFIG_GENERIC_IRQ_IPI
@@ -108,21 +130,17 @@ static void mips_mt_send_ipi(struct irq_data *d, unsigned int cpu)
}
#endif /* CONFIG_GENERIC_IRQ_IPI */
-
-static struct irq_chip mips_mt_cpu_irq_controller = {
+static const struct irq_chip mips_mt_cpu_sw_irq_controller = {
.name = "MIPS",
- .irq_startup = mips_mt_cpu_irq_startup,
- .irq_ack = mips_mt_cpu_irq_ack,
+ .irq_startup = mips_mt_sw_irq_startup,
+ .irq_ack = mips_mt_sw_irq_ack,
.irq_mask = mask_mips_irq,
- .irq_mask_ack = mips_mt_cpu_irq_ack,
.irq_unmask = unmask_mips_irq,
- .irq_eoi = unmask_mips_irq,
- .irq_disable = mask_mips_irq,
- .irq_enable = unmask_mips_irq,
#ifdef CONFIG_GENERIC_IRQ_IPI
.ipi_send_single = mips_mt_send_ipi,
#endif
};
+#endif
asmlinkage void __weak plat_irq_dispatch(void)
{
@@ -149,17 +167,27 @@ asmlinkage void __weak plat_irq_dispatch(void)
}
}
+#ifdef CONFIG_MIPS_MT
+static inline const struct irq_chip *mips_cpu_get_sw_irqchip(void)
+{
+ return cpu_has_mipsmt ? &mips_mt_cpu_sw_irq_controller : &mips_cpu_sw_irq_controller;
+}
+#else
+static inline const struct irq_chip *mips_cpu_get_sw_irqchip(void)
+{
+ return &mips_cpu_sw_irq_controller;
+}
+#endif
+
static int mips_cpu_intc_map(struct irq_domain *d, unsigned int irq,
irq_hw_number_t hw)
{
- struct irq_chip *chip;
+ const struct irq_chip *chip;
- if (hw < 2 && cpu_has_mipsmt) {
- /* Software interrupts are used for MT/CMT IPI */
- chip = &mips_mt_cpu_irq_controller;
- } else {
+ if (hw < 2)
+ chip = mips_cpu_get_sw_irqchip();
+ else
chip = &mips_cpu_irq_controller;
- }
if (cpu_has_vint)
set_vi_handler(hw, plat_irq_dispatch);
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
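To make the ack-without-mask point concrete, here is a heavily simplified sketch of the generic edge flow, modelled on handle_edge_irq() in kernel/irq/chip.c (the real function also deals with disabled and pending states, and handle_irq_event() is internal to kernel/irq/, so this is illustrative only). The chip's ack callback clears the latched edge up front and the line stays unmasked while the handlers run, so a new edge arriving mid-handler is latched by the hardware instead of being lost. Masking inside the ack callback, as the old MT chip did, works against that flow.

static void edge_flow_sketch(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);

	/* clear the latched edge first; the line is left unmasked */
	chip->irq_ack(&desc->irq_data);

	/*
	 * Run the action handlers (handle_irq_event() in the real flow).
	 * A new edge raised while they run is latched and handled on the
	 * next pass rather than dropped.
	 */
	handle_irq_event(desc);
}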
* [PATCH v5 10/10] MIPS: smp-mt: Rework IPI functions
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
` (8 preceding siblings ...)
2024-09-08 10:20 ` [PATCH v5 09/10] irqchip/irq-mips-cpu: Rework software IRQ handling flow Jiaxun Yang
@ 2024-09-08 10:20 ` Jiaxun Yang
9 siblings, 0 replies; 12+ messages in thread
From: Jiaxun Yang @ 2024-09-08 10:20 UTC (permalink / raw)
To: Thomas Bogendoerfer, Florian Fainelli,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel, Jiaxun Yang
Move the SMP IPI code from irq-mips-cpu to smp-mt, as IPIs are not
really relevant to the CPU intc. In VEIC mode irq-mips-cpu may not be
registered at all and SW interrupts come from EIC controllers.
Implement IPIs with ipi-mux to allow the number of IPIs to be extended
easily.
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
arch/mips/Kconfig | 2 +
arch/mips/kernel/smp-mt.c | 70 +++++++++++++++++++++++
drivers/irqchip/Kconfig | 1 -
drivers/irqchip/irq-mips-cpu.c | 124 +----------------------------------------
4 files changed, 74 insertions(+), 123 deletions(-)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 43da6d596e2b..08ef79093916 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2189,6 +2189,8 @@ config MIPS_MT_SMP
depends on SYS_SUPPORTS_MULTITHREADING && !CPU_MICROMIPS
select CPU_MIPSR2_IRQ_VI
select CPU_MIPSR2_IRQ_EI
+ select GENERIC_IRQ_IPI
+ select GENERIC_IRQ_IPI_MUX
select SYNC_R4K
select MIPS_MT
select SMP
diff --git a/arch/mips/kernel/smp-mt.c b/arch/mips/kernel/smp-mt.c
index 7729cc733421..2f00077dbf07 100644
--- a/arch/mips/kernel/smp-mt.c
+++ b/arch/mips/kernel/smp-mt.c
@@ -10,6 +10,10 @@
#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
#include <linux/compiler.h>
#include <linux/sched/task_stack.h>
#include <linux/smp.h>
@@ -19,6 +23,7 @@
#include <asm/cpu.h>
#include <asm/processor.h>
#include <asm/hardirq.h>
+#include <asm/ipi.h>
#include <asm/mmu_context.h>
#include <asm/time.h>
#include <asm/mipsregs.h>
@@ -26,6 +31,65 @@
#include <asm/mips_mt.h>
#include <asm/mips-cps.h>
+static int vsmp_sw0_virq __ro_after_init;
+
+static void smvp_handle_ipi_irq(struct irq_desc *desc)
+{
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+ chained_irq_enter(chip, desc);
+
+ /* irq-mips-cpu would ack for us, but EIC drivers won't */
+ if (cpu_has_veic) {
+ unsigned int vpflags = dvpe();
+
+ clear_c0_cause(C_SW0);
+ evpe(vpflags);
+ }
+ ipi_mux_process();
+
+ chained_irq_exit(chip, desc);
+}
+
+static void smvp_ipi_send(unsigned int cpu)
+{
+ unsigned long flags;
+ int vpflags;
+
+ local_irq_save(flags);
+
+ /* We can only send IPIs to VPEs within the local core */
+ WARN_ON(!cpus_are_siblings(smp_processor_id(), cpu));
+ vpflags = dvpe();
+ settc(cpu_vpe_id(&cpu_data[cpu]));
+ write_vpe_c0_status(read_vpe_c0_status() | C_SW0);
+ write_vpe_c0_cause(read_vpe_c0_cause() | C_SW0);
+ evpe(vpflags);
+
+ local_irq_restore(flags);
+}
+
+static int __init vsmp_ipi_init(void)
+{
+ int sw0_virq, mux_virq;
+
+ /* SW0 Interrupt for IPI */
+ sw0_virq = get_mips_sw_int(0);
+ if (!sw0_virq)
+ return -EINVAL;
+
+ mux_virq = ipi_mux_create(IPI_MAX, smvp_ipi_send);
+ if (!mux_virq)
+ return -EINVAL;
+
+ irq_set_percpu_devid(sw0_virq);
+ irq_set_chained_handler(sw0_virq, smvp_handle_ipi_irq);
+ mips_smp_ipi_set_virq_range(mux_virq, IPI_MAX);
+ vsmp_sw0_virq = sw0_virq;
+
+ return 0;
+}
+
static void __init smvp_copy_vpe_config(void)
{
write_vpe_c0_status(
@@ -123,6 +187,8 @@ static void vsmp_smp_finish(void)
/* CDFIXME: remove this? */
write_c0_compare(read_c0_count() + (8* mips_hpt_frequency/HZ));
+ enable_percpu_irq(vsmp_sw0_virq, IRQ_TYPE_NONE);
+
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
@@ -226,7 +292,11 @@ static void __init vsmp_smp_setup(void)
static void __init vsmp_prepare_cpus(unsigned int max_cpus)
{
+ int rc;
+
mips_mt_set_cpuoptions();
+ rc = vsmp_ipi_init();
+ WARN_ON(rc);
}
const struct plat_smp_ops vsmp_smp_ops = {
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 763070be0088..786ed8a6b719 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -192,7 +192,6 @@ config MADERA_IRQ
config IRQ_MIPS_CPU
bool
select GENERIC_IRQ_CHIP
- select GENERIC_IRQ_IPI if SMP && SYS_SUPPORTS_MULTITHREADING
select IRQ_DOMAIN
select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP
diff --git a/drivers/irqchip/irq-mips-cpu.c b/drivers/irqchip/irq-mips-cpu.c
index b2318e915d88..ec6999b5e73f 100644
--- a/drivers/irqchip/irq-mips-cpu.c
+++ b/drivers/irqchip/irq-mips-cpu.c
@@ -34,8 +34,7 @@
#include <asm/mipsmtregs.h>
#include <asm/setup.h>
-static struct irq_domain *irq_domain;
-static struct irq_domain *ipi_domain;
+static struct irq_domain *irq_domain __read_mostly;
static inline void unmask_mips_irq(struct irq_data *d)
{
@@ -108,37 +107,12 @@ static void mips_mt_sw_irq_ack(struct irq_data *d)
evpe(vpflags);
}
-#ifdef CONFIG_GENERIC_IRQ_IPI
-
-static void mips_mt_send_ipi(struct irq_data *d, unsigned int cpu)
-{
- irq_hw_number_t hwirq = irqd_to_hwirq(d);
- unsigned long flags;
- int vpflags;
-
- local_irq_save(flags);
-
- /* We can only send IPIs to VPEs within the local core */
- WARN_ON(!cpus_are_siblings(smp_processor_id(), cpu));
-
- vpflags = dvpe();
- settc(cpu_vpe_id(&cpu_data[cpu]));
- write_vpe_c0_cause(read_vpe_c0_cause() | (C_SW0 << hwirq));
- evpe(vpflags);
-
- local_irq_restore(flags);
-}
-
-#endif /* CONFIG_GENERIC_IRQ_IPI */
static const struct irq_chip mips_mt_cpu_sw_irq_controller = {
.name = "MIPS",
.irq_startup = mips_mt_sw_irq_startup,
.irq_ack = mips_mt_sw_irq_ack,
.irq_mask = mask_mips_irq,
.irq_unmask = unmask_mips_irq,
-#ifdef CONFIG_GENERIC_IRQ_IPI
- .ipi_send_single = mips_mt_send_ipi,
-#endif
};
#endif
@@ -154,15 +128,8 @@ asmlinkage void __weak plat_irq_dispatch(void)
pending >>= CAUSEB_IP;
while (pending) {
- struct irq_domain *d;
-
irq = fls(pending) - 1;
- if (IS_ENABLED(CONFIG_GENERIC_IRQ_IPI) && irq < 2)
- d = ipi_domain;
- else
- d = irq_domain;
-
- do_domain_IRQ(d, irq);
+ do_domain_IRQ(irq_domain, irq);
pending &= ~BIT(irq);
}
}
@@ -202,86 +169,6 @@ static const struct irq_domain_ops mips_cpu_intc_irq_domain_ops = {
.xlate = irq_domain_xlate_onecell,
};
-#ifdef CONFIG_GENERIC_IRQ_IPI
-
-struct cpu_ipi_domain_state {
- DECLARE_BITMAP(allocated, 2);
-};
-
-static int mips_cpu_ipi_alloc(struct irq_domain *domain, unsigned int virq,
- unsigned int nr_irqs, void *arg)
-{
- struct cpu_ipi_domain_state *state = domain->host_data;
- unsigned int i, hwirq;
- int ret;
-
- for (i = 0; i < nr_irqs; i++) {
- hwirq = find_first_zero_bit(state->allocated, 2);
- if (hwirq == 2)
- return -EBUSY;
- bitmap_set(state->allocated, hwirq, 1);
-
- ret = irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq,
- &mips_mt_cpu_irq_controller,
- NULL);
- if (ret)
- return ret;
-
- ret = irq_domain_set_hwirq_and_chip(domain->parent, virq + i, hwirq,
- &mips_mt_cpu_irq_controller,
- NULL);
-
- if (ret)
- return ret;
-
- ret = irq_set_irq_type(virq + i, IRQ_TYPE_LEVEL_HIGH);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
-static int mips_cpu_ipi_match(struct irq_domain *d, struct device_node *node,
- enum irq_domain_bus_token bus_token)
-{
- bool is_ipi;
-
- switch (bus_token) {
- case DOMAIN_BUS_IPI:
- is_ipi = d->bus_token == bus_token;
- return (!node || (to_of_node(d->fwnode) == node)) && is_ipi;
- default:
- return 0;
- }
-}
-
-static const struct irq_domain_ops mips_cpu_ipi_chip_ops = {
- .alloc = mips_cpu_ipi_alloc,
- .match = mips_cpu_ipi_match,
-};
-
-static void mips_cpu_register_ipi_domain(struct device_node *of_node)
-{
- struct cpu_ipi_domain_state *ipi_domain_state;
-
- ipi_domain_state = kzalloc(sizeof(*ipi_domain_state), GFP_KERNEL);
- ipi_domain = irq_domain_add_hierarchy(irq_domain,
- IRQ_DOMAIN_FLAG_IPI_SINGLE,
- 2, of_node,
- &mips_cpu_ipi_chip_ops,
- ipi_domain_state);
- if (!ipi_domain)
- panic("Failed to add MIPS CPU IPI domain");
- irq_domain_update_bus_token(ipi_domain, DOMAIN_BUS_IPI);
-}
-
-#else /* !CONFIG_GENERIC_IRQ_IPI */
-
-static inline void mips_cpu_register_ipi_domain(struct device_node *of_node) {}
-
-#endif /* !CONFIG_GENERIC_IRQ_IPI */
-
int mips_cpu_get_sw_int(int hwint)
{
/* Only 0 and 1 for SW INT */
@@ -304,13 +191,6 @@ static void __init __mips_cpu_irq_init(struct device_node *of_node)
NULL);
if (!irq_domain)
panic("Failed to add irqdomain for MIPS CPU");
-
- /*
- * Only proceed to register the software interrupt IPI implementation
- * for CPUs which implement the MIPS MT (multi-threading) ASE.
- */
- if (cpu_has_mipsmt)
- mips_cpu_register_ipi_domain(of_node);
}
void __init mips_cpu_irq_init(void)
--
2.46.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
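For readers unfamiliar with the generic ipi-mux layer adopted here, the contract is roughly as follows; the demo_* names are illustrative only, while ipi_mux_create(), ipi_mux_process(), IPI_MAX (from the series' new asm/ipi.h) and mips_smp_ipi_set_virq_range() are the real interfaces relied upon. The platform supplies one callback that kicks the single hardware IPI line of a target CPU, ipi_mux_create() hands back a range of IPI_MAX child virqs, and the hardware line's handler simply calls ipi_mux_process() to dispatch whichever logical IPIs are pending. This is, in sketch form, what vsmp_ipi_init() in the patch above does.

static void demo_kick_hw_ipi(unsigned int cpu)
{
	/* raise the one real IPI line (SW0 here) targeting @cpu */
}

static void demo_hw_ipi_handler(struct irq_desc *desc)
{
	/* ack/clear the hardware line if the parent chip does not, then demux */
	ipi_mux_process();
}

static int __init demo_ipi_init(void)
{
	int hw_virq = get_mips_sw_int(0);
	int mux_virq = ipi_mux_create(IPI_MAX, demo_kick_hw_ipi);

	if (hw_virq <= 0 || mux_virq <= 0)
		return -ENODEV;

	irq_set_chained_handler(hw_virq, demo_hw_ipi_handler);
	/* hand the muxed child virqs over to the arch SMP code */
	mips_smp_ipi_set_virq_range(mux_virq, IPI_MAX);
	return 0;
}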
* Re: [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable
2024-09-08 10:20 ` [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable Jiaxun Yang
@ 2024-09-12 3:47 ` Florian Fainelli
0 siblings, 0 replies; 12+ messages in thread
From: Florian Fainelli @ 2024-09-12 3:47 UTC (permalink / raw)
To: Jiaxun Yang, Thomas Bogendoerfer,
Broadcom internal kernel review list, Huacai Chen,
Thomas Gleixner, Serge Semin, Paul Burton
Cc: linux-mips, linux-kernel
On 9/8/2024 3:20 AM, Jiaxun Yang wrote:
> Define enum ipi_message_type as other architectures did to
> allow easy extension to number of IPI interrupts, fiddle
> around platform IPI code to adopt to the new infra, add
> extensive BUILD_BUG_ON on IPI numbers to ensure future
> extensions won't break existing platforms.
>
> IPI related stuff are pulled to asm/ipi.h to avoid include
> linux/interrupt.h in asm/smp.h.
>
> Tested-by: Serge Semin <fancer.lancer@gmail.com>
> Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> ---
> arch/mips/cavium-octeon/smp.c | 111 ++++++++++--------------------
> arch/mips/fw/arc/init.c | 1 -
> arch/mips/include/asm/ipi.h | 34 ++++++++++
> arch/mips/include/asm/octeon/octeon.h | 2 +
> arch/mips/include/asm/smp-ops.h | 8 +--
> arch/mips/include/asm/smp.h | 41 +++++------
> arch/mips/kernel/smp-bmips.c | 43 ++++++------
For smp-bmips.c:
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
--
Florian
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2024-09-12 3:47 UTC | newest]
Thread overview: 12+ messages
2024-09-08 10:20 [PATCH v5 00/10] MIPS: IPI Improvements Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 01/10] MIPS: smp: Make IPI interrupts scalable Jiaxun Yang
2024-09-12 3:47 ` Florian Fainelli
2024-09-08 10:20 ` [PATCH v5 02/10] MIPS: smp: Manage IPI interrupts as percpu_devid interrupts Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 03/10] MIPS: smp: Provide platform IPI virq & domain hooks Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 04/10] MIPS: Move mips_smp_ipi_init call after prepare_cpus Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 05/10] MIPS: smp: Implement IPI stats Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 06/10] irqchip/irq-mips-gic: Switch to ipi_mux Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 07/10] MIPS: Implement get_mips_sw_int hook Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 08/10] MIPS: GIC: Implement get_sw_int hook Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 09/10] irqchip/irq-mips-cpu: Rework software IRQ handling flow Jiaxun Yang
2024-09-08 10:20 ` [PATCH v5 10/10] MIPS: smp-mt: Rework IPI functions Jiaxun Yang