* [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration
@ 2024-07-11 8:26 Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 01/11] MIPS: CPS: Add a couple of multi-cluster utility functions Aleksandar Rikalo
` (10 more replies)
0 siblings, 11 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
Taken from Paul Burton's MIPS repo with minor changes from Chao-ying Fu.
Tested with 64r6el_defconfig on the Boston board in 2-cluster/2-VPU and
1-cluster/4-VPU configurations.
v5:
- Drop FDC related changes (patches 12, 13, and 14).
- Apply changes suggested by Thomas Gleixner (patches 3 and 4).
- Add #include <linux/cpumask.h> to patch 1, as suggested by Thomas Bogendoerfer.
- Add Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> for patch 08/11.
- Add Tested-by: Serge Semin <fancer.lancer@gmail.com> for the entire series.
- Correct some commit messages.
v4:
- Re-base onto the master branch, with no functionality impact.
- Refactor MIPS FDC driver in the context of multicluster support.
v3:
- Add Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> for patch 02/12.
- Add the changes requested by Marc Zyngier for patch 03/12.
- Remove patch 11/12 (a consequence of a discussion between Jiaxun Yang
and Marc Zyngier).
- Re-base onto the master branch, with no functionality impact.
v2:
- Apply correct Signed-off-by to avoid confusion.
Chao-ying Fu (1):
irqchip/mips-gic: Setup defaults in each cluster
Paul Burton (10):
MIPS: CPS: Add a couple of multi-cluster utility functions
MIPS: GIC: Generate redirect block accessors
irqchip/mips-gic: Introduce for_each_online_cpu_gic()
irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic()
irqchip/mips-gic: Multi-cluster support
clocksource: mips-gic-timer: Always use cluster 0 counter as
clocksource
clocksource: mips-gic-timer: Enable counter when CPUs start
MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core
MIPS: CPS: Introduce struct cluster_boot_config
MIPS: CPS: Boot CPUs in secondary clusters
arch/mips/include/asm/mips-cm.h | 18 ++
arch/mips/include/asm/mips-cps.h | 39 ++++
arch/mips/include/asm/mips-gic.h | 50 +++--
arch/mips/include/asm/smp-cps.h | 7 +-
arch/mips/kernel/asm-offsets.c | 3 +
arch/mips/kernel/cps-vec.S | 19 +-
arch/mips/kernel/mips-cm.c | 41 +++-
arch/mips/kernel/pm-cps.c | 35 ++--
arch/mips/kernel/smp-cps.c | 288 ++++++++++++++++++++++-----
drivers/clocksource/mips-gic-timer.c | 45 ++++-
drivers/irqchip/Kconfig | 1 +
drivers/irqchip/irq-mips-gic.c | 257 ++++++++++++++++++++----
12 files changed, 670 insertions(+), 133 deletions(-)
--
2.25.1
* [PATCH v5 01/11] MIPS: CPS: Add a couple of multi-cluster utility functions
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 02/11] MIPS: GIC: Generate redirect block accessors Aleksandar Rikalo
` (9 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
This patch introduces a couple of utility functions which help later
patches introduce support for multi-cluster systems.
- mips_cps_multicluster_cpus() allows its caller to determine whether
the system includes CPUs spread across multiple clusters. This is
useful because in some cases behaviour can be made more optimal by
taking this knowledge into account. The means by which we check this is
dependent upon the way we probe CPUs & assign their numbers, so
keeping this knowledge confined here in arch/mips/ seems appropriate.
- mips_cps_first_online_in_cluster() allows its caller to determine
whether it is running upon the first CPU online within its cluster.
This information is useful in cases where some cluster-wide
configuration may need to occur, but should not be repeated if
another CPU in the cluster is already online. Similarly to the above
this is determined based upon knowledge of CPU numbering so it makes
sense to keep that knowledge in arch/mips/. The function is defined
in mips-cm.c rather than in asm/mips-cps.h in order to allow us to
use asm/cpu-info.h & linux/smp.h without encountering an include
nightmare.
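As a rough usage sketch, the two helpers are intended to be combined as
follows; the surrounding function and the pr_info() call are hypothetical
illustrations, not part of this patch:

#include <linux/printk.h>
#include <linux/smp.h>
#include <asm/cpu-info.h>
#include <asm/mips-cps.h>

/* Hypothetical bring-up hook - only the two helpers come from this patch. */
static void example_cpu_online_setup(void)
{
        /* Nothing cluster-related to do if all CPUs share one cluster. */
        if (!mips_cps_multicluster_cpus())
                return;

        /*
         * Perform cluster-wide configuration exactly once per cluster, on
         * the first CPU brought online within it.
         */
        if (mips_cps_first_online_in_cluster())
                pr_info("first CPU online in cluster %u\n",
                        cpu_cluster(&current_cpu_data));
}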
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
arch/mips/include/asm/mips-cps.h | 39 ++++++++++++++++++++++++++++++++
arch/mips/kernel/mips-cm.c | 37 ++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)
diff --git a/arch/mips/include/asm/mips-cps.h b/arch/mips/include/asm/mips-cps.h
index c077e8d100f5..917009b80e69 100644
--- a/arch/mips/include/asm/mips-cps.h
+++ b/arch/mips/include/asm/mips-cps.h
@@ -8,6 +8,7 @@
#define __MIPS_ASM_MIPS_CPS_H__
#include <linux/bitfield.h>
+#include <linux/cpumask.h>
#include <linux/io.h>
#include <linux/types.h>
@@ -228,4 +229,42 @@ static inline unsigned int mips_cps_numvps(unsigned int cluster, unsigned int co
return FIELD_GET(CM_GCR_Cx_CONFIG_PVPE, cfg + 1);
}
+/**
+ * mips_cps_multicluster_cpus() - Detect whether CPUs are in multiple clusters
+ *
+ * Determine whether the system includes CPUs in multiple clusters - ie.
+ * whether we can treat the system as single or multi-cluster as far as CPUs
+ * are concerned. Note that this is slightly different to simply checking
+ * whether multiple clusters are present - it is possible for there to be
+ * clusters which contain no CPUs, which this function will effectively ignore.
+ *
+ * Returns true if CPUs are spread across multiple clusters, else false.
+ */
+static inline bool mips_cps_multicluster_cpus(void)
+{
+ unsigned int first_cl, last_cl;
+
+ /*
+ * CPUs are numbered sequentially by cluster - ie. CPUs 0..X will be in
+ * cluster 0, CPUs X+1..Y in cluster 1, CPUs Y+1..Z in cluster 2 etc.
+ *
+ * Thus we can detect multiple clusters trivially by checking whether
+ * the first & last CPUs belong to the same cluster.
+ */
+ first_cl = cpu_cluster(&boot_cpu_data);
+ last_cl = cpu_cluster(&cpu_data[nr_cpu_ids - 1]);
+ return first_cl != last_cl;
+}
+
+/**
+ * mips_cps_first_online_in_cluster() - Detect if CPU is first online in cluster
+ *
+ * Determine whether the local CPU is the first to be brought online in its
+ * cluster - that is, whether no other CPUs in the local cluster are already
+ * online.
+ *
+ * Returns true if this CPU is first online, else false.
+ */
+extern unsigned int mips_cps_first_online_in_cluster(void);
+
#endif /* __MIPS_ASM_MIPS_CPS_H__ */
diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
index 3a115fab5573..3eb2cfb893e1 100644
--- a/arch/mips/kernel/mips-cm.c
+++ b/arch/mips/kernel/mips-cm.c
@@ -512,3 +512,40 @@ void mips_cm_error_report(void)
/* reprime cause register */
write_gcr_error_cause(cm_error);
}
+
+unsigned int mips_cps_first_online_in_cluster(void)
+{
+ unsigned int local_cl;
+ int i;
+
+ local_cl = cpu_cluster(&current_cpu_data);
+
+ /*
+ * We rely upon knowledge that CPUs are numbered sequentially by
+ * cluster - ie. CPUs 0..X will be in cluster 0, CPUs X+1..Y in cluster
+ * 1, CPUs Y+1..Z in cluster 2 etc. This means that CPUs in the same
+ * cluster will immediately precede or follow one another.
+ *
+ * First we scan backwards, until we find an online CPU in the cluster
+ * or we move on to another cluster.
+ */
+ for (i = smp_processor_id() - 1; i >= 0; i--) {
+ if (cpu_cluster(&cpu_data[i]) != local_cl)
+ break;
+ if (!cpu_online(i))
+ continue;
+ return false;
+ }
+
+ /* Then do the same for higher numbered CPUs */
+ for (i = smp_processor_id() + 1; i < nr_cpu_ids; i++) {
+ if (cpu_cluster(&cpu_data[i]) != local_cl)
+ break;
+ if (!cpu_online(i))
+ continue;
+ return false;
+ }
+
+ /* We found no online CPUs in the local cluster */
+ return true;
+}
--
2.25.1
* [PATCH v5 02/11] MIPS: GIC: Generate redirect block accessors
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 01/11] MIPS: CPS: Add a couple of multi-cluster utility functions Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic() Aleksandar Rikalo
` (8 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
With CM 3.5 the "core-other" register block evolves into the "redirect"
register block, which is capable of accessing not only the core local
registers of other cores but also the shared/global registers of other
clusters.
This patch generates accessor functions for shared/global registers
accessed via the redirect block, with "redir_" inserted after "gic_" in
their names. For example the accessor function:
read_gic_config()
...accesses the GIC_CONFIG register of the GIC in the local cluster.
With this patch a new function:
read_gic_redir_config()
...is added which accesses the GIC_CONFIG register of the GIC in
whichever cluster the GCR_CL_REDIRECT register is configured to access.
This mirrors the similar redirect block accessors already provided for
the CM & CPC.
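For the GIC_SH_CONFIG example above, the generated accessors are roughly
equivalent to the following (simplified: the real CPS_ACCESSOR_* macros also
derive the access width from their size argument):

/* Local cluster - unchanged behaviour */
static inline void __iomem *addr_gic_config(void)
{
        return mips_gic_base + MIPS_GIC_SHARED_OFS + 0x000;
}

static inline u32 read_gic_config(void)
{
        return __raw_readl(addr_gic_config());
}

static inline void write_gic_config(u32 val)
{
        __raw_writel(val, addr_gic_config());
}

/* New: cluster selected via GCR_CL_REDIRECT, ie. mips_cm_lock_other() */
static inline void __iomem *addr_gic_redir_config(void)
{
        return mips_gic_base + MIPS_GIC_REDIR_OFS + 0x000;
}

static inline u32 read_gic_redir_config(void)
{
        return __raw_readl(addr_gic_redir_config());
}

static inline void write_gic_redir_config(u32 val)
{
        __raw_writel(val, addr_gic_redir_config());
}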
Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
arch/mips/include/asm/mips-gic.h | 50 ++++++++++++++++++++++----------
1 file changed, 34 insertions(+), 16 deletions(-)
diff --git a/arch/mips/include/asm/mips-gic.h b/arch/mips/include/asm/mips-gic.h
index 084cac1c5ea2..fd9da5e3beaa 100644
--- a/arch/mips/include/asm/mips-gic.h
+++ b/arch/mips/include/asm/mips-gic.h
@@ -28,11 +28,13 @@ extern void __iomem *mips_gic_base;
/* For read-only shared registers */
#define GIC_ACCESSOR_RO(sz, off, name) \
- CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_SHARED_OFS + off, name)
+ CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_SHARED_OFS + off, name) \
+ CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_REDIR_OFS + off, redir_##name)
/* For read-write shared registers */
#define GIC_ACCESSOR_RW(sz, off, name) \
- CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_SHARED_OFS + off, name)
+ CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_SHARED_OFS + off, name) \
+ CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_REDIR_OFS + off, redir_##name)
/* For read-only local registers */
#define GIC_VX_ACCESSOR_RO(sz, off, name) \
@@ -45,7 +47,7 @@ extern void __iomem *mips_gic_base;
CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_REDIR_OFS + off, vo_##name)
/* For read-only shared per-interrupt registers */
-#define GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
+#define _GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
static inline void __iomem *addr_gic_##name(unsigned int intr) \
{ \
return mips_gic_base + (off) + (intr * (stride)); \
@@ -58,8 +60,8 @@ static inline unsigned int read_gic_##name(unsigned int intr) \
}
/* For read-write shared per-interrupt registers */
-#define GIC_ACCESSOR_RW_INTR_REG(sz, off, stride, name) \
- GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
+#define _GIC_ACCESSOR_RW_INTR_REG(sz, off, stride, name) \
+ _GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
\
static inline void write_gic_##name(unsigned int intr, \
unsigned int val) \
@@ -68,22 +70,30 @@ static inline void write_gic_##name(unsigned int intr, \
__raw_writel(val, addr_gic_##name(intr)); \
}
+#define GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
+ _GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
+ _GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, stride, redir_##name)
+
+#define GIC_ACCESSOR_RW_INTR_REG(sz, off, stride, name) \
+ _GIC_ACCESSOR_RW_INTR_REG(sz, off, stride, name) \
+ _GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, stride, redir_##name)
+
/* For read-only local per-interrupt registers */
#define GIC_VX_ACCESSOR_RO_INTR_REG(sz, off, stride, name) \
- GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off, \
+ _GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off, \
stride, vl_##name) \
- GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, \
+ _GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, \
stride, vo_##name)
/* For read-write local per-interrupt registers */
#define GIC_VX_ACCESSOR_RW_INTR_REG(sz, off, stride, name) \
- GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off, \
+ _GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off, \
stride, vl_##name) \
- GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, \
+ _GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off, \
stride, vo_##name)
/* For read-only shared bit-per-interrupt registers */
-#define GIC_ACCESSOR_RO_INTR_BIT(off, name) \
+#define _GIC_ACCESSOR_RO_INTR_BIT(off, name) \
static inline void __iomem *addr_gic_##name(void) \
{ \
return mips_gic_base + (off); \
@@ -106,8 +116,8 @@ static inline unsigned int read_gic_##name(unsigned int intr) \
}
/* For read-write shared bit-per-interrupt registers */
-#define GIC_ACCESSOR_RW_INTR_BIT(off, name) \
- GIC_ACCESSOR_RO_INTR_BIT(off, name) \
+#define _GIC_ACCESSOR_RW_INTR_BIT(off, name) \
+ _GIC_ACCESSOR_RO_INTR_BIT(off, name) \
\
static inline void write_gic_##name(unsigned int intr) \
{ \
@@ -146,6 +156,14 @@ static inline void change_gic_##name(unsigned int intr, \
} \
}
+#define GIC_ACCESSOR_RO_INTR_BIT(off, name) \
+ _GIC_ACCESSOR_RO_INTR_BIT(off, name) \
+ _GIC_ACCESSOR_RO_INTR_BIT(MIPS_GIC_REDIR_OFS + off, redir_##name)
+
+#define GIC_ACCESSOR_RW_INTR_BIT(off, name) \
+ _GIC_ACCESSOR_RW_INTR_BIT(off, name) \
+ _GIC_ACCESSOR_RW_INTR_BIT(MIPS_GIC_REDIR_OFS + off, redir_##name)
+
/* For read-only local bit-per-interrupt registers */
#define GIC_VX_ACCESSOR_RO_INTR_BIT(sz, off, name) \
GIC_ACCESSOR_RO_INTR_BIT(sz, MIPS_GIC_LOCAL_OFS + off, \
@@ -155,10 +173,10 @@ static inline void change_gic_##name(unsigned int intr, \
/* For read-write local bit-per-interrupt registers */
#define GIC_VX_ACCESSOR_RW_INTR_BIT(sz, off, name) \
- GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_LOCAL_OFS + off, \
- vl_##name) \
- GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_REDIR_OFS + off, \
- vo_##name)
+ _GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_LOCAL_OFS + off, \
+ vl_##name) \
+ _GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_REDIR_OFS + off, \
+ vo_##name)
/* GIC_SH_CONFIG - Information about the GIC configuration */
GIC_ACCESSOR_RW(32, 0x000, config)
--
2.25.1
* [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic()
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 01/11] MIPS: CPS: Add a couple of multi-cluster utility functions Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 02/11] MIPS: GIC: Generate redirect block accessors Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 11:52 ` Thomas Gleixner
2024-07-11 8:26 ` [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic() Aleksandar Rikalo
` (7 subsequent siblings)
10 siblings, 1 reply; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
Parts of code in the MIPS GIC driver operate on the GIC local register
block for each online CPU, accessing each via the GIC's other/redirect
register block.
Abstract the process of iterating over online CPUs & configuring the
other/redirect region to access their registers through a new
for_each_online_cpu_gic() macro.
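A before/after sketch of the conversion, taken from the local interrupt mask
path changed in the diff below ('intr', 'cpu' and 'flags' are locals of the
surrounding function):

/* Before: every caller open-codes the locking and VL_OTHER setup */
raw_spin_lock_irqsave(&gic_lock, flags);
for_each_online_cpu(cpu) {
        write_gic_vl_other(mips_cm_vp_id(cpu));
        write_gic_vo_rmask(BIT(intr));
}
raw_spin_unlock_irqrestore(&gic_lock, flags);

/* After: the macro takes gic_lock and moves VL_OTHER on each iteration */
for_each_online_cpu_gic(cpu, &gic_lock)
        write_gic_vo_rmask(BIT(intr));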
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/irqchip/irq-mips-gic.c | 59 +++++++++++++++++++++++-----------
1 file changed, 41 insertions(+), 18 deletions(-)
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 76253e864f23..6c7a7d2f0438 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -66,6 +66,44 @@ static struct gic_all_vpes_chip_data {
bool mask;
} gic_all_vpes_chip_data[GIC_NUM_LOCAL_INTRS];
+static int __gic_with_next_online_cpu(int prev)
+{
+ unsigned int cpu;
+
+ /* Discover the next online CPU */
+ cpu = cpumask_next(prev, cpu_online_mask);
+
+ /* If there isn't one, we're done */
+ if (cpu >= nr_cpu_ids)
+ return cpu;
+
+ /*
+ * Move the access lock to the next CPU's GIC local register block.
+ *
+ * Set GIC_VL_OTHER. Since the caller holds gic_lock nothing can
+ * clobber the written value.
+ */
+ write_gic_vl_other(mips_cm_vp_id(cpu));
+
+ return cpu;
+}
+
+/**
+ * for_each_online_cpu_gic() - Iterate over online CPUs, access local registers
+ * @cpu: An integer variable to hold the current CPU number
+ * @gic_lock: A pointer to raw spin lock used as a guard
+ *
+ * Iterate over online CPUs & configure the other/redirect register region to
+ * access each CPU's GIC local register block, which can be accessed from the
+ * loop body using read_gic_vo_*() or write_gic_vo_*() accessor functions or
+ * their derivatives.
+ */
+#define for_each_online_cpu_gic(cpu, gic_lock) \
+ guard(raw_spinlock_irqsave)(gic_lock); \
+ for ((cpu) = __gic_with_next_online_cpu(-1); \
+ (cpu) < nr_cpu_ids; \
+ (cpu) = __gic_with_next_online_cpu(cpu))
+
static void gic_clear_pcpu_masks(unsigned int intr)
{
unsigned int i;
@@ -350,37 +388,27 @@ static struct irq_chip gic_local_irq_controller = {
static void gic_mask_local_irq_all_vpes(struct irq_data *d)
{
struct gic_all_vpes_chip_data *cd;
- unsigned long flags;
int intr, cpu;
intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
cd = irq_data_get_irq_chip_data(d);
cd->mask = false;
- raw_spin_lock_irqsave(&gic_lock, flags);
- for_each_online_cpu(cpu) {
- write_gic_vl_other(mips_cm_vp_id(cpu));
+ for_each_online_cpu_gic(cpu, &gic_lock)
write_gic_vo_rmask(BIT(intr));
- }
- raw_spin_unlock_irqrestore(&gic_lock, flags);
}
static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
{
struct gic_all_vpes_chip_data *cd;
- unsigned long flags;
int intr, cpu;
intr = GIC_HWIRQ_TO_LOCAL(d->hwirq);
cd = irq_data_get_irq_chip_data(d);
cd->mask = true;
- raw_spin_lock_irqsave(&gic_lock, flags);
- for_each_online_cpu(cpu) {
- write_gic_vl_other(mips_cm_vp_id(cpu));
+ for_each_online_cpu_gic(cpu, &gic_lock)
write_gic_vo_smask(BIT(intr));
- }
- raw_spin_unlock_irqrestore(&gic_lock, flags);
}
static void gic_all_vpes_irq_cpu_online(void)
@@ -469,7 +497,6 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
irq_hw_number_t hwirq)
{
struct gic_all_vpes_chip_data *cd;
- unsigned long flags;
unsigned int intr;
int err, cpu;
u32 map;
@@ -533,12 +560,8 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
if (!gic_local_irq_is_routable(intr))
return -EPERM;
- raw_spin_lock_irqsave(&gic_lock, flags);
- for_each_online_cpu(cpu) {
- write_gic_vl_other(mips_cm_vp_id(cpu));
+ for_each_online_cpu_gic(cpu, &gic_lock)
write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
- }
- raw_spin_unlock_irqrestore(&gic_lock, flags);
return 0;
}
--
2.25.1
* [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic()
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (2 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic() Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-12 4:38 ` kernel test robot
2024-07-11 8:26 ` [PATCH v5 05/11] irqchip/mips-gic: Setup defaults in each cluster Aleksandar Rikalo
` (6 subsequent siblings)
10 siblings, 1 reply; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
Use the CM's GCR_CL_REDIRECT register to access registers in remote
clusters, so that users of for_each_online_cpu_gic() gain multi-cluster
support with no further changes.
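For reference, a minimal sketch of the redirect access pattern this relies
upon for remote clusters ('cl' and 'val' are placeholder variables; the
redir_ accessors come from the earlier redirect accessor patch):

unsigned int cl = 1;    /* placeholder: some remote cluster number */
u32 val;

/* Point the redirect block at cluster 'cl', then use redir_ accessors */
mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
val = read_gic_redir_config();  /* reads GIC_CONFIG of cluster 'cl' */
mips_cm_unlock_other();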
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/irqchip/irq-mips-gic.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 6c7a7d2f0438..e7358d3f4e74 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -98,10 +98,17 @@ static int __gic_with_next_online_cpu(int prev)
* loop body using read_gic_vo_*() or write_gic_vo_*() accessor functions or
* their derivatives.
*/
+static inline void gic_unlock_cluster(void)
+{
+ if (mips_cps_multicluster_cpus())
+ mips_cm_unlock_other();
+}
+
#define for_each_online_cpu_gic(cpu, gic_lock) \
guard(raw_spinlock_irqsave)(gic_lock); \
for ((cpu) = __gic_with_next_online_cpu(-1); \
(cpu) < nr_cpu_ids; \
+ gic_unlock_cluster(), \
(cpu) = __gic_with_next_online_cpu(cpu))
static void gic_clear_pcpu_masks(unsigned int intr)
--
2.25.1
* [PATCH v5 05/11] irqchip/mips-gic: Setup defaults in each cluster
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (3 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic() Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 06/11] irqchip/mips-gic: Multi-cluster support Aleksandar Rikalo
` (5 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Chao-ying Fu <cfu@wavecomp.com>
In multi-cluster MIPS I6500 systems, there is a GIC per cluster.
The default shared interrupt setup configured in gic_of_init() will only
apply to the GIC in the cluster containing the boot CPU, leaving the GICs
of other clusters unconfigured.
Configure the GICs in the other clusters in the same way.
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/irqchip/irq-mips-gic.c | 30 ++++++++++++++++++++++++------
1 file changed, 24 insertions(+), 6 deletions(-)
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index e7358d3f4e74..4fd6e316616e 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -764,7 +764,7 @@ static int gic_cpu_startup(unsigned int cpu)
static int __init gic_of_init(struct device_node *node,
struct device_node *parent)
{
- unsigned int cpu_vec, i, gicconfig;
+ unsigned int cpu_vec, i, gicconfig, cl, nclusters;
unsigned long reserved;
phys_addr_t gic_base;
struct resource res;
@@ -845,11 +845,29 @@ static int __init gic_of_init(struct device_node *node,
board_bind_eic_interrupt = &gic_bind_eic_interrupt;
- /* Setup defaults */
- for (i = 0; i < gic_shared_intrs; i++) {
- change_gic_pol(i, GIC_POL_ACTIVE_HIGH);
- change_gic_trig(i, GIC_TRIG_LEVEL);
- write_gic_rmask(i);
+ /*
+ * Initialise each cluster's GIC shared registers to sane default
+ * values. Doing this here, rather than in gic_cpu_startup() for each
+ * CPU, avoids erasing the IPI setup.
+ */
+ nclusters = mips_cps_numclusters();
+ for (cl = 0; cl < nclusters; cl++) {
+ if (cl == cpu_cluster(&current_cpu_data)) {
+ for (i = 0; i < gic_shared_intrs; i++) {
+ change_gic_pol(i, GIC_POL_ACTIVE_HIGH);
+ change_gic_trig(i, GIC_TRIG_LEVEL);
+ write_gic_rmask(i);
+ }
+ } else {
+ mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+ for (i = 0; i < gic_shared_intrs; i++) {
+ change_gic_redir_pol(i, GIC_POL_ACTIVE_HIGH);
+ change_gic_redir_trig(i, GIC_TRIG_LEVEL);
+ write_gic_redir_rmask(i);
+ }
+ mips_cm_unlock_other();
+ }
}
return cpuhp_setup_state(CPUHP_AP_IRQ_MIPS_GIC_STARTING,
--
2.25.1
* [PATCH v5 06/11] irqchip/mips-gic: Multi-cluster support
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (4 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 05/11] irqchip/mips-gic: Setup defaults in each cluster Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 07/11] clocksource: mips-gic-timer: Always use cluster 0 counter as clocksource Aleksandar Rikalo
` (4 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
The MIPS I6500 CPU & CM (Coherence Manager) 3.5 introduce the concept of
multiple clusters to the system. In these systems, each cluster contains
its own GIC, so the GIC isn't truly global any longer. Access to
registers in the GICs of remote clusters is possible using a redirect
register block much like the redirect register blocks provided by the
CM & CPC, and configured through the same GCR_REDIRECT register that
mips_cm_lock_other() abstraction builds upon.
It is expected that external interrupts are connected identically on all
clusters. That is, if there is a device providing an interrupt connected
to GIC interrupt pin 0 then it should be connected to pin 0 of every GIC
in the system. For the most part, the GIC can be treated as though it is
still truly global, so long as interrupts in the cluster are configured
properly.
This patch introduces support for such multi-cluster systems in the
MIPS GIC irqchip driver. A newly introduced gic_irq_lock_cluster()
function allows:
1) Configure access to a GIC in a remote cluster via the redirect
register block, using mips_cm_lock_other().
Or:
2) Detect that the interrupt in question is affine to the local
cluster and plain old GIC register access to the GIC in the
local cluster should be used.
It is possible to access the local cluster's GIC registers via the
redirect block, but keeping the special case for them is both good for
performance (because we avoid the locking & indirection overhead of
using the redirect block) and necessary to maintain compatibility with
systems using CM revisions prior to 3.5 which don't support the redirect
block.
The gic_irq_lock_cluster() function relies upon an IRQ's effective
affinity in order to discover which cluster the IRQ is affine to. In
order to track this & allow it to be updated at an appropriate point
during gic_set_affinity() we select the generic support for effective
affinity using CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK.
gic_set_affinity() is the one function which gains much complexity. It
now deconfigures routing to any VP(E), ie. CPU, on the old cluster when
moving affinity to a new cluster.
gic_shared_irq_domain_map() moves its update of the IRQ's effective
affinity to before its use of gic_irq_lock_cluster(), to ensure that
operation is on the cluster the IRQ is affine to.
The remaining changes are straightforward use of the
gic_irq_lock_cluster() function to select between local cluster & remote
cluster code-paths when configuring interrupts.
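A minimal sketch of the resulting caller pattern; the wrapper function is
hypothetical, but its body mirrors what the diff below does in, for example,
gic_ack_irq():

static void example_ack_shared(struct irq_data *d)
{
        unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);

        if (gic_irq_lock_cluster(d)) {
                /* Remote cluster: use redirect accessors, then unlock */
                write_gic_redir_wedge(intr);
                mips_cm_unlock_other();
        } else {
                /* Local cluster: plain GIC accessors, no locking needed */
                write_gic_wedge(intr);
        }
}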
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/irqchip/Kconfig | 1 +
drivers/irqchip/irq-mips-gic.c | 161 +++++++++++++++++++++++++++++----
2 files changed, 143 insertions(+), 19 deletions(-)
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 14464716bacb..fb124414c597 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -328,6 +328,7 @@ config KEYSTONE_IRQ
config MIPS_GIC
bool
+ select GENERIC_IRQ_EFFECTIVE_AFF_MASK
select GENERIC_IRQ_IPI if SMP
select IRQ_DOMAIN_HIERARCHY
select MIPS_CM
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 4fd6e316616e..20fa6b12df49 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -111,6 +111,41 @@ static inline void gic_unlock_cluster(void)
gic_unlock_cluster(), \
(cpu) = __gic_with_next_online_cpu(cpu))
+/**
+ * gic_irq_lock_cluster() - Lock redirect block access to IRQ's cluster
+ * @d: struct irq_data corresponding to the interrupt we're interested in
+ *
+ * Locks redirect register block access to the global register block of the GIC
+ * within the remote cluster that the IRQ corresponding to @d is affine to,
+ * returning true when this redirect block setup & locking has been performed.
+ *
+ * If @d is affine to the local cluster then no locking is performed and this
+ * function will return false, indicating to the caller that it should access
+ * the local clusters registers without the overhead of indirection through the
+ * redirect block.
+ *
+ * In summary, if this function returns true then the caller should access GIC
+ * registers using redirect register block accessors & then call
+ * mips_cm_unlock_other() when done. If this function returns false then the
+ * caller should trivially access GIC registers in the local cluster.
+ *
+ * Returns true if locking performed, else false.
+ */
+static bool gic_irq_lock_cluster(struct irq_data *d)
+{
+ unsigned int cpu, cl;
+
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
+ BUG_ON(cpu >= NR_CPUS);
+
+ cl = cpu_cluster(&cpu_data[cpu]);
+ if (cl == cpu_cluster(&current_cpu_data))
+ return false;
+
+ mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+ return true;
+}
+
static void gic_clear_pcpu_masks(unsigned int intr)
{
unsigned int i;
@@ -157,7 +192,12 @@ static void gic_send_ipi(struct irq_data *d, unsigned int cpu)
{
irq_hw_number_t hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(d));
- write_gic_wedge(GIC_WEDGE_RW | hwirq);
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_wedge(GIC_WEDGE_RW | hwirq);
+ mips_cm_unlock_other();
+ } else {
+ write_gic_wedge(GIC_WEDGE_RW | hwirq);
+ }
}
int gic_get_c0_compare_int(void)
@@ -225,7 +265,13 @@ static void gic_mask_irq(struct irq_data *d)
{
unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
- write_gic_rmask(intr);
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_rmask(intr);
+ mips_cm_unlock_other();
+ } else {
+ write_gic_rmask(intr);
+ }
+
gic_clear_pcpu_masks(intr);
}
@@ -234,7 +280,12 @@ static void gic_unmask_irq(struct irq_data *d)
unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
unsigned int cpu;
- write_gic_smask(intr);
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_smask(intr);
+ mips_cm_unlock_other();
+ } else {
+ write_gic_smask(intr);
+ }
gic_clear_pcpu_masks(intr);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
@@ -245,7 +296,12 @@ static void gic_ack_irq(struct irq_data *d)
{
unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
- write_gic_wedge(irq);
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_wedge(irq);
+ mips_cm_unlock_other();
+ } else {
+ write_gic_wedge(irq);
+ }
}
static int gic_set_type(struct irq_data *d, unsigned int type)
@@ -285,9 +341,16 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
break;
}
- change_gic_pol(irq, pol);
- change_gic_trig(irq, trig);
- change_gic_dual(irq, dual);
+ if (gic_irq_lock_cluster(d)) {
+ change_gic_redir_pol(irq, pol);
+ change_gic_redir_trig(irq, trig);
+ change_gic_redir_dual(irq, dual);
+ mips_cm_unlock_other();
+ } else {
+ change_gic_pol(irq, pol);
+ change_gic_trig(irq, trig);
+ change_gic_dual(irq, dual);
+ }
if (trig == GIC_TRIG_EDGE)
irq_set_chip_handler_name_locked(d, &gic_edge_irq_controller,
@@ -305,25 +368,72 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
bool force)
{
unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
+ unsigned int cpu, cl, old_cpu, old_cl;
unsigned long flags;
- unsigned int cpu;
+ /*
+ * The GIC specifies that we can only route an interrupt to one VP(E),
+ * ie. CPU in Linux parlance, at a time. Therefore we always route to
+ * the first online CPU in the mask.
+ */
cpu = cpumask_first_and(cpumask, cpu_online_mask);
if (cpu >= NR_CPUS)
return -EINVAL;
- /* Assumption : cpumask refers to a single CPU */
- raw_spin_lock_irqsave(&gic_lock, flags);
+ old_cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
+ old_cl = cpu_cluster(&cpu_data[old_cpu]);
+ cl = cpu_cluster(&cpu_data[cpu]);
- /* Re-route this IRQ */
- write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+ raw_spin_lock_irqsave(&gic_lock, flags);
- /* Update the pcpu_masks */
- gic_clear_pcpu_masks(irq);
- if (read_gic_mask(irq))
- set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+ /*
+ * If we're moving affinity between clusters, stop routing the
+ * interrupt to any VP(E) in the old cluster.
+ */
+ if (cl != old_cl) {
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_map_vp(irq, 0);
+ mips_cm_unlock_other();
+ } else {
+ write_gic_map_vp(irq, 0);
+ }
+ }
+ /*
+ * Update effective affinity - after this gic_irq_lock_cluster() will
+ * begin operating on the new cluster.
+ */
irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
+ /*
+ * If we're moving affinity between clusters, configure the interrupt
+ * trigger type in the new cluster.
+ */
+ if (cl != old_cl)
+ gic_set_type(d, irqd_get_trigger_type(d));
+
+ /* Route the interrupt to its new VP(E) */
+ if (gic_irq_lock_cluster(d)) {
+ write_gic_redir_map_pin(irq,
+ GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ write_gic_redir_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+
+ /* Update the pcpu_masks */
+ gic_clear_pcpu_masks(irq);
+ if (read_gic_redir_mask(irq))
+ set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+
+ mips_cm_unlock_other();
+ } else {
+ write_gic_map_pin(irq, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+
+ /* Update the pcpu_masks */
+ gic_clear_pcpu_masks(irq);
+ if (read_gic_mask(irq))
+ set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+ }
+
raw_spin_unlock_irqrestore(&gic_lock, flags);
return IRQ_SET_MASK_OK;
@@ -471,11 +581,21 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
unsigned long flags;
data = irq_get_irq_data(virq);
+ irq_data_update_effective_affinity(data, cpumask_of(cpu));
raw_spin_lock_irqsave(&gic_lock, flags);
- write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
- write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
- irq_data_update_effective_affinity(data, cpumask_of(cpu));
+
+ /* Route the interrupt to its VP(E) */
+ if (gic_irq_lock_cluster(data)) {
+ write_gic_redir_map_pin(intr,
+ GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ write_gic_redir_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+ mips_cm_unlock_other();
+ } else {
+ write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+ }
+
raw_spin_unlock_irqrestore(&gic_lock, flags);
return 0;
@@ -651,6 +771,9 @@ static int gic_ipi_domain_alloc(struct irq_domain *d, unsigned int virq,
if (ret)
goto error;
+ /* Set affinity to cpu. */
+ irq_data_update_effective_affinity(irq_get_irq_data(virq + i),
+ cpumask_of(cpu));
ret = irq_set_irq_type(virq + i, IRQ_TYPE_EDGE_RISING);
if (ret)
goto error;
--
2.25.1
* [PATCH v5 07/11] clocksource: mips-gic-timer: Always use cluster 0 counter as clocksource
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (5 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 06/11] irqchip/mips-gic: Multi-cluster support Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 08/11] clocksource: mips-gic-timer: Enable counter when CPUs start Aleksandar Rikalo
` (3 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
In a multi-cluster MIPS system, there are multiple GICs - one in each
cluster - each of which has its independent counter. The counters in
each GIC are not synchronized in any way, so they can drift relative
to one another through the lifetime of the system. This is problematic
for a clock source which ought to be global.
Avoid problems by always accessing cluster 0's counter, using
cross-cluster register access. This adds overhead so it is applied only
on multi-cluster systems.
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/clocksource/mips-gic-timer.c | 39 +++++++++++++++++++++++++++-
1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
index b3ae38f36720..ebf308916fb1 100644
--- a/drivers/clocksource/mips-gic-timer.c
+++ b/drivers/clocksource/mips-gic-timer.c
@@ -165,6 +165,37 @@ static u64 gic_hpt_read(struct clocksource *cs)
return gic_read_count();
}
+static u64 gic_hpt_read_multicluster(struct clocksource *cs)
+{
+ unsigned int hi, hi2, lo;
+ u64 count;
+
+ mips_cm_lock_other(0, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+
+ if (mips_cm_is64) {
+ count = read_gic_redir_counter();
+ goto out;
+ }
+
+ hi = read_gic_redir_counter_32h();
+ while (true) {
+ lo = read_gic_redir_counter_32l();
+
+ /* If hi didn't change then lo didn't wrap & we're done */
+ hi2 = read_gic_redir_counter_32h();
+ if (hi2 == hi)
+ break;
+
+ /* Otherwise, repeat with the latest hi value */
+ hi = hi2;
+ }
+
+ count = (((u64)hi) << 32) + lo;
+out:
+ mips_cm_unlock_other();
+ return count;
+}
+
static struct clocksource gic_clocksource = {
.name = "GIC",
.read = gic_hpt_read,
@@ -199,6 +230,11 @@ static int __init __gic_clocksource_init(void)
/* Calculate a somewhat reasonable rating value. */
gic_clocksource.rating = 200 + gic_frequency / 10000000;
+ if (mips_cps_multicluster_cpus()) {
+ gic_clocksource.read = &gic_hpt_read_multicluster;
+ gic_clocksource.vdso_clock_mode = VDSO_CLOCKMODE_NONE;
+ }
+
ret = clocksource_register_hz(&gic_clocksource, gic_frequency);
if (ret < 0)
pr_warn("Unable to register clocksource\n");
@@ -257,7 +293,8 @@ static int __init gic_clocksource_of_init(struct device_node *node)
* stable CPU frequency or on the platforms with CM3 and CPU frequency
* change performed by the CPC core clocks divider.
*/
- if (mips_cm_revision() >= CM_REV_CM3 || !IS_ENABLED(CONFIG_CPU_FREQ)) {
+ if ((mips_cm_revision() >= CM_REV_CM3 || !IS_ENABLED(CONFIG_CPU_FREQ)) &&
+ !mips_cps_multicluster_cpus()) {
sched_clock_register(mips_cm_is64 ?
gic_read_count_64 : gic_read_count_2x32,
64, gic_frequency);
--
2.25.1
* [PATCH v5 08/11] clocksource: mips-gic-timer: Enable counter when CPUs start
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (6 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 07/11] clocksource: mips-gic-timer: Always use cluster 0 counter as clocksource Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 09/11] MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core Aleksandar Rikalo
` (2 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
In multi-cluster MIPS I6500 systems there is a GIC in each cluster,
each with its own counter. When a cluster powers up the counter will
be stopped, with the COUNTSTOP bit set in the GIC_CONFIG register.
In single cluster systems, it has been fine to clear COUNTSTOP once
in gic_clocksource_of_init() to start the counter. In multi-cluster
systems, this will only have started the counter in the boot cluster,
and any CPUs in other clusters will find their counter stopped which
will break the GIC clock_event_device.
Resolve this by having CPUs clear the COUNTSTOP bit when they come
online, using the existing gic_starting_cpu() CPU hotplug callback. This
will allow CPUs in secondary clusters to ensure that the cluster's GIC
counter is running as expected.
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
drivers/clocksource/mips-gic-timer.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
index ebf308916fb1..4d7659c119e1 100644
--- a/drivers/clocksource/mips-gic-timer.c
+++ b/drivers/clocksource/mips-gic-timer.c
@@ -114,6 +114,9 @@ static void gic_update_frequency(void *data)
static int gic_starting_cpu(unsigned int cpu)
{
+ /* Ensure the GIC counter is running */
+ clear_gic_config(GIC_CONFIG_COUNTSTOP);
+
gic_clockevent_cpu_init(cpu, this_cpu_ptr(&gic_clockevent_device));
return 0;
}
@@ -284,9 +287,6 @@ static int __init gic_clocksource_of_init(struct device_node *node)
pr_warn("Unable to register clock notifier\n");
}
- /* And finally start the counter */
- clear_gic_config(GIC_CONFIG_COUNTSTOP);
-
/*
* It's safe to use the MIPS GIC timer as a sched clock source only if
* its ticks are stable, which is true on either the platforms with
--
2.25.1
* [PATCH v5 09/11] MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (7 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 08/11] clocksource: mips-gic-timer: Enable counter when CPUs start Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 10/11] MIPS: CPS: Introduce struct cluster_boot_config Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 11/11] MIPS: CPS: Boot CPUs in secondary clusters Aleksandar Rikalo
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
The pm-cps code has up until now used per-CPU variables indexed by core,
rather than CPU number, in order to share data amongst sibling CPUs (ie.
VPs/threads in a core). This works fine for single cluster systems, but
with multi-cluster systems a core number is no longer unique in the
system, leading to sharing between CPUs that are not actually siblings.
Avoid this issue by using per-CPU variables as they are more generally
used - ie. access them using CPU numbers rather than core numbers.
Sharing between siblings is then accomplished by:
- Assigning the same pointer to entries for each sibling CPU for the
nc_asm_enter & ready_count variables, which allow this by virtue of
being per-CPU pointers.
- Indexing by the first CPU set in a CPU's cpu_sibling_map in the case
of pm_barrier, for which we can't use the previous approach because
the per-CPU variable is not a pointer.
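A condensed sketch of the sharing scheme described above, lifted from
cps_pm_online_cpu() and cps_pm_enter_state() in the diff below ('entry_fn',
'core_rc', 'state' and 'barrier' are locals there):

/* Pointer-valued per-CPU variables: give every sibling the same pointer */
for_each_cpu(sibling, &cpu_sibling_map[cpu])
        per_cpu(nc_asm_enter, sibling)[state] = entry_fn;

for_each_cpu(sibling, &cpu_sibling_map[cpu])
        per_cpu(ready_count, sibling) = core_rc;

/* pm_barrier is not a pointer, so siblings all index it via the first
   CPU in their cpu_sibling_map */
barrier = &per_cpu(pm_barrier, cpumask_first(&cpu_sibling_map[cpu]));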
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
arch/mips/kernel/pm-cps.c | 30 +++++++++++++++++-------------
1 file changed, 17 insertions(+), 13 deletions(-)
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index d09ca77e624d..9369a8dc385e 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -57,10 +57,7 @@ static DEFINE_PER_CPU_ALIGNED(u32*, ready_count);
/* Indicates online CPUs coupled with the current CPU */
static DEFINE_PER_CPU_ALIGNED(cpumask_t, online_coupled);
-/*
- * Used to synchronize entry to deep idle states. Actually per-core rather
- * than per-CPU.
- */
+/* Used to synchronize entry to deep idle states */
static DEFINE_PER_CPU_ALIGNED(atomic_t, pm_barrier);
/* Saved CPU state across the CPS_PM_POWER_GATED state */
@@ -112,9 +109,10 @@ int cps_pm_enter_state(enum cps_pm_state state)
cps_nc_entry_fn entry;
struct core_boot_config *core_cfg;
struct vpe_boot_config *vpe_cfg;
+ atomic_t *barrier;
/* Check that there is an entry function for this state */
- entry = per_cpu(nc_asm_enter, core)[state];
+ entry = per_cpu(nc_asm_enter, cpu)[state];
if (!entry)
return -EINVAL;
@@ -150,7 +148,7 @@ int cps_pm_enter_state(enum cps_pm_state state)
smp_mb__after_atomic();
/* Create a non-coherent mapping of the core ready_count */
- core_ready_count = per_cpu(ready_count, core);
+ core_ready_count = per_cpu(ready_count, cpu);
nc_addr = kmap_noncoherent(virt_to_page(core_ready_count),
(unsigned long)core_ready_count);
nc_addr += ((unsigned long)core_ready_count & ~PAGE_MASK);
@@ -158,7 +156,8 @@ int cps_pm_enter_state(enum cps_pm_state state)
/* Ensure ready_count is zero-initialised before the assembly runs */
WRITE_ONCE(*nc_core_ready_count, 0);
- coupled_barrier(&per_cpu(pm_barrier, core), online);
+ barrier = &per_cpu(pm_barrier, cpumask_first(&cpu_sibling_map[cpu]));
+ coupled_barrier(barrier, online);
/* Run the generated entry code */
left = entry(online, nc_core_ready_count);
@@ -629,12 +628,14 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
static int cps_pm_online_cpu(unsigned int cpu)
{
- enum cps_pm_state state;
- unsigned core = cpu_core(&cpu_data[cpu]);
+ unsigned int sibling, core;
void *entry_fn, *core_rc;
+ enum cps_pm_state state;
+
+ core = cpu_core(&cpu_data[cpu]);
for (state = CPS_PM_NC_WAIT; state < CPS_PM_STATE_COUNT; state++) {
- if (per_cpu(nc_asm_enter, core)[state])
+ if (per_cpu(nc_asm_enter, cpu)[state])
continue;
if (!test_bit(state, state_support))
continue;
@@ -646,16 +647,19 @@ static int cps_pm_online_cpu(unsigned int cpu)
clear_bit(state, state_support);
}
- per_cpu(nc_asm_enter, core)[state] = entry_fn;
+ for_each_cpu(sibling, &cpu_sibling_map[cpu])
+ per_cpu(nc_asm_enter, sibling)[state] = entry_fn;
}
- if (!per_cpu(ready_count, core)) {
+ if (!per_cpu(ready_count, cpu)) {
core_rc = kmalloc(sizeof(u32), GFP_KERNEL);
if (!core_rc) {
pr_err("Failed allocate core %u ready_count\n", core);
return -ENOMEM;
}
- per_cpu(ready_count, core) = core_rc;
+
+ for_each_cpu(sibling, &cpu_sibling_map[cpu])
+ per_cpu(ready_count, sibling) = core_rc;
}
return 0;
--
2.25.1
* [PATCH v5 10/11] MIPS: CPS: Introduce struct cluster_boot_config
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (8 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 09/11] MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 11/11] MIPS: CPS: Boot CPUs in secondary clusters Aleksandar Rikalo
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
In preparation for supporting multi-cluster systems, introduce a struct
cluster_boot_config as an extra layer in the boot configuration
maintained by the MIPS Coherent Processing System (CPS) SMP
implementation. For now only one struct cluster_boot_config will be
allocated & we'll simply defererence its core_config field to find the
struct core_boot_config array which can be used to boot as usual.
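The resulting lookup chain for the current CPU then looks roughly like this,
mirroring the accesses added to pm-cps.c and smp-cps.c below:

struct cluster_boot_config *cluster_cfg;
struct core_boot_config *core_cfg;
struct vpe_boot_config *vpe_cfg;

/* cluster -> core -> VPE, all indexed from the current CPU's cpu_data */
cluster_cfg = &mips_cps_cluster_bootcfg[cpu_cluster(&current_cpu_data)];
core_cfg = &cluster_cfg->core_config[cpu_core(&current_cpu_data)];
vpe_cfg = &core_cfg->vpe_config[cpu_vpe_id(&current_cpu_data)];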
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
arch/mips/include/asm/smp-cps.h | 6 ++-
arch/mips/kernel/asm-offsets.c | 3 ++
arch/mips/kernel/cps-vec.S | 19 ++++++--
arch/mips/kernel/pm-cps.c | 5 +-
arch/mips/kernel/smp-cps.c | 82 +++++++++++++++++++++------------
5 files changed, 81 insertions(+), 34 deletions(-)
diff --git a/arch/mips/include/asm/smp-cps.h b/arch/mips/include/asm/smp-cps.h
index ab94e50f62b8..a629e948a6fd 100644
--- a/arch/mips/include/asm/smp-cps.h
+++ b/arch/mips/include/asm/smp-cps.h
@@ -22,7 +22,11 @@ struct core_boot_config {
struct vpe_boot_config *vpe_config;
};
-extern struct core_boot_config *mips_cps_core_bootcfg;
+struct cluster_boot_config {
+ struct core_boot_config *core_config;
+};
+
+extern struct cluster_boot_config *mips_cps_cluster_bootcfg;
extern void mips_cps_core_boot(int cca, void __iomem *gcr_base);
extern void mips_cps_core_init(void);
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index cb1045ebab06..b29944160b28 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -404,6 +404,9 @@ void output_cps_defines(void)
{
COMMENT(" MIPS CPS offsets. ");
+ OFFSET(CLUSTERBOOTCFG_CORECONFIG, cluster_boot_config, core_config);
+ DEFINE(CLUSTERBOOTCFG_SIZE, sizeof(struct cluster_boot_config));
+
OFFSET(COREBOOTCFG_VPEMASK, core_boot_config, vpe_mask);
OFFSET(COREBOOTCFG_VPECONFIG, core_boot_config, vpe_config);
DEFINE(COREBOOTCFG_SIZE, sizeof(struct core_boot_config));
diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S
index f876309130ad..2ae7034a3d5c 100644
--- a/arch/mips/kernel/cps-vec.S
+++ b/arch/mips/kernel/cps-vec.S
@@ -19,6 +19,10 @@
#define GCR_CPC_BASE_OFS 0x0088
#define GCR_CL_COHERENCE_OFS 0x2008
#define GCR_CL_ID_OFS 0x2028
+#define CM3_GCR_Cx_ID_CLUSTER_SHF 8
+#define CM3_GCR_Cx_ID_CLUSTER_MSK (0xff << 8)
+#define CM3_GCR_Cx_ID_CORENUM_SHF 0
+#define CM3_GCR_Cx_ID_CORENUM_MSK (0xff << 0)
#define CPC_CL_VC_STOP_OFS 0x2020
#define CPC_CL_VC_RUN_OFS 0x2028
@@ -271,12 +275,21 @@ LEAF(mips_cps_core_init)
*/
LEAF(mips_cps_get_bootcfg)
/* Calculate a pointer to this cores struct core_boot_config */
+ PTR_LA v0, mips_cps_cluster_bootcfg
+ PTR_L v0, 0(v0)
lw t0, GCR_CL_ID_OFS(s1)
+#ifdef CONFIG_CPU_MIPSR6
+ ext t1, t0, CM3_GCR_Cx_ID_CLUSTER_SHF, 8
+ li t2, CLUSTERBOOTCFG_SIZE
+ mul t1, t1, t2
+ PTR_ADDU \
+ v0, v0, t1
+#endif
+ PTR_L v0, CLUSTERBOOTCFG_CORECONFIG(v0)
+ andi t0, t0, CM3_GCR_Cx_ID_CORENUM_MSK
li t1, COREBOOTCFG_SIZE
mul t0, t0, t1
- PTR_LA t1, mips_cps_core_bootcfg
- PTR_L t1, 0(t1)
- PTR_ADDU v0, t0, t1
+ PTR_ADDU v0, v0, t0
/* Calculate this VPEs ID. If the core doesn't support MT use 0 */
li t9, 0
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index 9369a8dc385e..3de0e05e0511 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -101,12 +101,14 @@ static void coupled_barrier(atomic_t *a, unsigned online)
int cps_pm_enter_state(enum cps_pm_state state)
{
unsigned cpu = smp_processor_id();
+ unsigned int cluster = cpu_cluster(&current_cpu_data);
unsigned core = cpu_core(&current_cpu_data);
unsigned online, left;
cpumask_t *coupled_mask = this_cpu_ptr(&online_coupled);
u32 *core_ready_count, *nc_core_ready_count;
void *nc_addr;
cps_nc_entry_fn entry;
+ struct cluster_boot_config *cluster_cfg;
struct core_boot_config *core_cfg;
struct vpe_boot_config *vpe_cfg;
atomic_t *barrier;
@@ -136,7 +138,8 @@ int cps_pm_enter_state(enum cps_pm_state state)
if (!mips_cps_smp_in_use())
return -EINVAL;
- core_cfg = &mips_cps_core_bootcfg[core];
+ cluster_cfg = &mips_cps_cluster_bootcfg[cluster];
+ core_cfg = &cluster_cfg->core_config[core];
vpe_cfg = &core_cfg->vpe_config[cpu_vpe_id(&current_cpu_data)];
vpe_cfg->pc = (unsigned long)mips_cps_pm_restore;
vpe_cfg->gp = (unsigned long)current_thread_info();
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index 9cc087dd1c19..1954c9e912b2 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -40,7 +40,7 @@ static DECLARE_BITMAP(core_power, NR_CPUS);
static uint32_t core_entry_reg;
static phys_addr_t cps_vec_pa;
-struct core_boot_config *mips_cps_core_bootcfg;
+struct cluster_boot_config *mips_cps_cluster_bootcfg;
static unsigned __init core_vpe_count(unsigned int cluster, unsigned core)
{
@@ -212,8 +212,10 @@ static void __init cps_smp_setup(void)
static void __init cps_prepare_cpus(unsigned int max_cpus)
{
- unsigned ncores, core_vpes, c, cca;
+ unsigned int nclusters, ncores, core_vpes, c, cl, cca;
bool cca_unsuitable, cores_limited;
+ struct cluster_boot_config *cluster_bootcfg;
+ struct core_boot_config *core_bootcfg;
mips_mt_set_cpuoptions();
@@ -255,40 +257,54 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
setup_cps_vecs();
- /* Allocate core boot configuration structs */
- ncores = mips_cps_numcores(0);
- mips_cps_core_bootcfg = kcalloc(ncores, sizeof(*mips_cps_core_bootcfg),
- GFP_KERNEL);
- if (!mips_cps_core_bootcfg) {
- pr_err("Failed to allocate boot config for %u cores\n", ncores);
- goto err_out;
- }
+ /* Allocate cluster boot configuration structs */
+ nclusters = mips_cps_numclusters();
+ mips_cps_cluster_bootcfg = kcalloc(nclusters,
+ sizeof(*mips_cps_cluster_bootcfg),
+ GFP_KERNEL);
- /* Allocate VPE boot configuration structs */
- for (c = 0; c < ncores; c++) {
- core_vpes = core_vpe_count(0, c);
- mips_cps_core_bootcfg[c].vpe_config = kcalloc(core_vpes,
- sizeof(*mips_cps_core_bootcfg[c].vpe_config),
- GFP_KERNEL);
- if (!mips_cps_core_bootcfg[c].vpe_config) {
- pr_err("Failed to allocate %u VPE boot configs\n",
- core_vpes);
+ for (cl = 0; cl < nclusters; cl++) {
+ /* Allocate core boot configuration structs */
+ ncores = mips_cps_numcores(cl);
+ core_bootcfg = kcalloc(ncores, sizeof(*core_bootcfg),
+ GFP_KERNEL);
+ if (!core_bootcfg)
goto err_out;
+ mips_cps_cluster_bootcfg[cl].core_config = core_bootcfg;
+
+ /* Allocate VPE boot configuration structs */
+ for (c = 0; c < ncores; c++) {
+ core_vpes = core_vpe_count(cl, c);
+ core_bootcfg[c].vpe_config = kcalloc(core_vpes,
+ sizeof(*core_bootcfg[c].vpe_config),
+ GFP_KERNEL);
+ if (!core_bootcfg[c].vpe_config)
+ goto err_out;
}
}
/* Mark this CPU as booted */
- atomic_set(&mips_cps_core_bootcfg[cpu_core(&current_cpu_data)].vpe_mask,
- 1 << cpu_vpe_id(&current_cpu_data));
+ cl = cpu_cluster(&current_cpu_data);
+ c = cpu_core(&current_cpu_data);
+ cluster_bootcfg = &mips_cps_cluster_bootcfg[cl];
+ core_bootcfg = &cluster_bootcfg->core_config[c];
+ atomic_set(&core_bootcfg->vpe_mask, 1 << cpu_vpe_id(&current_cpu_data));
return;
err_out:
/* Clean up allocations */
- if (mips_cps_core_bootcfg) {
- for (c = 0; c < ncores; c++)
- kfree(mips_cps_core_bootcfg[c].vpe_config);
- kfree(mips_cps_core_bootcfg);
- mips_cps_core_bootcfg = NULL;
+ if (mips_cps_cluster_bootcfg) {
+ for (cl = 0; cl < nclusters; cl++) {
+ cluster_bootcfg = &mips_cps_cluster_bootcfg[cl];
+ ncores = mips_cps_numcores(cl);
+ for (c = 0; c < ncores; c++) {
+ core_bootcfg = &cluster_bootcfg->core_config[c];
+ kfree(core_bootcfg->vpe_config);
+ }
+ kfree(mips_cps_cluster_bootcfg[cl].core_config);
+ }
+ kfree(mips_cps_cluster_bootcfg);
+ mips_cps_cluster_bootcfg = NULL;
}
/* Effectively disable SMP by declaring CPUs not present */
@@ -373,17 +389,23 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
static void remote_vpe_boot(void *dummy)
{
+ unsigned int cluster = cpu_cluster(&current_cpu_data);
unsigned core = cpu_core(&current_cpu_data);
- struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
+ struct cluster_boot_config *cluster_cfg =
+ &mips_cps_cluster_bootcfg[cluster];
+ struct core_boot_config *core_cfg = &cluster_cfg->core_config[core];
mips_cps_boot_vpes(core_cfg, cpu_vpe_id(&current_cpu_data));
}
static int cps_boot_secondary(int cpu, struct task_struct *idle)
{
+ unsigned int cluster = cpu_cluster(&cpu_data[cpu]);
unsigned core = cpu_core(&cpu_data[cpu]);
unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
- struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
+ struct cluster_boot_config *cluster_cfg =
+ &mips_cps_cluster_bootcfg[cluster];
+ struct core_boot_config *core_cfg = &cluster_cfg->core_config[core];
struct vpe_boot_config *vpe_cfg = &core_cfg->vpe_config[vpe_id];
unsigned int remote;
int err;
@@ -541,12 +563,14 @@ static void cps_kexec_nonboot_cpu(void)
static int cps_cpu_disable(void)
{
unsigned cpu = smp_processor_id();
+ struct cluster_boot_config *cluster_cfg;
struct core_boot_config *core_cfg;
if (!cps_pm_support_state(CPS_PM_POWER_GATED))
return -EINVAL;
- core_cfg = &mips_cps_core_bootcfg[cpu_core(&current_cpu_data)];
+ cluster_cfg = &mips_cps_cluster_bootcfg[cpu_cluster(&current_cpu_data)];
+ core_cfg = &cluster_cfg->core_config[cpu_core(&current_cpu_data)];
atomic_sub(1 << cpu_vpe_id(&current_cpu_data), &core_cfg->vpe_mask);
smp_mb__after_atomic();
set_cpu_online(cpu, false);
--
2.25.1
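A note for readers skimming the hunks above: every site that used to index
mips_cps_core_bootcfg[core] directly now walks a per-cluster table first. A
minimal sketch of the new lookup, condensed from the cps_cpu_disable() hunk
(nothing beyond what the patch already does):

	struct cluster_boot_config *cluster_cfg;
	struct core_boot_config *core_cfg;

	/* Index the cluster table first, then the core entry inside it */
	cluster_cfg = &mips_cps_cluster_bootcfg[cpu_cluster(&current_cpu_data)];
	core_cfg = &cluster_cfg->core_config[cpu_core(&current_cpu_data)];

The assembly in cps-vec.S performs the same two-step walk, scaled by
CLUSTERBOOTCFG_SIZE and COREBOOTCFG_SIZE.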
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v5 11/11] MIPS: CPS: Boot CPUs in secondary clusters
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
` (9 preceding siblings ...)
2024-07-11 8:26 ` [PATCH v5 10/11] MIPS: CPS: Introduce struct cluster_boot_config Aleksandar Rikalo
@ 2024-07-11 8:26 ` Aleksandar Rikalo
10 siblings, 0 replies; 14+ messages in thread
From: Aleksandar Rikalo @ 2024-07-11 8:26 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
From: Paul Burton <paulburton@kernel.org>
Probe for & boot CPUs (cores & VPs) in secondary clusters (ie. not the
cluster that began booting Linux) when they are present in systems with
CM 3.5 or higher.
Signed-off-by: Paul Burton <paulburton@kernel.org>
Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
---
arch/mips/include/asm/mips-cm.h | 18 +++
arch/mips/include/asm/smp-cps.h | 1 +
arch/mips/kernel/mips-cm.c | 4 +-
arch/mips/kernel/smp-cps.c | 208 ++++++++++++++++++++++++++++----
4 files changed, 207 insertions(+), 24 deletions(-)
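As an orientation aid before the diff: for a core in a cluster other than
the boot cluster, the reworked boot_core() roughly follows the sequence
sketched below (condensed from the patch; lock ordering, timeouts and error
handling omitted):

	if (cluster != cpu_cluster(&current_cpu_data) &&
	    bitmap_empty(cluster_cfg->core_power, ncores)) {
		/* Power up the remote cluster's CM via the CPC, wait for U5 */
		power_up_other_cluster(cluster);

		mips_cm_lock_other(cluster, core, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
		/* Mirror GCR/CPC/GIC bases & L2 setup, then initialise the L2 */
		write_gcr_redir_base(read_gcr_base());
		init_cluster_l2();
		/* Point the remote cluster at the common boot vector */
		write_gcr_redir_bev_base(core_entry_reg);
		mips_cm_unlock_other();
	}
	/* ...then grant the core GCR access and reset it as on the boot cluster */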
diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
index c2930a75b7e4..65a46b35a218 100644
--- a/arch/mips/include/asm/mips-cm.h
+++ b/arch/mips/include/asm/mips-cm.h
@@ -251,6 +251,12 @@ GCR_ACCESSOR_RW(32, 0x130, l2_config)
GCR_ACCESSOR_RO(32, 0x150, sys_config2)
#define CM_GCR_SYS_CONFIG2_MAXVPW GENMASK(3, 0)
+/* GCR_L2-RAM_CONFIG - Configuration & status of L2 cache RAMs */
+GCR_ACCESSOR_RW(64, 0x240, l2_ram_config)
+#define CM_GCR_L2_RAM_CONFIG_PRESENT BIT(31)
+#define CM_GCR_L2_RAM_CONFIG_HCI_DONE BIT(30)
+#define CM_GCR_L2_RAM_CONFIG_HCI_SUPPORTED BIT(29)
+
/* GCR_L2_PFT_CONTROL - Controls hardware L2 prefetching */
GCR_ACCESSOR_RW(32, 0x300, l2_pft_control)
#define CM_GCR_L2_PFT_CONTROL_PAGEMASK GENMASK(31, 12)
@@ -262,6 +268,18 @@ GCR_ACCESSOR_RW(32, 0x308, l2_pft_control_b)
#define CM_GCR_L2_PFT_CONTROL_B_CEN BIT(8)
#define CM_GCR_L2_PFT_CONTROL_B_PORTID GENMASK(7, 0)
+/* GCR_L2_TAG_ADDR - Access addresses in L2 cache tags */
+GCR_ACCESSOR_RW(64, 0x600, l2_tag_addr)
+
+/* GCR_L2_TAG_STATE - Access L2 cache tag state */
+GCR_ACCESSOR_RW(64, 0x608, l2_tag_state)
+
+/* GCR_L2_DATA - Access data in L2 cache lines */
+GCR_ACCESSOR_RW(64, 0x610, l2_data)
+
+/* GCR_L2_ECC - Access ECC information from L2 cache lines */
+GCR_ACCESSOR_RW(64, 0x618, l2_ecc)
+
/* GCR_L2SM_COP - L2 cache op state machine control */
GCR_ACCESSOR_RW(32, 0x620, l2sm_cop)
#define CM_GCR_L2SM_COP_PRESENT BIT(31)
diff --git a/arch/mips/include/asm/smp-cps.h b/arch/mips/include/asm/smp-cps.h
index a629e948a6fd..10d3ebd890cb 100644
--- a/arch/mips/include/asm/smp-cps.h
+++ b/arch/mips/include/asm/smp-cps.h
@@ -23,6 +23,7 @@ struct core_boot_config {
};
struct cluster_boot_config {
+ unsigned long *core_power;
struct core_boot_config *core_config;
};
diff --git a/arch/mips/kernel/mips-cm.c b/arch/mips/kernel/mips-cm.c
index 3eb2cfb893e1..9854bc2b6895 100644
--- a/arch/mips/kernel/mips-cm.c
+++ b/arch/mips/kernel/mips-cm.c
@@ -308,7 +308,9 @@ void mips_cm_lock_other(unsigned int cluster, unsigned int core,
FIELD_PREP(CM3_GCR_Cx_OTHER_VP, vp);
if (cm_rev >= CM_REV_CM3_5) {
- val |= CM_GCR_Cx_OTHER_CLUSTER_EN;
+ if (cluster != cpu_cluster(&current_cpu_data))
+ val |= CM_GCR_Cx_OTHER_CLUSTER_EN;
+ val |= CM_GCR_Cx_OTHER_GIC_EN;
val |= FIELD_PREP(CM_GCR_Cx_OTHER_CLUSTER, cluster);
val |= FIELD_PREP(CM_GCR_Cx_OTHER_BLOCK, block);
} else {
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index 1954c9e912b2..ae814fdb930c 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -36,12 +36,56 @@ enum label_id {
UASM_L_LA(_not_nmi)
-static DECLARE_BITMAP(core_power, NR_CPUS);
static uint32_t core_entry_reg;
static phys_addr_t cps_vec_pa;
struct cluster_boot_config *mips_cps_cluster_bootcfg;
+static void power_up_other_cluster(unsigned int cluster)
+{
+ u32 stat, seq_state;
+ unsigned int timeout;
+
+ mips_cm_lock_other(cluster, CM_GCR_Cx_OTHER_CORE_CM, 0,
+ CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+ stat = read_cpc_co_stat_conf();
+ mips_cm_unlock_other();
+
+ seq_state = stat & CPC_Cx_STAT_CONF_SEQSTATE;
+ seq_state >>= __ffs(CPC_Cx_STAT_CONF_SEQSTATE);
+ if (seq_state == CPC_Cx_STAT_CONF_SEQSTATE_U5)
+ return;
+
+ /* Set endianness & power up the CM */
+ mips_cm_lock_other(cluster, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+ write_cpc_redir_sys_config(IS_ENABLED(CONFIG_CPU_BIG_ENDIAN));
+ write_cpc_redir_pwrup_ctl(1);
+ mips_cm_unlock_other();
+
+ /* Wait for the CM to start up */
+ timeout = 1000;
+ mips_cm_lock_other(cluster, CM_GCR_Cx_OTHER_CORE_CM, 0,
+ CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+ while (1) {
+ stat = read_cpc_co_stat_conf();
+ seq_state = stat & CPC_Cx_STAT_CONF_SEQSTATE;
+ seq_state >>= __ffs(CPC_Cx_STAT_CONF_SEQSTATE);
+ if (seq_state == CPC_Cx_STAT_CONF_SEQSTATE_U5)
+ break;
+
+ if (timeout) {
+ mdelay(1);
+ timeout--;
+ } else {
+ pr_warn("Waiting for cluster %u CM to power up... STAT_CONF=0x%x\n",
+ cluster, stat);
+ mdelay(1000);
+ }
+ }
+
+ mips_cm_unlock_other();
+}
+
static unsigned __init core_vpe_count(unsigned int cluster, unsigned core)
{
return min(smp_max_threads, mips_cps_numvps(cluster, core));
@@ -152,6 +196,9 @@ static void __init cps_smp_setup(void)
pr_cont(",");
pr_cont("{");
+ if (mips_cm_revision() >= CM_REV_CM3_5)
+ power_up_other_cluster(cl);
+
ncores = mips_cps_numcores(cl);
for (c = 0; c < ncores; c++) {
core_vpes = core_vpe_count(cl, c);
@@ -179,8 +226,8 @@ static void __init cps_smp_setup(void)
/* Indicate present CPUs (CPU being synonymous with VPE) */
for (v = 0; v < min_t(unsigned, nvpes, NR_CPUS); v++) {
- set_cpu_possible(v, cpu_cluster(&cpu_data[v]) == 0);
- set_cpu_present(v, cpu_cluster(&cpu_data[v]) == 0);
+ set_cpu_possible(v, true);
+ set_cpu_present(v, true);
__cpu_number_map[v] = v;
__cpu_logical_map[v] = v;
}
@@ -188,9 +235,6 @@ static void __init cps_smp_setup(void)
/* Set a coherent default CCA (CWB) */
change_c0_config(CONF_CM_CMASK, 0x5);
- /* Core 0 is powered up (we're running on it) */
- bitmap_set(core_power, 0, 1);
-
/* Initialise core 0 */
mips_cps_core_init();
@@ -272,6 +316,10 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
goto err_out;
mips_cps_cluster_bootcfg[cl].core_config = core_bootcfg;
+ mips_cps_cluster_bootcfg[cl].core_power =
+ kcalloc(BITS_TO_LONGS(ncores), sizeof(unsigned long),
+ GFP_KERNEL);
+
/* Allocate VPE boot configuration structs */
for (c = 0; c < ncores; c++) {
core_vpes = core_vpe_count(cl, c);
@@ -283,11 +331,12 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
}
}
- /* Mark this CPU as booted */
+ /* Mark this CPU as powered up & booted */
cl = cpu_cluster(&current_cpu_data);
c = cpu_core(&current_cpu_data);
cluster_bootcfg = &mips_cps_cluster_bootcfg[cl];
core_bootcfg = &cluster_bootcfg->core_config[c];
+ bitmap_set(cluster_bootcfg->core_power, cpu_core(&current_cpu_data), 1);
atomic_set(&core_bootcfg->vpe_mask, 1 << cpu_vpe_id(&current_cpu_data));
return;
@@ -315,13 +364,118 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
}
}
-static void boot_core(unsigned int core, unsigned int vpe_id)
+static void init_cluster_l2(void)
{
- u32 stat, seq_state;
- unsigned timeout;
+ u32 l2_cfg, l2sm_cop, result;
+
+ while (1) {
+ l2_cfg = read_gcr_redir_l2_ram_config();
+
+ /* If HCI is not supported, use the state machine below */
+ if (!(l2_cfg & CM_GCR_L2_RAM_CONFIG_PRESENT))
+ break;
+ if (!(l2_cfg & CM_GCR_L2_RAM_CONFIG_HCI_SUPPORTED))
+ break;
+
+ /* If the HCI_DONE bit is set, we're finished */
+ if (l2_cfg & CM_GCR_L2_RAM_CONFIG_HCI_DONE)
+ return;
+ }
+
+ l2sm_cop = read_gcr_redir_l2sm_cop();
+ if (WARN(!(l2sm_cop & CM_GCR_L2SM_COP_PRESENT),
+ "L2 init not supported on this system yet"))
+ return;
+
+ /* Clear L2 tag registers */
+ write_gcr_redir_l2_tag_state(0);
+ write_gcr_redir_l2_ecc(0);
+
+ /* Ensure the L2 tag writes complete before the state machine starts */
+ mb();
+
+ /* Wait for the L2 state machine to be idle */
+ do {
+ l2sm_cop = read_gcr_redir_l2sm_cop();
+ } while (l2sm_cop & CM_GCR_L2SM_COP_RUNNING);
+
+ /* Start a store tag operation */
+ l2sm_cop = CM_GCR_L2SM_COP_TYPE_IDX_STORETAG;
+ l2sm_cop <<= __ffs(CM_GCR_L2SM_COP_TYPE);
+ l2sm_cop |= CM_GCR_L2SM_COP_CMD_START;
+ write_gcr_redir_l2sm_cop(l2sm_cop);
+
+ /* Ensure the state machine starts before we poll for completion */
+ mb();
+
+ /* Wait for the operation to be complete */
+ do {
+ l2sm_cop = read_gcr_redir_l2sm_cop();
+ result = l2sm_cop & CM_GCR_L2SM_COP_RESULT;
+ result >>= __ffs(CM_GCR_L2SM_COP_RESULT);
+ } while (!result);
+
+ WARN(result != CM_GCR_L2SM_COP_RESULT_DONE_OK,
+ "L2 state machine failed cache init with error %u\n", result);
+}
+
+static void boot_core(unsigned int cluster, unsigned int core,
+ unsigned int vpe_id)
+{
+ struct cluster_boot_config *cluster_cfg;
+ u32 access, stat, seq_state;
+ unsigned int timeout, ncores;
+
+ cluster_cfg = &mips_cps_cluster_bootcfg[cluster];
+ ncores = mips_cps_numcores(cluster);
+
+ if ((cluster != cpu_cluster(&current_cpu_data)) &&
+ bitmap_empty(cluster_cfg->core_power, ncores)) {
+ power_up_other_cluster(cluster);
+
+ mips_cm_lock_other(cluster, core, 0,
+ CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+
+ /* Ensure cluster GCRs are where we expect */
+ write_gcr_redir_base(read_gcr_base());
+ write_gcr_redir_cpc_base(read_gcr_cpc_base());
+ write_gcr_redir_gic_base(read_gcr_gic_base());
+
+ init_cluster_l2();
+
+ /* Mirror L2 configuration */
+ write_gcr_redir_l2_only_sync_base(read_gcr_l2_only_sync_base());
+ write_gcr_redir_l2_pft_control(read_gcr_l2_pft_control());
+ write_gcr_redir_l2_pft_control_b(read_gcr_l2_pft_control_b());
+
+ /* Mirror ECC/parity setup */
+ write_gcr_redir_err_control(read_gcr_err_control());
+
+ /* Set BEV base */
+ write_gcr_redir_bev_base(core_entry_reg);
+
+ mips_cm_unlock_other();
+ }
+
+ if (cluster != cpu_cluster(&current_cpu_data)) {
+ mips_cm_lock_other(cluster, core, 0,
+ CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+
+ /* Ensure the core can access the GCRs */
+ access = read_gcr_redir_access();
+ access |= BIT(core);
+ write_gcr_redir_access(access);
+
+ mips_cm_unlock_other();
+ } else {
+ /* Ensure the core can access the GCRs */
+ access = read_gcr_access();
+ access |= BIT(core);
+ write_gcr_access(access);
+ }
/* Select the appropriate core */
- mips_cm_lock_other(0, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+ mips_cm_lock_other(cluster, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
/* Set its reset vector */
write_gcr_co_reset_base(core_entry_reg);
@@ -332,9 +486,6 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
/* Start it with the legacy memory map and exception base */
write_gcr_co_reset_ext_base(CM_GCR_Cx_RESET_EXT_BASE_UEB);
- /* Ensure the core can access the GCRs */
- set_gcr_access(1 << core);
-
if (mips_cpc_present()) {
/* Reset the core */
mips_cpc_lock_other(core);
@@ -384,7 +535,17 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
mips_cm_unlock_other();
/* The core is now powered up */
- bitmap_set(core_power, core, 1);
+ bitmap_set(cluster_cfg->core_power, core, 1);
+
+ /*
+ * Restore CM_PWRUP=0 so that the CM can power down if all the cores in
+ * the cluster do (eg. if they're all removed via hotplug).
+ */
+ if (mips_cm_revision() >= CM_REV_CM3_5) {
+ mips_cm_lock_other(cluster, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+ write_cpc_redir_pwrup_ctl(0);
+ mips_cm_unlock_other();
+ }
}
static void remote_vpe_boot(void *dummy)
@@ -410,10 +571,6 @@ static int cps_boot_secondary(int cpu, struct task_struct *idle)
unsigned int remote;
int err;
- /* We don't yet support booting CPUs in other clusters */
- if (cpu_cluster(&cpu_data[cpu]) != cpu_cluster(&raw_current_cpu_data))
- return -ENOSYS;
-
vpe_cfg->pc = (unsigned long)&smp_bootstrap;
vpe_cfg->sp = __KSTK_TOS(idle);
vpe_cfg->gp = (unsigned long)task_thread_info(idle);
@@ -422,14 +579,15 @@ static int cps_boot_secondary(int cpu, struct task_struct *idle)
preempt_disable();
- if (!test_bit(core, core_power)) {
+ if (!test_bit(core, cluster_cfg->core_power)) {
/* Boot a VPE on a powered down core */
- boot_core(core, vpe_id);
+ boot_core(cluster, core, vpe_id);
goto out;
}
if (cpu_has_vp) {
- mips_cm_lock_other(0, core, vpe_id, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+ mips_cm_lock_other(cluster, core, vpe_id,
+ CM_GCR_Cx_OTHER_BLOCK_LOCAL);
write_gcr_co_reset_base(core_entry_reg);
mips_cm_unlock_other();
}
@@ -636,11 +794,15 @@ static void cps_cpu_die(unsigned int cpu) { }
static void cps_cleanup_dead_cpu(unsigned cpu)
{
+ unsigned int cluster = cpu_cluster(&cpu_data[cpu]);
unsigned core = cpu_core(&cpu_data[cpu]);
unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]);
ktime_t fail_time;
unsigned stat;
int err;
+ struct cluster_boot_config *cluster_cfg;
+
+ cluster_cfg = &mips_cps_cluster_bootcfg[cluster];
/*
* Now wait for the CPU to actually offline. Without doing this that
@@ -692,7 +854,7 @@ static void cps_cleanup_dead_cpu(unsigned cpu)
} while (1);
/* Indicate the core is powered off */
- bitmap_clear(core_power, core, 1);
+ bitmap_clear(cluster_cfg->core_power, core, 1);
} else if (cpu_has_mipsmt) {
/*
* Have a CPU with access to the offlined CPUs registers wait
--
2.25.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic()
2024-07-11 8:26 ` [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic() Aleksandar Rikalo
@ 2024-07-11 11:52 ` Thomas Gleixner
0 siblings, 0 replies; 14+ messages in thread
From: Thomas Gleixner @ 2024-07-11 11:52 UTC (permalink / raw)
To: Aleksandar Rikalo, Thomas Bogendoerfer
Cc: Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Tiezhu Yang
On Thu, Jul 11 2024 at 10:26, Aleksandar Rikalo wrote:
> From: Paul Burton <paulburton@kernel.org>
>
> Parts of code in the MIPS GIC driver operate on the GIC local register
> block for each online CPU, accessing each via the GIC's other/redirect
> register block.
>
> Abstract the process of iterating over online CPUs & configuring the
> other/redirect region to access their registers through a new
> for_each_online_cpu_gic() macro.
>
> Signed-off-by: Paul Burton <paulburton@kernel.org>
> Signed-off-by: Chao-ying Fu <cfu@wavecomp.com>
> Signed-off-by: Dragan Mladjenovic <dragan.mladjenovic@syrmia.com>
> Signed-off-by: Aleksandar Rikalo <arikalo@gmail.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
I have nothing to do with this patch. I merely suggested changes to you
in the review.
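For context, a call site for the macro described in the quoted commit
message has roughly the following shape; gic_lock and intr stand in for
whatever lock and interrupt number a real caller has, and the accessor in
the body is just one of the write_gic_vo_*() helpers referred to elsewhere
in this thread:

	unsigned int cpu;

	/* Illustration only: program a GIC local register on each online CPU */
	for_each_online_cpu_gic(cpu, &gic_lock)
		write_gic_vo_rmask(BIT(intr));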
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic()
2024-07-11 8:26 ` [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic() Aleksandar Rikalo
@ 2024-07-12 4:38 ` kernel test robot
0 siblings, 0 replies; 14+ messages in thread
From: kernel test robot @ 2024-07-12 4:38 UTC (permalink / raw)
To: Aleksandar Rikalo, Thomas Bogendoerfer
Cc: oe-kbuild-all, Aleksandar Rikalo, Chao-ying Fu, Daniel Lezcano,
Geert Uytterhoeven, Greg Ungerer, Hauke Mehrtens, Ilya Lipnitskiy,
Jiaxun Yang, linux-kernel, linux-mips, Marc Zyngier, Paul Burton,
Peter Zijlstra, Serge Semin, Thomas Gleixner, Tiezhu Yang
Hi Aleksandar,
kernel test robot noticed the following build warnings:
[auto build test WARNING on tip/irq/core]
[also build test WARNING on tip/timers/core linus/master v6.10-rc7]
[cannot apply to daniel-lezcano/clockevents/next next-20240711]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Aleksandar-Rikalo/MIPS-CPS-Add-a-couple-of-multi-cluster-utility-functions/20240711-164850
base: tip/irq/core
patch link: https://lore.kernel.org/r/20240711082656.1889440-5-arikalo%40gmail.com
patch subject: [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic()
config: mips-allnoconfig (https://download.01.org/0day-ci/archive/20240712/202407121224.kUAil6pa-lkp@intel.com/config)
compiler: mips-linux-gcc (GCC) 14.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240712/202407121224.kUAil6pa-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202407121224.kUAil6pa-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/irqchip/irq-mips-gic.c:102: warning: expecting prototype for for_each_online_cpu_gic(). Prototype was for gic_unlock_cluster() instead
vim +102 drivers/irqchip/irq-mips-gic.c
90
91 /**
92 * for_each_online_cpu_gic() - Iterate over online CPUs, access local registers
93 * @cpu: An integer variable to hold the current CPU number
94 * @gic_lock: A pointer to raw spin lock used as a guard
95 *
96 * Iterate over online CPUs & configure the other/redirect register region to
97 * access each CPUs GIC local register block, which can be accessed from the
98 * loop body using read_gic_vo_*() or write_gic_vo_*() accessor functions or
99 * their derivatives.
100 */
101 static inline void gic_unlock_cluster(void)
> 102 {
103 if (mips_cps_multicluster_cpus())
104 mips_cm_unlock_other();
105 }
106
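The warning concerns kernel-doc placement rather than behaviour: the block
comment documents for_each_online_cpu_gic() but ends up sitting immediately
above gic_unlock_cluster(). One way to quiet it (a sketch only; the macro
body is elided because it is not part of the report) is to keep the
kernel-doc attached to the macro definition and give the helper an ordinary
comment:

/**
 * for_each_online_cpu_gic() - Iterate over online CPUs, access local registers
 * @cpu: An integer variable to hold the current CPU number
 * @gic_lock: A pointer to raw spin lock used as a guard
 */
#define for_each_online_cpu_gic(cpu, gic_lock) /* body as in the patch */

/* Plain comment: drop the other/redirect lock taken while iterating */
static inline void gic_unlock_cluster(void)
{
	if (mips_cps_multicluster_cpus())
		mips_cm_unlock_other();
}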
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2024-07-12 4:39 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-11 8:26 [PATCH v5 00/11] MIPS: Support I6500 multi-cluster configuration Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 01/11] MIPS: CPS: Add a couple of multi-cluster utility functions Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 02/11] MIPS: GIC: Generate redirect block accessors Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 03/11] irqchip/mips-gic: Introduce for_each_online_cpu_gic() Aleksandar Rikalo
2024-07-11 11:52 ` Thomas Gleixner
2024-07-11 8:26 ` [PATCH v5 04/11] irqchip/mips-gic: Support multi-cluster in for_each_online_cpu_gic() Aleksandar Rikalo
2024-07-12 4:38 ` kernel test robot
2024-07-11 8:26 ` [PATCH v5 05/11] irqchip/mips-gic: Setup defaults in each cluster Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 06/11] irqchip/mips-gic: Multi-cluster support Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 07/11] clocksource: mips-gic-timer: Always use cluster 0 counter as clocksource Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 08/11] clocksource: mips-gic-timer: Enable counter when CPUs start Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 09/11] MIPS: pm-cps: Use per-CPU variables as per-CPU, not per-core Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 10/11] MIPS: CPS: Introduce struct cluster_boot_config Aleksandar Rikalo
2024-07-11 8:26 ` [PATCH v5 11/11] MIPS: CPS: Boot CPUs in secondary clusters Aleksandar Rikalo
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).