* [PATCHv5 0/3] *** Detect interrupt storm in softlockup ***
From: Bitao Hu @ 2024-02-06 9:58 UTC (permalink / raw)
To: dianders, akpm, pmladek, kernelfans, liusong; +Cc: linux-kernel, yaoma
Hi, guys.
I have implemented a low-overhead method for detecting interrupt storms
in softlockups. Please review it; all comments are welcome.
Changes from v4 to v5:
- Rearrange variable placement to make the code look neater.
Changes from v3 to v4:
- Rename some variables and functions to make the code logic
more readable.
- Change the code location to avoid forward declarations.
- Just swap rather than use a double loop in tabulate_irq_count.
- Since nr_irqs has the potential to grow at runtime, add bounds-check
logic.
- Add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob.
Changes from v2 to v3:
- From Liu Song, use an enum instead of macros for cpu_stats, shorten
the name 'idx_to_stat' to 'stats', add 'get_16bit_precision' instead
of using right-shift operations directly, and use 'struct irq_counts'.
- From the kernel test robot, use '__this_cpu_read' and '__this_cpu_write'
instead of accessing a per-CPU array directly, in order to avoid this
warning:
'sparse: incorrect type in initializer (different modifiers)'
Changes from v1 to v2:
- From Douglas, optimize the memory used by cpustats. With the maximum
number of CPUs (8192), it now takes:
2 * 8192 * 4 + 1 * 8192 * 5 * 4 + 1 * 8192 = 237,568 bytes
(u16 old stats + u8 utilization history + u8 tail index, per CPU).
- From Liu Song, refactor the code format and add necessary comments.
- From Douglas, use interrupt counts instead of interrupt time to
determine the cause of softlockup.
- Remove the cmdline parameter added in PATCHv1.
Bitao Hu (3):
watchdog/softlockup: low-overhead detection of interrupt
watchdog/softlockup: report the most frequent interrupts
watchdog/softlockup: add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob
kernel/watchdog.c | 245 ++++++++++++++++++++++++++++++++++++++++++++++
lib/Kconfig.debug | 13 +++
2 files changed, 258 insertions(+)
--
2.37.1 (Apple Git-137.1)
* [PATCHv5 1/3] watchdog/softlockup: low-overhead detection of interrupt
From: Bitao Hu @ 2024-02-06 9:59 UTC (permalink / raw)
To: dianders, akpm, pmladek, kernelfans, liusong; +Cc: linux-kernel, yaoma
The following softlockup is caused by an interrupt storm, but it cannot
be identified from the call tree, because the call tree is just a
snapshot and doesn't fully capture the behavior of the CPU during the
soft lockup.
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
...
Call trace:
__do_softirq+0xa0/0x37c
__irq_exit_rcu+0x108/0x140
irq_exit+0x14/0x20
__handle_domain_irq+0x84/0xe0
gic_handle_irq+0x80/0x108
el0_irq_naked+0x50/0x58
Therefore, I think it is necessary to report CPU utilization during the
softlockup_thresh period (report once every sample_period, for a total
of 5 reports), like this:
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
CPU#28 Utilization every 4s during lockup:
#1: 0% system, 0% softirq, 100% hardirq, 0% idle
#2: 0% system, 0% softirq, 100% hardirq, 0% idle
#3: 0% system, 0% softirq, 100% hardirq, 0% idle
#4: 0% system, 0% softirq, 100% hardirq, 0% idle
#5: 0% system, 0% softirq, 100% hardirq, 0% idle
...
This would be helpful in determining whether an interrupt storm has
occurred or in identifying the cause of the softlockup. The criteria for
determination are as follows:
a. If the hardirq utilization is high, then an interrupt storm should be
considered and the root cause cannot be determined from the call tree.
b. If the softirq utilization is high, then we could analyze the call
tree but it may not reflect the root cause.
c. If the system utilization is high, then we could analyze the root
cause from the call tree.
Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
---
kernel/watchdog.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 89 insertions(+)
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 81a8862295d6..71d5b6dfa358 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -16,6 +16,8 @@
#include <linux/cpu.h>
#include <linux/nmi.h>
#include <linux/init.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
#include <linux/module.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
@@ -333,6 +335,90 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
static void __lockup_detector_cleanup(void);
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+#define NUM_STATS_GROUPS 5
+#define NUM_STATS_PER_GROUP 4
+enum stats_per_group {
+ STATS_SYSTEM,
+ STATS_SOFTIRQ,
+ STATS_HARDIRQ,
+ STATS_IDLE,
+};
+static const enum cpu_usage_stat tracked_stats[NUM_STATS_PER_GROUP] = {
+ CPUTIME_SYSTEM,
+ CPUTIME_SOFTIRQ,
+ CPUTIME_IRQ,
+ CPUTIME_IDLE,
+};
+static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_util[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_tail);
+
+/*
+ * We don't need nanosecond resolution. A granularity of 16ms is
+ * sufficient for our precision, allowing us to use u16 to store
+ * cpustats, which will roll over roughly every ~1000 seconds.
+ * 2^24 ~= 16 * 10^6
+ */
+static u16 get_16bit_precision(u64 data_ns)
+{
+ return data_ns >> 24LL; /* 2^24ns ~= 16.8ms */
+}
+
+static void update_cpustat(void)
+{
+ int i;
+ u8 util;
+ u16 old_stat, new_stat;
+ struct kernel_cpustat kcpustat;
+ u64 *cpustat = kcpustat.cpustat;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u16 sample_period_16 = get_16bit_precision(sample_period);
+
+ kcpustat_cpu_fetch(&kcpustat, smp_processor_id());
+ for (i = 0; i < NUM_STATS_PER_GROUP; i++) {
+ old_stat = __this_cpu_read(cpustat_old[i]);
+ new_stat = get_16bit_precision(cpustat[tracked_stats[i]]);
+ util = DIV_ROUND_UP(100 * (new_stat - old_stat), sample_period_16);
+ __this_cpu_write(cpustat_util[tail][i], util);
+ __this_cpu_write(cpustat_old[i], new_stat);
+ }
+ __this_cpu_write(cpustat_tail, (tail + 1) % NUM_STATS_GROUPS);
+}
+
+static void print_cpustat(void)
+{
+ int i, group;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u64 sample_period_second = sample_period;
+
+ do_div(sample_period_second, NSEC_PER_SEC);
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
+ smp_processor_id(), sample_period_second);
+ for (i = 0; i < NUM_STATS_GROUPS; i++) {
+ group = (tail + i) % NUM_STATS_GROUPS;
+ printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
+ "%3u%% hardirq,\t%3u%% idle\n", i+1,
+ __this_cpu_read(cpustat_util[group][STATS_SYSTEM]),
+ __this_cpu_read(cpustat_util[group][STATS_SOFTIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_HARDIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_IDLE]));
+ }
+}
+
+static void report_cpu_status(void)
+{
+ print_cpustat();
+}
+#else
+static inline void update_cpustat(void) { }
+static inline void report_cpu_status(void) { }
+#endif
+
/*
* Hard-lockup warnings should be triggered after just a few seconds. Soft-
* lockups can have false positives under extreme conditions. So we generally
@@ -504,6 +590,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
*/
period_ts = READ_ONCE(*this_cpu_ptr(&watchdog_report_ts));
+ update_cpustat();
+
/* Reset the interval when touched by known problematic code. */
if (period_ts == SOFTLOCKUP_DELAY_REPORT) {
if (unlikely(__this_cpu_read(softlockup_touch_sync))) {
@@ -539,6 +627,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
smp_processor_id(), duration,
current->comm, task_pid_nr(current));
+ report_cpu_status();
print_modules();
print_irqtrace_events(current);
if (regs)
--
2.37.1 (Apple Git-137.1)
* [PATCHv5 2/3] watchdog/softlockup: report the most frequent interrupts
From: Bitao Hu @ 2024-02-06 9:59 UTC (permalink / raw)
To: dianders, akpm, pmladek, kernelfans, liusong; +Cc: linux-kernel, yaoma
When the watchdog determines that the current soft lockup is due
to an interrupt storm based on CPU utilization, reporting the
most frequent interrupts could be good enough for further
troubleshooting.
Below is an example of an interrupt storm. The call tree does not
provide useful information, but we can identify which interrupt caused
the soft lockup by comparing the interrupt counts.
[ 2987.488075] watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [kworker/9:1:214]
[ 2987.488607] CPU#9 Utilization every 4s during lockup:
[ 2987.488941] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.489357] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.489771] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.490186] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.490601] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.491034] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
[ 2987.491493] #1: 330985 irq#7(IPI)
[ 2987.491743] #2: 5000 irq#10(arch_timer)
[ 2987.492039] #3: 9 irq#91(nvme0q2)
[ 2987.492318] #4: 3 irq#118(virtio1-output.12)
...
[ 2987.492728] Call trace:
[ 2987.492729] __do_softirq+0xa8/0x364
Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
---
kernel/watchdog.c | 156 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 156 insertions(+)
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 71d5b6dfa358..26dc1ad86276 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -18,6 +18,9 @@
#include <linux/init.h>
#include <linux/kernel_stat.h>
#include <linux/math64.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
+#include <linux/bitops.h>
#include <linux/module.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
@@ -410,13 +413,153 @@ static void print_cpustat(void)
}
}
+#define HARDIRQ_PERCENT_THRESH 50
+#define NUM_HARDIRQ_REPORT 5
+static DECLARE_BITMAP(softlockup_hardirq_cpus, CONFIG_NR_CPUS);
+static DEFINE_PER_CPU(u32 *, hardirq_counts);
+static DEFINE_PER_CPU(int, actual_nr_irqs);
+struct irq_counts {
+ int irq;
+ u32 counts;
+};
+
+/* Tabulate the most frequent interrupts. */
+static void tabulate_irq_count(struct irq_counts *irq_counts, int irq, u32 counts, int rank)
+{
+ int i;
+ struct irq_counts new_count = {irq, counts};
+
+ for (i = 0; i < rank; i++) {
+ if (counts > irq_counts[i].counts)
+ swap(new_count, irq_counts[i]);
+ }
+}
+
+/*
+ * If the hardirq time exceeds HARDIRQ_PERCENT_THRESH% of the sample_period,
+ * then the cause of softlockup might be interrupt storm. In this case, it
+ * would be useful to start interrupt counting.
+ */
+static bool need_counting_irqs(void)
+{
+ u8 util;
+ int tail = __this_cpu_read(cpustat_tail);
+
+ tail = (tail + NUM_STATS_GROUPS - 1) % NUM_STATS_GROUPS;
+ util = __this_cpu_read(cpustat_util[tail][STATS_HARDIRQ]);
+ return util > HARDIRQ_PERCENT_THRESH;
+}
+
+static void start_counting_irqs(void)
+{
+ int i;
+ struct irq_desc *desc;
+ u32 *counts = __this_cpu_read(hardirq_counts);
+ int cpu = smp_processor_id();
+
+ if (!test_bit(cpu, softlockup_hardirq_cpus)) {
+ /*
+ * nr_irqs has the potential to grow at runtime. We should read
+ * it and store locally to avoid array out-of-bounds access.
+ */
+ __this_cpu_write(actual_nr_irqs, nr_irqs);
+ counts = kmalloc_array(__this_cpu_read(actual_nr_irqs),
+ sizeof(u32),
+ GFP_ATOMIC);
+ if (!counts)
+ return;
+ for (i = 0; i < __this_cpu_read(actual_nr_irqs); i++) {
+ desc = irq_to_desc(i);
+ if (!desc)
+ continue;
+ counts[i] = desc->kstat_irqs ?
+ *this_cpu_ptr(desc->kstat_irqs) : 0;
+ }
+ __this_cpu_write(hardirq_counts, counts);
+ set_bit(cpu, softlockup_hardirq_cpus);
+ }
+}
+
+static void stop_counting_irqs(void)
+{
+ u32 *counts = __this_cpu_read(hardirq_counts);
+ int cpu = smp_processor_id();
+
+ if (test_bit(cpu, softlockup_hardirq_cpus)) {
+ kfree(counts);
+ counts = NULL;
+ __this_cpu_write(hardirq_counts, counts);
+ clear_bit(cpu, softlockup_hardirq_cpus);
+ }
+}
+
+static void print_irq_counts(void)
+{
+ int i;
+ struct irq_desc *desc;
+ u32 counts_diff;
+ u32 *counts = __this_cpu_read(hardirq_counts);
+ int cpu = smp_processor_id();
+ struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
+ {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
+ };
+
+ if (test_bit(cpu, softlockup_hardirq_cpus)) {
+ for_each_irq_desc(i, desc) {
+ if (!desc)
+ continue;
+ /*
+ * We need to bounds-check in case someone on a different CPU
+ * expanded nr_irqs.
+ */
+ if (i < __this_cpu_read(actual_nr_irqs))
+ counts_diff = desc->kstat_irqs ?
+ *this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
+ else
+ counts_diff = desc->kstat_irqs ?
+ *this_cpu_ptr(desc->kstat_irqs) : 0;
+ tabulate_irq_count(irq_counts_sorted, i, counts_diff,
+ NUM_HARDIRQ_REPORT);
+ }
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
+ smp_processor_id(), HARDIRQ_PERCENT_THRESH);
+ for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
+ if (irq_counts_sorted[i].irq == -1)
+ break;
+ desc = irq_to_desc(irq_counts_sorted[i].irq);
+ if (desc && desc->action)
+ printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
+ i+1, irq_counts_sorted[i].counts,
+ irq_counts_sorted[i].irq, desc->action->name);
+ else
+ printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
+ i+1, irq_counts_sorted[i].counts,
+ irq_counts_sorted[i].irq);
+ }
+ /*
+ * If the hardirq time is less than HARDIRQ_PERCENT_THRESH% in the last
+ * sample_period, then we suspect the interrupt storm might be subsiding.
+ */
+ if (!need_counting_irqs())
+ stop_counting_irqs();
+ }
+}
+
static void report_cpu_status(void)
{
print_cpustat();
+ print_irq_counts();
}
#else
static inline void update_cpustat(void) { }
static inline void report_cpu_status(void) { }
+static inline bool need_counting_irqs(void) { return false; }
+static inline void start_counting_irqs(void) { }
+static inline void stop_counting_irqs(void) { }
#endif
/*
@@ -520,6 +663,18 @@ static int is_softlockup(unsigned long touch_ts,
unsigned long now)
{
if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
+ /*
+ * If period_ts has not been updated during a sample_period, then
+ * in the subsequent few sample_periods, period_ts might also not
+ * be updated, which could indicate a potential softlockup. In
+ * this case, if we suspect the cause of the potential softlockup
+ * might be interrupt storm, then we need to count the interrupts
+ * to find which interrupt is storming.
+ */
+ if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5) &&
+ need_counting_irqs())
+ start_counting_irqs();
+
/* Warn about unreasonable delays. */
if (time_after(now, period_ts + get_softlockup_thresh()))
return now - touch_ts;
@@ -542,6 +697,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
static int softlockup_fn(void *data)
{
update_touch_ts();
+ stop_counting_irqs();
complete(this_cpu_ptr(&softlockup_completion));
return 0;
--
2.37.1 (Apple Git-137.1)
* [PATCHv5 3/3] watchdog/softlockup: add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob
From: Bitao Hu @ 2024-02-06 9:59 UTC (permalink / raw)
To: dianders, akpm, pmladek, kernelfans, liusong; +Cc: linux-kernel, yaoma
The interrupt storm detection mechanism we implemented requires a
considerable amount of global storage space when configured for
the maximum number of CPUs.
Therefore, add a SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob that
defaults to "yes" if the max number of CPUs is <= 128.
Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
---
kernel/watchdog.c | 2 +-
lib/Kconfig.debug | 13 +++++++++++++
2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 26dc1ad86276..1595e4a94774 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -338,7 +338,7 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
static void __lockup_detector_cleanup(void);
-#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
#define NUM_STATS_GROUPS 5
#define NUM_STATS_PER_GROUP 4
enum stats_per_group {
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 975a07f9f1cc..74002ba7c42d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1029,6 +1029,19 @@ config SOFTLOCKUP_DETECTOR
chance to run. The current stack trace is displayed upon
detection and the system will stay locked up.
+config SOFTLOCKUP_DETECTOR_INTR_STORM
+ bool "Detect Interrupt Storm in Soft Lockups"
+ depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
+ default y if NR_CPUS <= 128
+ help
+ Say Y here to enable the kernel to detect interrupt storm
+ during "soft lockups".
+
+ "soft lockups" can be caused by a variety of reasons. If one is caused by
+ an interrupt storm, then the storming interrupts will not be on the
+ callstack. To detect this case, it is necessary to report the CPU stats
+ and the interrupt counts during the "soft lockups".
+
config BOOTPARAM_SOFTLOCKUP_PANIC
bool "Panic (Reboot) On Soft Lockups"
depends on SOFTLOCKUP_DETECTOR
--
2.37.1 (Apple Git-137.1)
* Re: [PATCHv5 1/3] watchdog/softlockup: low-overhead detection of interrupt
From: Doug Anderson @ 2024-02-06 21:41 UTC (permalink / raw)
To: Bitao Hu; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel
Hi,
On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>
> The following softlockup is caused by an interrupt storm, but it cannot
> be identified from the call tree, because the call tree is just a
> snapshot and doesn't fully capture the behavior of the CPU during the
> soft lockup.
> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
> ...
> Call trace:
> __do_softirq+0xa0/0x37c
> __irq_exit_rcu+0x108/0x140
> irq_exit+0x14/0x20
> __handle_domain_irq+0x84/0xe0
> gic_handle_irq+0x80/0x108
> el0_irq_naked+0x50/0x58
>
> Therefore, I think it is necessary to report CPU utilization during the
> softlockup_thresh period (report once every sample_period, for a total
> of 5 reports), like this:
> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
> CPU#28 Utilization every 4s during lockup:
> #1: 0% system, 0% softirq, 100% hardirq, 0% idle
> #2: 0% system, 0% softirq, 100% hardirq, 0% idle
> #3: 0% system, 0% softirq, 100% hardirq, 0% idle
> #4: 0% system, 0% softirq, 100% hardirq, 0% idle
> #5: 0% system, 0% softirq, 100% hardirq, 0% idle
> ...
>
> This would be helpful in determining whether an interrupt storm has
> occurred or in identifying the cause of the softlockup. The criteria for
> determination are as follows:
> a. If the hardirq utilization is high, then an interrupt storm should be
> considered and the root cause cannot be determined from the call tree.
> b. If the softirq utilization is high, then we could analyze the call
> tree but it may not reflect the root cause.
> c. If the system utilization is high, then we could analyze the root
> cause from the call tree.
>
> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
> ---
> kernel/watchdog.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 89 insertions(+)
On v4 you got Liu Song's Reviewed-by and I don't think this is
massively different than v4. I would have expected you to carry the
tag forward. In any case, I guess Liu Song can give it again...
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index 81a8862295d6..71d5b6dfa358 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -16,6 +16,8 @@
> #include <linux/cpu.h>
> #include <linux/nmi.h>
> #include <linux/init.h>
> +#include <linux/kernel_stat.h>
> +#include <linux/math64.h>
> #include <linux/module.h>
> #include <linux/sysctl.h>
> #include <linux/tick.h>
> @@ -333,6 +335,90 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>
> static void __lockup_detector_cleanup(void);
>
> +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
> +#define NUM_STATS_GROUPS 5
> +#define NUM_STATS_PER_GROUP 4
> +enum stats_per_group {
> + STATS_SYSTEM,
> + STATS_SOFTIRQ,
> + STATS_HARDIRQ,
> + STATS_IDLE,
nit: I still would have left "NUM_STATS_PER_GROUP" here instead of as
a separate #define.
> +static void print_cpustat(void)
> +{
> + int i, group;
> + u8 tail = __this_cpu_read(cpustat_tail);
Sorry for not noticing before, but why are you using
"__this_cpu_read()" instead of "this_cpu_read()"? In other words, why
do you need the double-underscore version everywhere? I don't think
you do, do you?
> + u64 sample_period_second = sample_period;
> +
> + do_div(sample_period_second, NSEC_PER_SEC);
> + /*
> + * We do not want the "watchdog: " prefix on every line,
> + * hence we use "printk" instead of "pr_crit".
> + */
> + printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
> + smp_processor_id(), sample_period_second);
> + for (i = 0; i < NUM_STATS_GROUPS; i++) {
> + group = (tail + i) % NUM_STATS_GROUPS;
> + printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
> + "%3u%% hardirq,\t%3u%% idle\n", i+1,
nit: though I don't care too much in this case, I think kernel folks
slightly prefer "i + 1" instead of "i+1". Running
"./scripts/checkpatch.pl --strict" will give a warning about this, for
instance. Actually, "./scripts/checkpatch.pl --strict" has a few extra
style nits that you could consider fixing.
> +static void report_cpu_status(void)
> +{
> + print_cpustat();
> +}
I don't understand why you need the extra wrapper. You didn't have it
on v3 and I don't see any reason why you introduced it. Ah, I see, in
the next patch you add something to it. OK, I guess it's fine to
introduce it here.
-Doug
* Re: [PATCHv5 2/3] watchdog/softlockup: report the most frequent interrupts
From: Doug Anderson @ 2024-02-06 21:42 UTC (permalink / raw)
To: Bitao Hu; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel
Hi,
On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index 71d5b6dfa358..26dc1ad86276 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -18,6 +18,9 @@
> #include <linux/init.h>
> #include <linux/kernel_stat.h>
> #include <linux/math64.h>
> +#include <linux/irq.h>
> +#include <linux/irqdesc.h>
> +#include <linux/bitops.h>
These are still not sorted alphabetically. "irq.h" and "irqdesc.h"
should go between "init.h" and "kernel_stat.h". "bitops.h" is trickier
because the existing headers are not quite sorted. Probably the best
would be to fully sort them. They should end up like this:
#include <linux/bitops.h>
#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/kernel_stat.h>
#include <linux/kvm_para.h>
#include <linux/math64.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/nmi.h>
#include <linux/stop_machine.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
#include <linux/sched/clock.h>
#include <linux/sched/debug.h>
#include <linux/sched/isolation.h>
#include <asm/irq_regs.h>
> +static void start_counting_irqs(void)
> +{
> + int i;
> + struct irq_desc *desc;
> + u32 *counts = __this_cpu_read(hardirq_counts);
> + int cpu = smp_processor_id();
> +
> + if (!test_bit(cpu, softlockup_hardirq_cpus)) {
I don't think you need "softlockup_hardirq_cpus", do you? Just read
"actual_nr_irqs" and see if it's non-zero? ...or read "hardirq_counts"
and see if it's non-NULL?
> + /*
> + * nr_irqs has the potential to grow at runtime. We should read
> + * it and store locally to avoid array out-of-bounds access.
> + */
> + __this_cpu_write(actual_nr_irqs, nr_irqs);
nit: IMO store nr_irqs in a local variable to avoid all of the
"__this_cpu_read" calls everywhere. Then just write it once from your
local variable.
> + counts = kmalloc_array(__this_cpu_read(actual_nr_irqs),
> + sizeof(u32),
> + GFP_ATOMIC);
should use "kcalloc()" so the array is zeroed. That way if the set of
non-NULL "desc"s changes between calls you don't end up reading
uninitialized memory.
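For illustration, a rough, untested sketch combining the local-variable and
kcalloc() suggestions above might look like this ("local_nr_irqs" is just a
placeholder name):

	int local_nr_irqs = nr_irqs;

	/* Zero-filled, so IRQs we never snapshot below read back as 0. */
	counts = kcalloc(local_nr_irqs, sizeof(u32), GFP_ATOMIC);
	if (!counts)
		return;
	__this_cpu_write(actual_nr_irqs, local_nr_irqs);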
> +static void stop_counting_irqs(void)
> +{
> + u32 *counts = __this_cpu_read(hardirq_counts);
> + int cpu = smp_processor_id();
> +
> + if (test_bit(cpu, softlockup_hardirq_cpus)) {
> + kfree(counts);
> + counts = NULL;
> + __this_cpu_write(hardirq_counts, counts);
nit: don't really need to set the local "counts" to NULL. Just:
__this_cpu_write(hardirq_counts, NULL);
...and actually if you take my advice above and get rid of
"softlockup_hardirq_cpus" then this function just becomes:
kfree(__this_cpu_read(hardirq_counts));
__this_cpu_write(hardirq_counts, NULL);
Since kfree() handles when you pass it NULL...
> +static void print_irq_counts(void)
> +{
> + int i;
> + struct irq_desc *desc;
> + u32 counts_diff;
> + u32 *counts = __this_cpu_read(hardirq_counts);
> + int cpu = smp_processor_id();
> + struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
> + {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
> + };
> +
> + if (test_bit(cpu, softlockup_hardirq_cpus)) {
> + for_each_irq_desc(i, desc) {
> + if (!desc)
> + continue;
The "if" test above isn't needed. The "for_each_irq_desc()" macro
already checks for NULL.
> + /*
> + * We need to bounds-check in case someone on a different CPU
> + * expanded nr_irqs.
> + */
> + if (i < __this_cpu_read(actual_nr_irqs))
> + counts_diff = desc->kstat_irqs ?
> + *this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
> + else
> + counts_diff = desc->kstat_irqs ?
> + *this_cpu_ptr(desc->kstat_irqs) : 0;
Why do you need to test "kstat_irqs" for 0? Also, ideally don't
duplicate the math. In other words, I'd expect this (untested):
if (i < __this_cpu_read(actual_nr_irqs))
count = counts[i];
else
count = 0;
counts_diff = *this_cpu_ptr(desc->kstat_irqs) - count;
I guess I'd also put "__this_cpu_read(actual_nr_irqs)" in a local
variable like you do with counts...
* Re: [PATCHv5 3/3] watchdog/softlockup: add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob
From: Doug Anderson @ 2024-02-06 21:42 UTC (permalink / raw)
To: Bitao Hu; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel
Hi,
On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>
> The interrupt storm detection mechanism we implemented requires a
> considerable amount of global storage space when configured for
> the maximum number of CPUs.
> Therefore, add a SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob that
> defaults to "yes" if the max number of CPUs is <= 128.
>
> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
> ---
> kernel/watchdog.c | 2 +-
> lib/Kconfig.debug | 13 +++++++++++++
> 2 files changed, 14 insertions(+), 1 deletion(-)
IMO this should be squashed into patch #1, though I won't insist.
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index 26dc1ad86276..1595e4a94774 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -338,7 +338,7 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>
> static void __lockup_detector_cleanup(void);
>
> -#ifdef CONFIG_IRQ_TIME_ACCOUNTING
> +#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
> #define NUM_STATS_GROUPS 5
> #define NUM_STATS_PER_GROUP 4
> enum stats_per_group {
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 975a07f9f1cc..74002ba7c42d 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1029,6 +1029,19 @@ config SOFTLOCKUP_DETECTOR
> chance to run. The current stack trace is displayed upon
> detection and the system will stay locked up.
>
> +config SOFTLOCKUP_DETECTOR_INTR_STORM
> + bool "Detect Interrupt Storm in Soft Lockups"
> + depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
> + default y if NR_CPUS <= 128
> + help
> + Say Y here to enable the kernel to detect interrupt storm
> + during "soft lockups".
> +
> + "soft lockups" can be caused by a variety of reasons. If one is caused by
> + an interrupt storm, then the storming interrupts will not be on the
> + callstack. To detect this case, it is necessary to report the CPU stats
> + and the interrupt counts during the "soft lockups".
It's probably not terribly important, but I notice that the other help
text in this file is generally wrapped to 80 columns. Even though the
kernel has relaxed the 80 column rule a bit, it still feels like this
could easily be wrapped to 80 columns without sacrificing any
readability.
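For example, rewrapped to 80 columns the second paragraph could read:

	  "soft lockups" can be caused by a variety of reasons. If one is
	  caused by an interrupt storm, then the storming interrupts will not
	  be on the callstack. To detect this case, it is necessary to report
	  the CPU stats and the interrupt counts during the "soft lockups".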
In any case:
Reviewed-by: Douglas Anderson <dianders@chromium.org>
* Re: [PATCHv5 1/3] watchdog/softlockup: low-overhead detection of interrupt
From: Bitao Hu @ 2024-02-07 6:18 UTC (permalink / raw)
To: Doug Anderson; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel, yaoma
Hi,
On 2024/2/7 05:41, Doug Anderson wrote:
> Hi,
>
> On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>>
>> The following softlockup is caused by an interrupt storm, but it cannot
>> be identified from the call tree, because the call tree is just a
>> snapshot and doesn't fully capture the behavior of the CPU during the
>> soft lockup.
>> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
>> ...
>> Call trace:
>> __do_softirq+0xa0/0x37c
>> __irq_exit_rcu+0x108/0x140
>> irq_exit+0x14/0x20
>> __handle_domain_irq+0x84/0xe0
>> gic_handle_irq+0x80/0x108
>> el0_irq_naked+0x50/0x58
>>
>> Therefore, I think it is necessary to report CPU utilization during the
>> softlockup_thresh period (report once every sample_period, for a total
>> of 5 reports), like this:
>> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
>> CPU#28 Utilization every 4s during lockup:
>> #1: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #2: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #3: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #4: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #5: 0% system, 0% softirq, 100% hardirq, 0% idle
>> ...
>>
>> This would be helpful in determining whether an interrupt storm has
>> occurred or in identifying the cause of the softlockup. The criteria for
>> determination are as follows:
>> a. If the hardirq utilization is high, then an interrupt storm should be
>> considered and the root cause cannot be determined from the call tree.
>> b. If the softirq utilization is high, then we could analyze the call
>> tree but it may not reflect the root cause.
>> c. If the system utilization is high, then we could analyze the root
>> cause from the call tree.
>>
>> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
>> ---
>> kernel/watchdog.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 89 insertions(+)
>
> On v4 you got Liu Song's Reviewed-by and I don't think this is
> massively different than v4. I would have expected you to carry the
> tag forward. In any case, I guess Liu Song can give it again...
>
>> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
>> index 81a8862295d6..71d5b6dfa358 100644
>> --- a/kernel/watchdog.c
>> +++ b/kernel/watchdog.c
>> @@ -16,6 +16,8 @@
>> #include <linux/cpu.h>
>> #include <linux/nmi.h>
>> #include <linux/init.h>
>> +#include <linux/kernel_stat.h>
>> +#include <linux/math64.h>
>> #include <linux/module.h>
>> #include <linux/sysctl.h>
>> #include <linux/tick.h>
>> @@ -333,6 +335,90 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>>
>> static void __lockup_detector_cleanup(void);
>>
>> +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
>> +#define NUM_STATS_GROUPS 5
>> +#define NUM_STATS_PER_GROUP 4
>> +enum stats_per_group {
>> + STATS_SYSTEM,
>> + STATS_SOFTIRQ,
>> + STATS_HARDIRQ,
>> + STATS_IDLE,
>
> nit: I still would have left "NUM_STATS_PER_GROUP" here instead of as
> a separate #define.
OK.
>
>
>> +static void print_cpustat(void)
>> +{
>> + int i, group;
>> + u8 tail = __this_cpu_read(cpustat_tail);
>
> Sorry for not noticing before, but why are you using
> "__this_cpu_read()" instead of "this_cpu_read()"? In other words, why
> do you need the double-underscore version everywhere? I don't think
> you do, do you?
I also struggled with which version of the operation to use. The one
without double-underscores provides preemption/interrupt protection,
but in watchdog.c, the version with double-underscores is used. I
analyzed that it is also safe to use the version without
preemption/interrupt protection in my code, so to maintain consistency
with watchdog.c, I use the version with double-underscores.
Is my approach reasonable? If not, I will switch to using the
non-underscored version.
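For reference, my understanding of the difference is roughly the following
(illustrative snippet only, not code from watchdog.c):

	/* Safe to call even with preemption enabled. */
	tail = this_cpu_read(cpustat_tail);

	/*
	 * Assumes the caller already runs with preemption disabled, which
	 * is the case here since we are in the watchdog hrtimer callback.
	 */
	tail = __this_cpu_read(cpustat_tail);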
>
>
>> + u64 sample_period_second = sample_period;
>> +
>> + do_div(sample_period_second, NSEC_PER_SEC);
>> + /*
>> + * We do not want the "watchdog: " prefix on every line,
>> + * hence we use "printk" instead of "pr_crit".
>> + */
>> + printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
>> + smp_processor_id(), sample_period_second);
>> + for (i = 0; i < NUM_STATS_GROUPS; i++) {
>> + group = (tail + i) % NUM_STATS_GROUPS;
>> + printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
>> + "%3u%% hardirq,\t%3u%% idle\n", i+1,
>
> nit: though I don't care too much in this case, I think kernel folks
> slightly prefer "i + 1" instead of "i+1". Running
> "./scripts/checkpatch.pl --strict" will give a warning about this, for
> instance. Actually, "./scripts/checkpatch.pl --strict" has a few extra
> style nits that you could consider fixing.
Thanks for your reminder. I will use "./scripts/checkpatch.pl --strict"
to check and correct these patches.
>
>
>> +static void report_cpu_status(void)
>> +{
>> + print_cpustat();
>> +}
>
> I don't understand why you need the extra wrapper. You didn't have it
> on v3 and I don't see any reason why you introduced it. Ah, I see, in
> the next patch you add something to it. OK, I guess it's fine to
> introduce it here.
Yes, I added this wrapper to prepare for the next patch, to avoid a
forward declaration of "print_irq_counts".
>
> -Doug
* Re: [PATCHv5 2/3] watchdog/softlockup: report the most frequent interrupts
2024-02-06 21:42 ` Doug Anderson
@ 2024-02-07 6:18 ` Bitao Hu
2024-02-07 17:14 ` Doug Anderson
0 siblings, 1 reply; 12+ messages in thread
From: Bitao Hu @ 2024-02-07 6:18 UTC (permalink / raw)
To: Doug Anderson; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel, yaoma
Hi,
On 2024/2/7 05:42, Doug Anderson wrote:
> Hi,
>
> On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>>
>> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
>> index 71d5b6dfa358..26dc1ad86276 100644
>> --- a/kernel/watchdog.c
>> +++ b/kernel/watchdog.c
>> @@ -18,6 +18,9 @@
>> #include <linux/init.h>
>> #include <linux/kernel_stat.h>
>> #include <linux/math64.h>
>> +#include <linux/irq.h>
>> +#include <linux/irqdesc.h>
>> +#include <linux/bitops.h>
>
> These are still not sorted alphabetically. "irq.h" and "irqdesc.h"
> should go between "init.h" and "kernel_stat.h". "bitops.h" is trickier
> because the existing headers are not quite sorted. Probably the best
> would be to fully sort them. They should end up like this:
>
> #include <linux/bitops.h>
> #include <linux/cpu.h>
> #include <linux/init.h>
> #include <linux/irq.h>
> #include <linux/irqdesc.h>
> #include <linux/kernel_stat.h>
> #include <linux/kvm_para.h>
> #include <linux/math64.h>
> #include <linux/mm.h>
> #include <linux/module.h>
> #include <linux/nmi.h>
> #include <linux/stop_machine.h>
> #include <linux/sysctl.h>
> #include <linux/tick.h>
>
> #include <linux/sched/clock.h>
> #include <linux/sched/debug.h>
> #include <linux/sched/isolation.h>
>
> #include <asm/irq_regs.h>
>
Sorry, I misunderstood your point, thinking that they should only be
added between "init.h" and "module.h". I will arrange them in
alphabetical order as you suggested.
>
>> +static void start_counting_irqs(void)
>> +{
>> + int i;
>> + struct irq_desc *desc;
>> + u32 *counts = __this_cpu_read(hardirq_counts);
>> + int cpu = smp_processor_id();
>> +
>> + if (!test_bit(cpu, softlockup_hardirq_cpus)) {
>
> I don't think you need "softlockup_hardirq_cpus", do you? Just read
> "actual_nr_irqs" and see if it's non-zero? ...or read "hardirq_counts"
> and see if it's non-NULL?
Sure, the existing variables are sufficient for making a determination.
And maybe I should wrap it to make the decision logic here clearer,
like this (untested)?
bool is_counting_started(void)
{
return !!__this_cpu_read(hardirq_counts);
}
if (!is_counting_started()) {
>
>
>> + /*
>> + * nr_irqs has the potential to grow at runtime. We should read
>> + * it and store locally to avoid array out-of-bounds access.
>> + */
>> + __this_cpu_write(actual_nr_irqs, nr_irqs);
>
> nit: IMO store nr_irqs in a local variable to avoid all of the
> "__this_cpu_read" calls everywhere. Then just write it once from your
> local variable.
OK.
>
>
>> + counts = kmalloc_array(__this_cpu_read(actual_nr_irqs),
>> + sizeof(u32),
>> + GFP_ATOMIC);
>
> should use "kcalloc()" so the array is zeroed. That way if the set of
> non-NULL "desc"s changes between calls you don't end up reading
> uninitialized memory.
OK, I will use "kcalloc()" here.
>
>
>> +static void stop_counting_irqs(void)
>> +{
>> + u32 *counts = __this_cpu_read(hardirq_counts);
>> + int cpu = smp_processor_id();
>> +
>> + if (test_bit(cpu, softlockup_hardirq_cpus)) {
>> + kfree(counts);
>> + counts = NULL;
>> + __this_cpu_write(hardirq_counts, counts);
>
> nit: don't really need to set the local "counts" to NULL. Just:
>
> __this_cpu_write(hardirq_counts, NULL);
>
> ...and actually if you take my advice above and get rid of
> "softlockup_hardirq_cpus" then this function just becomes:
>
> kfree(__this_cpu_read(hardirq_counts));
> __this_cpu_write(hardirq_counts, NULL);
>
> Since kfree() handles when you pass it NULL...
OK.
>
>
>> +static void print_irq_counts(void)
>> +{
>> + int i;
>> + struct irq_desc *desc;
>> + u32 counts_diff;
>> + u32 *counts = __this_cpu_read(hardirq_counts);
>> + int cpu = smp_processor_id();
>> + struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
>> + {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
>> + };
>> +
>> + if (test_bit(cpu, softlockup_hardirq_cpus)) {
>> + for_each_irq_desc(i, desc) {
>> + if (!desc)
>> + continue;
>
> The "if" test above isn't needed. The "for_each_irq_desc()" macro
> already checks for NULL.
Thanks for your reminder.
>
>
>
>> + /*
>> + * We need to bounds-check in case someone on a different CPU
>> + * expanded nr_irqs.
>> + */
>> + if (i < __this_cpu_read(actual_nr_irqs))
>> + counts_diff = desc->kstat_irqs ?
>> + *this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
>> + else
>> + counts_diff = desc->kstat_irqs ?
>> + *this_cpu_ptr(desc->kstat_irqs) : 0;
>
> Why do you need to test "kstat_irqs" for 0?
Although "alloc_desc" wil allocate both "desc" and "kstat_irqs" at the
same time, I refer to the usage of "kstat_irqs" in "show_interrupts"
from kernel/irq/proc.c, where it does perform a check.
kernel/irq/proc.c: show_interrupts
for_each_online_cpu(j)
seq_printf(p, "%10u ", desc->kstat_irqs ?
*per_cpu_ptr(desc->kstat_irqs, j) : 0);
I'm not sure why this is necessary, so I copied it as it is. Can we skip
the check in "print_irq_counts"?
> duplicate the math. In other words, I'd expect this (untested):
>
> if (i < __this_cpu_read(actual_nr_irqs))
> count = counts[i];
> else
> count = 0;
> counts_diff = *this_cpu_ptr(desc->kstat_irqs) - count;
Agree.
>
> I guess I'd also put "__this_cpu_read(actual_nr_irqs)" in a local
> variable like you do with counts...
* Re: [PATCHv5 3/3] watchdog/softlockup: add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob
2024-02-06 21:42 ` Doug Anderson
@ 2024-02-07 6:19 ` Bitao Hu
0 siblings, 0 replies; 12+ messages in thread
From: Bitao Hu @ 2024-02-07 6:19 UTC (permalink / raw)
To: Doug Anderson; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel, yaoma
On 2024/2/7 05:42, Doug Anderson wrote:
> Hi,
>
> On Tue, Feb 6, 2024 at 1:59 AM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>>
>> The interrupt storm detection mechanism we implemented requires a
>> considerable amount of global storage space when configured for
>> the maximum number of CPUs.
>> Therefore, add a SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob that
>> defaults to "yes" if the max number of CPUs is <= 128.
>>
>> Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
>> ---
>> kernel/watchdog.c | 2 +-
>> lib/Kconfig.debug | 13 +++++++++++++
>> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> IMO this should be squashed into patch #1, though I won't insist.
Agree.
>
>
>> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
>> index 26dc1ad86276..1595e4a94774 100644
>> --- a/kernel/watchdog.c
>> +++ b/kernel/watchdog.c
>> @@ -338,7 +338,7 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>>
>> static void __lockup_detector_cleanup(void);
>>
>> -#ifdef CONFIG_IRQ_TIME_ACCOUNTING
>> +#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
>> #define NUM_STATS_GROUPS 5
>> #define NUM_STATS_PER_GROUP 4
>> enum stats_per_group {
>> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
>> index 975a07f9f1cc..74002ba7c42d 100644
>> --- a/lib/Kconfig.debug
>> +++ b/lib/Kconfig.debug
>> @@ -1029,6 +1029,19 @@ config SOFTLOCKUP_DETECTOR
>> chance to run. The current stack trace is displayed upon
>> detection and the system will stay locked up.
>>
>> +config SOFTLOCKUP_DETECTOR_INTR_STORM
>> + bool "Detect Interrupt Storm in Soft Lockups"
>> + depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
>> + default y if NR_CPUS <= 128
>> + help
>> + Say Y here to enable the kernel to detect interrupt storm
>> + during "soft lockups".
>> +
>> + "soft lockups" can be caused by a variety of reasons. If one is caused by
>> + an interrupt storm, then the storming interrupts will not be on the
>> + callstack. To detect this case, it is necessary to report the CPU stats
>> + and the interrupt counts during the "soft lockups".
>
> It's probably not terribly important, but I notice that the other help
> text in this file is generally wrapped to 80 columns. Even though the
> kernel has relaxed the 80 column rule a bit, it still feels like this
> could easily be wrapped to 80 columns without sacrificing any
> readability.
OK.
>
> In any case:
>
> Reviewed-by: Douglas Anderson <dianders@chromium.org>
* Re: [PATCHv5 1/3] watchdog/softlockup: low-overhead detection of interrupt
2024-02-07 6:18 ` Bitao Hu
@ 2024-02-07 17:13 ` Doug Anderson
0 siblings, 0 replies; 12+ messages in thread
From: Doug Anderson @ 2024-02-07 17:13 UTC (permalink / raw)
To: Bitao Hu; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel
Hi,
On Tue, Feb 6, 2024 at 10:18 PM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>
> >> +static void print_cpustat(void)
> >> +{
> >> + int i, group;
> >> + u8 tail = __this_cpu_read(cpustat_tail);
> >
> > Sorry for not noticing before, but why are you using
> > "__this_cpu_read()" instead of "this_cpu_read()"? In other words, why
> > do you need the double-underscore version everywhere? I don't think
> > you do, do you?
> I also struggled with which version of the operation to use. The one
> without double-underscores provides preemption/interrupt protection,
> but in watchdog.c, the version with double-underscores is used. I
> analyzed that it is also safe to use the version without
> preemption/interrupt protection in my code, so to maintain consistency
> with watchdog.c, I use the version with double-underscores.
>
> Is my approach reasonable? If not, I will switch to using the
> non-underscored version.
Ah, OK. I hadn't followed the macros all the way through to the
arch-specific defines and I didn't see the preemption disable. OK,
what you have seems fine to me, especially since the double-underscore
version still has double-checks that preemption is disabled. Thanks
for explaining!
-Doug
* Re: [PATCHv5 2/3] watchdog/softlockup: report the most frequent interrupts
2024-02-07 6:18 ` Bitao Hu
@ 2024-02-07 17:14 ` Doug Anderson
0 siblings, 0 replies; 12+ messages in thread
From: Doug Anderson @ 2024-02-07 17:14 UTC (permalink / raw)
To: Bitao Hu; +Cc: akpm, pmladek, kernelfans, liusong, linux-kernel
Hi,
On Tue, Feb 6, 2024 at 10:19 PM Bitao Hu <yaoma@linux.alibaba.com> wrote:
>
> Hi,
>
> >> +static void start_counting_irqs(void)
> >> +{
> >> + int i;
> >> + struct irq_desc *desc;
> >> + u32 *counts = __this_cpu_read(hardirq_counts);
> >> + int cpu = smp_processor_id();
> >> +
> >> + if (!test_bit(cpu, softlockup_hardirq_cpus)) {
> >
> > I don't think you need "softlockup_hardirq_cpus", do you? Just read
> > "actual_nr_irqs" and see if it's non-zero? ...or read "hardirq_counts"
> > and see if it's non-NULL?
> Sure, the existing variables are sufficient for making a determination.
> And maybe I should wrap it to make the decision logic here clearer,
> like this (untested)?
>
> bool is_counting_started(void)
> {
> return !!__this_cpu_read(hardirq_counts);
> }
>
> if (!is_counting_started()) {
If you insist I guess I wouldn't object, but I don't feel it's
necessary. The whole point is just to know if you've already allocated
memory, right? ...and just checking to see if the pointer is non-NULL
or the array-size is non-zero feels pretty clear to me.
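In other words, something as simple as this (untested) would do:

	u32 *counts = __this_cpu_read(hardirq_counts);

	if (!counts) {
		/* First time over the threshold: allocate and snapshot. */
		...
	}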
> >> + /*
> >> + * We need to bounds-check in case someone on a different CPU
> >> + * expanded nr_irqs.
> >> + */
> >> + if (i < __this_cpu_read(actual_nr_irqs))
> >> + counts_diff = desc->kstat_irqs ?
> >> + *this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
> >> + else
> >> + counts_diff = desc->kstat_irqs ?
> >> + *this_cpu_ptr(desc->kstat_irqs) : 0;
> >
> > Why do you need to test "kstat_irqs" for 0?
> Although "alloc_desc" wil allocate both "desc" and "kstat_irqs" at the
> same time, I refer to the usage of "kstat_irqs" in "show_interrupts"
> from kernel/irq/proc.c, where it does perform a check.
Ah, I see. I hadn't noticed that you were testing the pointer before
dereferencing it. OK, seems fine to keep this check. I guess that
would make it this (untested):
if (desc->kstat_irqs) {
counts_diff = *this_cpu_ptr(desc->kstat_irqs);
if (i < __this_cpu_read(actual_nr_irqs))
counts_diff -= counts[i];
} else {
counts_diff = 0;
}