* [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
@ 2024-11-26 10:04 Li Wang
2024-11-26 10:28 ` Cyril Hrubis
0 siblings, 1 reply; 11+ messages in thread
From: Li Wang @ 2024-11-26 10:04 UTC (permalink / raw)
To: ltp; +Cc: Philip Auld
The commit ec14f4572 ("sched: starvation: Autocallibrate the timeout")
introduced a runtime calibration mechanism to dynamically adjust test
timeouts based on CPU speed.
While this works well for slower systems like microcontrollers or ARM
boards, it struggles to determine appropriate runtimes on modern CPUs,
especially on debug kernels, which add significant overhead.
This patch introduces a baseline runtime (max_runtime = 600 seconds) to
ensure the test does not time out prematurely, even on modern CPUs or
debug kernels. The calibrated runtime is compared against this baseline,
and the greater value is used as the test timeout.
This change reduces the likelihood of timeouts while maintaining flexibility
for slower systems.
Error log on debug-kernel:
...
starvation.c:98: TINFO: Setting affinity to CPU 0
starvation.c:52: TINFO: CPU did 120000000 loops in 52717us
tst_test.c:1727: TINFO: Updating max runtime to 0h 00m 52s
tst_test.c:1719: TINFO: Timeout per run is 0h 06m 16s
starvation.c:148: TFAIL: Scheduller starvation reproduced.
...
From Philip Auld:
"The test sends a large number of signals as fast as possible. On the
non-debug kernel both signal generation and signal delivery take 1usec
in my traces (maybe actually less in real time but the timestamp has
usec granularity).
But on the debug kernel these signal events take ~21usecs. That is a
significant increase, and given the large number of them, it leads the
starvation test to falsely report starvation when in fact it is just
taking a lot longer.
In both debug and non-debug the kernel is doing the same thing. Both
tasks are running as expected. It's just the timing is not working for
the debug case.
Probably should waive this as expected failure on the debug variants."
Signed-off-by: Li Wang <liwang@redhat.com>
Cc: Philip Auld <pauld@redhat.com>
Cc: Cyril Hrubis <chrubis@suse.cz>
---
testcases/kernel/sched/cfs-scheduler/starvation.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/testcases/kernel/sched/cfs-scheduler/starvation.c b/testcases/kernel/sched/cfs-scheduler/starvation.c
index e707e0865..d57052d1d 100644
--- a/testcases/kernel/sched/cfs-scheduler/starvation.c
+++ b/testcases/kernel/sched/cfs-scheduler/starvation.c
@@ -108,6 +108,7 @@ static void setup(void)
 	else
 		timeout = callibrate() / 1000;
 
+	timeout = MAX(timeout, test.max_runtime);
 	tst_set_max_runtime(timeout);
 }
@@ -161,5 +162,6 @@ static struct tst_test test = {
 		{"t:", &str_timeout, "Max timeout (default 240s)"},
 		{}
 	},
+	.max_runtime = 600,
 	.needs_checkpoints = 1,
 };
--
2.47.0
--
Mailing list info: https://lists.linux.it/listinfo/ltp
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Cyril Hrubis @ 2024-11-26 10:28 UTC (permalink / raw)
To: Li Wang; +Cc: Philip Auld, ltp

Hi!
> The commit ec14f4572 ("sched: starvation: Autocallibrate the timeout")
> introduced a runtime calibration mechanism to dynamically adjust test
> timeouts based on CPU speed.
>
> While this works well for slower systems like microcontrollers or ARM
> boards, it struggles to determine appropriate runtimes on modern CPUs,
> especially on debug kernels, which add significant overhead.

Wouldn't it be better to either skip the test on kernels with debugging
config options enabled, or multiply the timeout we got from the
calibration when we detect a debugging kernel?

The problem is that any number we put there will not be correct in a few
years as CPU and RAM speeds increase, and the test will be effectively
doing nothing because the default we put there will cover kernels that
are overly slow on future hardware.

--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Li Wang @ 2024-11-26 10:59 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: Philip Auld, ltp

On Tue, Nov 26, 2024 at 6:28 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> Wouldn't it be better to either skip the test on kernels with debugging
> config options enabled, or multiply the timeout we got from the
> calibration when we detect a debugging kernel?

Well, we have not achieved a reliable way to detect debug kernels in LTP.
While looking at our RHEL9 kernel config file, I noticed that the general
kernel also enables things like "CONFIG_DEBUG_KERNEL=y":

  # uname -r
  5.14.0-533.el9.x86_64

  # grep CONFIG_DEBUG_KERNEL /boot/config-5.14.0-533.el9.x86_64
  CONFIG_DEBUG_KERNEL=y

> The problem is that any number we put there will not be correct in a few
> years as CPU and RAM speeds increase, and the test will be effectively
> doing nothing because the default we put there will cover kernels that
> are overly slow on future hardware.

Sounds reasonable. A hardcoded baseline time is not a wise method; it may
still fail to satisfy some slower boards or new processors.

--
Regards,
Li Wang
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Cyril Hrubis @ 2024-11-26 11:23 UTC (permalink / raw)
To: Li Wang; +Cc: Philip Auld, ltp

Hi!
> Well, we have not achieved a reliable way to detect debug kernels in LTP.
> While looking at our RHEL9 kernel config file, I noticed that the general
> kernel also enables things like "CONFIG_DEBUG_KERNEL=y".

The slowdown is likely related to a few specific debug options, such as
debugging for mutexes, spinlocks, lists, etc. I guess that the most
interesting information would be the difference in debug options between
the general kernel and the debug kernel. Hopefully we can put together
the set of debug options that cause the test to run too slowly.

--
Cyril Hrubis
chrubis@suse.cz
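The comparison Cyril suggests — finding options enabled only in the debug kernel's config — can be sketched as a small standalone C program. This is a hypothetical illustration; the file names are examples, and inside LTP the existing tst_kconfig API would be the proper way to query options.

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if `line` (e.g. "CONFIG_KASAN=y") appears in config file `path`. */
static int config_has(const char *path, const char *line)
{
	char buf[512];
	int found = 0;
	FILE *f = fopen(path, "r");

	if (!f)
		return 0;

	while (fgets(buf, sizeof(buf), f)) {
		if (strncmp(buf, line, strlen(line)) == 0) {
			found = 1;
			break;
		}
	}
	fclose(f);
	return found;
}

/*
 * Prints every "=y" option enabled in the debug config but not in the
 * general config; returns the count (or -1 if the debug file is missing).
 */
static int diff_configs(const char *general, const char *debug)
{
	char buf[512];
	int extra = 0;
	FILE *f = fopen(debug, "r");

	if (!f)
		return -1;

	while (fgets(buf, sizeof(buf), f)) {
		if (strncmp(buf, "CONFIG_", 7) != 0 || !strstr(buf, "=y"))
			continue;
		buf[strcspn(buf, "\n")] = '\0';
		if (!config_has(general, buf)) {
			printf("%s\n", buf);
			extra++;
		}
	}
	fclose(f);
	return extra;
}
```

Running `diff_configs("/boot/config-<general>", "/boot/config-<debug>")` (paths hypothetical) yields roughly the option list Li posts in the next message.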
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Li Wang @ 2024-11-27 4:15 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: Philip Auld, ltp

On Tue, Nov 26, 2024 at 7:23 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> The slowdown is likely related to a few specific debug options, such as
> debugging for mutexes, spinlocks, lists, etc. I guess that the most
> interesting information would be the difference in debug options between
> the general kernel and the debug kernel. Hopefully we can put together
> the set of debug options that cause the test to run too slowly.

I have carefully compared the differences between the general kernel
config file and the debug kernel config file. Below are some options that
are only enabled in the debug kernel and may cause kernel performance
degradation.

My rough thought is to create a set of those options: if the SUT kernel
matches some of them, we reset the timeout using a multiplier applied to
the value obtained from calibration. E.g. if N of the configs match, we
use (timeout * N) as the max_runtime.

Or, as a next step, could we extend this method to the whole LTP timeout
setting?

#Lock debugging:
CONFIG_PROVE_LOCKING
CONFIG_LOCKDEP
CONFIG_DEBUG_SPINLOCK

#Mutex debugging
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_MUTEXES=y

#Memory debugging:
CONFIG_DEBUG_PAGEALLOC
CONFIG_KASAN
CONFIG_SLUB_RCU_DEBUG

#Tracing and profiling:
CONFIG_TRACE_IRQFLAGS
CONFIG_LATENCYTOP
CONFIG_DEBUG_NET

#Filesystem debugging:
CONFIG_EXT4_DEBUG
CONFIG_QUOTA_DEBUG

#Miscellaneous debugging:
CONFIG_FAULT_INJECTION
CONFIG_DEBUG_OBJECTS

--
Regards,
Li Wang

* [LTP] [Draft PATCH] lib: add TST_DYNAMICAL_RUNTIME option
From: Li Wang @ 2024-11-27 7:48 UTC (permalink / raw)
To: ltp

Hi,

this is a draft patch to reflect the method that came up in the thread;
if people agree I will polish it and send a complete one later.

Signed-off-by: Li Wang <liwang@redhat.com>
---
 include/tst_kconfig.h | 44 +++++++++++++++++++++++++++++++++++++++++++
 include/tst_test.h    |  1 +
 include/tst_timer.h   | 30 +++++++++++++++++++++++++++++
 lib/tst_test.c        | 12 ++++++++----
 4 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/include/tst_kconfig.h b/include/tst_kconfig.h
index 23f807409..8f5bc06a7 100644
--- a/include/tst_kconfig.h
+++ b/include/tst_kconfig.h
@@ -98,4 +98,48 @@ struct tst_kcmdline_var {
  */
 void tst_kcmdline_parse(struct tst_kcmdline_var params[], size_t params_len);
 
+/*
+ * List of debug-related kernel config options that may degrade
+ * performance when enabled.
+ */
+static const char * const tst_kconf_debug_options[][2] = {
+	/* Lock debugging */
+	{"CONFIG_PROVE_LOCKING=y", NULL},
+	{"CONFIG_LOCKDEP=y", NULL},
+	{"CONFIG_DEBUG_SPINLOCK=y", NULL},
+
+	/* Mutexes debugging */
+	{"CONFIG_DEBUG_RT_MUTEXES=y", NULL},
+	{"CONFIG_DEBUG_MUTEXES=y", NULL},
+
+	/* Memory debugging */
+	{"CONFIG_DEBUG_PAGEALLOC=y", NULL},
+	{"CONFIG_KASAN=y", NULL},
+	{"CONFIG_SLUB_RCU_DEBUG=y", NULL},
+
+	/* Tracing and profiling */
+	{"CONFIG_TRACE_IRQFLAGS=y", NULL},
+	{"CONFIG_LATENCYTOP=y", NULL},
+	{"CONFIG_DEBUG_NET=y", NULL},
+
+	/* Filesystem debugging */
+	{"CONFIG_EXT4_DEBUG=y", NULL},
+	{"CONFIG_QUOTA_DEBUG=y", NULL},
+
+	/* Miscellaneous debugging */
+	{"CONFIG_FAULT_INJECTION=y", NULL},
+	{"CONFIG_DEBUG_OBJECTS=y", NULL},
+
+	{NULL, NULL} /* End of the array */
+};
+
+static inline int tst_kconfig_debug_matches(void)
+{
+	int i, num = 1;
+
+	for (i = 0; tst_kconf_debug_options[i][0] != NULL; i++)
+		num += tst_kconfig_check(tst_kconf_debug_options[i]);
+
+	return num;
+}
+
 #endif /* TST_KCONFIG_H__ */
diff --git a/include/tst_test.h b/include/tst_test.h
index 8d1819f74..483b707d3 100644
--- a/include/tst_test.h
+++ b/include/tst_test.h
@@ -235,6 +235,7 @@ struct tst_tag {
 extern unsigned int tst_variant;
 
 #define TST_UNLIMITED_RUNTIME (-1)
+#define TST_DYNAMICAL_RUNTIME (-2)
 
 /**
  * struct tst_ulimit_val - An ulimit resource and value.
diff --git a/include/tst_timer.h b/include/tst_timer.h
index 6fb940020..268fc8389 100644
--- a/include/tst_timer.h
+++ b/include/tst_timer.h
@@ -17,6 +17,7 @@
 #include <mqueue.h>
 #include <time.h>
 #include "tst_test.h"
+#include "tst_clocks.h"
 #include "lapi/common_timers.h"
 #include "lapi/posix_types.h"
 #include "lapi/syscalls.h"
@@ -1074,4 +1075,33 @@ static inline long long tst_timer_elapsed_us(void)
 	return tst_timespec_to_us(tst_timer_elapsed());
 }
 
+#define CALLIBRATE_LOOPS 120000000
+
+/*
+ * Measures the time taken by the CPU to perform a specified
+ * number of empty loops for calibration.
+ */
+static inline int tst_callibrate(void)
+{
+	int i;
+	struct timespec start, stop;
+	long long diff;
+
+	for (i = 0; i < CALLIBRATE_LOOPS; i++)
+		__asm__ __volatile__ ("" : "+g" (i) : :);
+
+	tst_clock_gettime(CLOCK_MONOTONIC_RAW, &start);
+
+	for (i = 0; i < CALLIBRATE_LOOPS; i++)
+		__asm__ __volatile__ ("" : "+g" (i) : :);
+
+	tst_clock_gettime(CLOCK_MONOTONIC_RAW, &stop);
+
+	diff = tst_timespec_diff_us(stop, start);
+
+	tst_res(TINFO, "CPU did %i loops in %llius", CALLIBRATE_LOOPS, diff);
+
+	return diff;
+}
+
 #endif /* TST_TIMER */
diff --git a/lib/tst_test.c b/lib/tst_test.c
index 8db554dea..8a4460944 100644
--- a/lib/tst_test.c
+++ b/lib/tst_test.c
@@ -1265,8 +1265,8 @@ static void do_setup(int argc, char *argv[])
 	if (!tst_test)
 		tst_brk(TBROK, "No tests to run");
 
-	if (tst_test->max_runtime < -1) {
-		tst_brk(TBROK, "Invalid runtime value %i",
+	if (tst_test->max_runtime < -2) {
+		tst_brk(TBROK, "Invalid runtime value %d",
 			results->max_runtime);
 	}
 
@@ -1695,7 +1695,6 @@ unsigned int tst_remaining_runtime(void)
 	return 0;
 }
 
-
 unsigned int tst_multiply_timeout(unsigned int timeout)
 {
 	parse_mul(&timeout_mul, "LTP_TIMEOUT_MUL", 0.099, 10000);
@@ -1715,8 +1714,13 @@ static void set_timeout(void)
 		return;
 	}
 
+	if (results->max_runtime == TST_DYNAMICAL_RUNTIME) {
+		tst_res(TINFO, "Timeout is decided in running time");
+		results->max_runtime = (tst_callibrate() / 1000) * tst_kconfig_debug_matches();
+	}
+
 	if (results->max_runtime < 0) {
-		tst_brk(TBROK, "max_runtime must to be >= -1! (%d)",
+		tst_brk(TBROK, "max_runtime must to be >= -2! (%d)",
 			results->max_runtime);
 	}
 
-- 
2.47.0
* Re: [LTP] [Draft PATCH] lib: add TST_DYNAMICAL_RUNTIME option
From: Li Wang @ 2024-11-27 8:21 UTC (permalink / raw)
To: ltp

Modified starvation.c by adding ".max_runtime = TST_DYNAMICAL_RUNTIME".

Test on general kernel:

# ./starvation
tst_tmpdir.c:316: TINFO: Using /tmp/LTP_stanNtz1s as tmpdir (xfs filesystem)
tst_test.c:1894: TINFO: LTP version: 20240930
tst_test.c:1898: TINFO: Tested kernel: 6.12.0-30.el10.ppc64le #1 SMP Tue Nov 19 13:50:01 EST 2024 ppc64le
tst_test.c:1718: TINFO: Timeout is decided in running time
../include/tst_timer.h:1102: TINFO: CPU did 120000000 loops in 61797us
tst_kconfig.c:88: TINFO: Parsing kernel config '/lib/modules/6.12.0-30.el10.ppc64le/config'
tst_kconfig.c:531: TINFO: Constraint 'CONFIG_PROVE_LOCKING=y' not satisfied!
tst_kconfig.c:477: TINFO: Variables:
tst_kconfig.c:495: TINFO: CONFIG_PROVE_LOCKING=n
tst_kconfig.c:88: TINFO: Parsing kernel config '/lib/modules/6.12.0-30.el10.ppc64le/config'
tst_kconfig.c:531: TINFO: Constraint 'CONFIG_LOCKDEP=y' not satisfied!
tst_kconfig.c:477: TINFO: Variables:
tst_kconfig.c:486: TINFO: CONFIG_LOCKDEP Undefined
...
tst_test.c:1729: TINFO: Timeout per run is 0h 01m 31s
starvation.c:98: TINFO: Setting affinity to CPU 0
starvation.c:146: TPASS: Haven't reproduced scheduler starvation.

Summary:
passed   1
failed   0
broken   0
skipped  0
warnings 0

Test on debug kernel:

# ./starvation
tst_tmpdir.c:316: TINFO: Using /tmp/LTP_staVKoH2k as tmpdir (xfs filesystem)
tst_test.c:1898: TINFO: LTP version: 20240930
tst_test.c:1902: TINFO: Tested kernel: 6.12.0-30.el10.ppc64le+debug #1 SMP Tue Nov 19 13:41:20 EST 2024 ppc64le
tst_test.c:1718: TINFO: Timeout is decided in running time
../include/tst_timer.h:1102: TINFO: CPU did 120000000 loops in 68663us
tst_kconfig.c:88: TINFO: Parsing kernel config '/lib/modules/6.12.0-30.el10.ppc64le+debug/config'
tst_kconfig.c:88: TINFO: Parsing kernel config '/lib/modules/6.12.0-30.el10.ppc64le+debug/config'
...
tst_test.c:1733: TINFO: Timeout per run is 0h 18m 38s
starvation.c:71: TINFO: Setting affinity to CPU 0
starvation.c:116: TPASS: Haven't reproduced scheduler starvation.

Summary:
passed   1
failed   0
broken   0
skipped  0
warnings 0

Li Wang <liwang@redhat.com> wrote:
> +static inline int tst_kconfig_debug_matches(void)
> +{
> +	int i, num = 1;
> +
> +	for (i = 0; tst_kconf_debug_options[i][0] != NULL; i++)
> +		num += tst_kconfig_check(tst_kconf_debug_options[i]);

This should be:

	num += !tst_kconfig_check(tst_kconf_debug_options[i]);

--
Regards,
Li Wang
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Cyril Hrubis @ 2024-11-27 9:46 UTC (permalink / raw)
To: Li Wang; +Cc: Philip Auld, ltp

Hi!
> I have carefully compared the differences between the general kernel
> config file and the debug kernel config file. Below are some options
> that are only enabled in the debug kernel and may cause kernel
> performance degradation.
>
> My rough thought is to create a set of those options: if the SUT kernel
> matches some of them, we reset the timeout using a multiplier applied
> to the value obtained from calibration. E.g. if N of the configs match,
> we use (timeout * N) as the max_runtime.
>
> Or, as a next step, could we extend this method to the whole LTP
> timeout setting?

That actually sounds good to me. If we detect certain kernel options
that are known to slow down process execution, it makes good sense to
multiply the timeouts for all tests directly in the test library.

> #Lock debugging:
> CONFIG_PROVE_LOCKING
> CONFIG_LOCKDEP
> CONFIG_DEBUG_SPINLOCK
>
> #Mutex debugging
> CONFIG_DEBUG_RT_MUTEXES=y
> CONFIG_DEBUG_MUTEXES=y
>
> #Memory debugging:
> CONFIG_DEBUG_PAGEALLOC
> CONFIG_KASAN
> CONFIG_SLUB_RCU_DEBUG
>
> #Tracing and profiling:
> CONFIG_TRACE_IRQFLAGS
> CONFIG_LATENCYTOP
> CONFIG_DEBUG_NET
>
> #Filesystem debugging:
> CONFIG_EXT4_DEBUG
> CONFIG_QUOTA_DEBUG
>
> #Miscellaneous debugging:
> CONFIG_FAULT_INJECTION
> CONFIG_DEBUG_OBJECTS

--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Li Wang @ 2024-11-27 10:08 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: Philip Auld, ltp

On Wed, Nov 27, 2024 at 5:46 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> That actually sounds good to me. If we detect certain kernel options
> that are known to slow down process execution, it makes good sense to
> multiply the timeouts for all tests directly in the test library.

Thanks. After thinking it over, I guess we'd better _only_ apply this
method to some especially slow tests (i.e. tests that time out more
easily). If we examine those kernel options in the library for all
tests, that may be a burden for most quick tests, which always finish in
a few seconds (far less than the default 30s).

Therefore, I came up with a new option for .max_runtime:
TST_DYNAMICAL_RUNTIME, similar to the TST_UNLIMITED_RUNTIME we already
use. A test that sets ".max_runtime = TST_DYNAMICAL_RUNTIME" will try to
find a proper timeout value at run time.

See: https://lists.linux.it/pipermail/ltp/2024-November/040990.html

--
Regards,
Li Wang
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Cyril Hrubis @ 2024-11-27 10:40 UTC (permalink / raw)
To: Li Wang; +Cc: Philip Auld, ltp

Hi!
> After thinking it over, I guess we'd better _only_ apply this method
> to some especially slow tests (i.e. tests that time out more easily).
> If we examine those kernel options in the library for all tests, that
> may be a burden for most quick tests, which always finish in a few
> seconds (far less than the default 30s).
>
> Therefore, I came up with a new option for .max_runtime:
> TST_DYNAMICAL_RUNTIME, similar to the TST_UNLIMITED_RUNTIME we already
> use. A test that sets ".max_runtime = TST_DYNAMICAL_RUNTIME" will try
> to find a proper timeout value at run time.

I was thinking of only multiplying the max_runtime defined by the test
in the library. That way only slow tests that set max_runtime would be
affected.

--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
From: Li Wang @ 2024-11-27 10:56 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: Philip Auld, ltp

On Wed, Nov 27, 2024 at 6:40 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> I was thinking of only multiplying the max_runtime defined by the test
> in the library. That way only slow tests that set max_runtime would be
> affected.

OK, a non-zero max_runtime also indicates the test is slower. I will
apply that only to tests with a non-zero '.max_runtime' and resend a
patch.

Thanks!

--
Regards,
Li Wang
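The approach settled on at the end of the thread — scale only a test's own max_runtime by the debug-option multiplier, and leave tests without one untouched — can be sketched as a tiny helper. This is a hypothetical summary sketch, not the final LTP implementation; `debug_opts` stands in for the count the real kconfig lookup would return.

```c
/*
 * Scale a test's declared max_runtime (in seconds) by the number of
 * performance-degrading debug kernel options detected. Quick tests that
 * define no max_runtime keep the library default untouched.
 */
static unsigned int scale_max_runtime(unsigned int max_runtime,
				      unsigned int debug_opts)
{
	if (max_runtime == 0)
		return 0;

	/* One extra multiple per matched debug option, minimum 1x. */
	return max_runtime * (1 + debug_opts);
}
```

For example, a test with a 300s max_runtime on a kernel matching three of the listed debug options would get 1200s, while the same test on a non-debug kernel keeps its 300s.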