public inbox for ltp@lists.linux.it
From: Cyril Hrubis <chrubis@suse.cz>
To: Li Wang <liwang@redhat.com>
Cc: ltp@lists.linux.it
Subject: Re: [LTP] LTP Release preparations
Date: Thu, 4 Sep 2025 11:53:08 +0200	[thread overview]
Message-ID: <aLlhhJcdbx7mPQX_@yuki.lan> (raw)
In-Reply-To: <CAEemH2dzju5n1FZ8TyG0=YBJY-A80VD7Sv1PLJZuj_AYNERYvg@mail.gmail.com>

Hi!
> This makes sense! However, from my extensive testing, I still see
> occasional failures on KVM/debug platforms.
> 
> I suspect the existing barriers ensure all threads are created before
> the game starts, but small scheduler skews can still allow the attacking
> thread to run for a few cycles before the defending thread migrates,
> especially on debug/RT kernels.

I guess there is no defined order in which the threads are woken up
after sleeping on the barrier. Hence if, by chance, the low prio thread
wakes up before all the high prio threads are awake, it manages to run
for a few cycles.

> So, based on this observation, we might need an additional spin: all
> player threads (offense, defense, fans) wait at the barrier and then
> spin until the referee kicks off the ball.
> 
> --- a/testcases/realtime/func/sched_football/sched_football.c
> +++ b/testcases/realtime/func/sched_football/sched_football.c
> @@ -44,6 +44,7 @@
>  static tst_atomic_t the_ball;
>  static int players_per_team = 0;
>  static int game_length = DEF_GAME_LENGTH;
> +static tst_atomic_t kickoff_flag;
>  static tst_atomic_t game_over;
> 
>  static char *str_game_length;
> @@ -55,6 +56,9 @@ void *thread_fan(void *arg LTP_ATTRIBUTE_UNUSED)
>  {
>         prctl(PR_SET_NAME, "crazy_fan", 0, 0, 0);
>         pthread_barrier_wait(&start_barrier);
> +       while (!tst_atomic_load(&kickoff_flag))
> +               ;
> +
>         /*occasionally wake up and run across the field */
>         while (!tst_atomic_load(&game_over)) {
>                 struct timespec start, stop;
> @@ -80,6 +84,9 @@ void *thread_defense(void *arg LTP_ATTRIBUTE_UNUSED)
>  {
>         prctl(PR_SET_NAME, "defense", 0, 0, 0);
>         pthread_barrier_wait(&start_barrier);
> +       while (!tst_atomic_load(&kickoff_flag))
> +               ;
> +
>         /*keep the ball from being moved */
>         while (!tst_atomic_load(&game_over)) {
>         }
> @@ -92,6 +99,9 @@ void *thread_offense(void *arg LTP_ATTRIBUTE_UNUSED)
>  {
>         prctl(PR_SET_NAME, "offense", 0, 0, 0);
>         pthread_barrier_wait(&start_barrier);
> +       while (!tst_atomic_load(&kickoff_flag))
> +               ;
> +
>         while (!tst_atomic_load(&game_over)) {
>                 tst_atomic_add_return(1, &the_ball); /* move the ball ahead one yard */
>         }
> @@ -115,9 +125,10 @@ void referee(int game_length)
>         now = start;
> 
>         /* Start the game! */
> -       tst_atomic_store(0, &the_ball);
> -       pthread_barrier_wait(&start_barrier);
>         atrace_marker_write("sched_football", "Game_started!");
> +       pthread_barrier_wait(&start_barrier);
> +       tst_atomic_store(0, &the_ball);
> +       tst_atomic_store(1, &kickoff_flag);

Is this really 100% bulletproof? Now the threads are going to wait for
the referee for the kickoff, but if the referee is the first thread to
be woken up after the barrier, the order is still not guaranteed.

Maybe we can just do a short sleep here to make sure that the
scheduler kicks in and redistributes the threads. I would say something
like 20ms (since with CONFIG_HZ=100 we have scheduler ticks every 10ms).

-- 
Cyril Hrubis
chrubis@suse.cz

-- 
Mailing list info: https://lists.linux.it/listinfo/ltp
