Sched_ext development
From: Andrea Righi <arighi@nvidia.com>
To: Cheng-Yang Chou <yphbchou0911@gmail.com>
Cc: sched-ext@lists.linux.dev, Tejun Heo <tj@kernel.org>,
	David Vernet <void@manifault.com>,
	Changwoo Min <changwoo@igalia.com>,
	Ching-Chun Huang <jserv@ccns.ncku.edu.tw>,
	Chia-Ping Tsai <chia7712@gmail.com>
Subject: Re: [PATCH sched_ext/for-7.2] selftests/sched_ext: Fix select_cpu_dfl link leak on early return
Date: Fri, 8 May 2026 10:08:47 +0200	[thread overview]
Message-ID: <af2aD8-tFi_yhPOV@gpd4> (raw)
In-Reply-To: <20260508065514.1723906-1-yphbchou0911@gmail.com>

Hi Cheng-Yang,

On Fri, May 08, 2026 at 02:55:12PM +0800, Cheng-Yang Chou wrote:
> If run() exits early via SCX_EQ()/SCX_ASSERT() (which return from the
> function directly on failure), bpf_link__destroy() is never reached
> and the BPF scheduler stays loaded. All subsequent tests then fail to
> attach because SCX is not in the DISABLED state.
> 
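Right. For context, the assert helpers in scx_test.h bail out of the
current function with a plain return on failure, roughly along these
lines (simplified sketch, not the exact macros):

	/*
	 * Simplified sketch: on failure the macro returns from the
	 * enclosing function, so a bpf_link held in a local variable
	 * in run() never reaches bpf_link__destroy().
	 */
	#define SCX_ASSERT(cond)					\
		do {							\
			if (!(cond)) {					\
				fprintf(stderr, "assert failed: %s\n",	\
					#cond);				\
				return SCX_TEST_FAIL;			\
			}						\
		} while (0)

Any teardown that lives in run() after such a check is silently
skipped.
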
> Move the bpf_link into a context struct so that cleanup() always
> destroys it, regardless of how run() exits. Also skip waitpid() for
> children whose fork() returned -1, so that a stray waitpid(-1, ...)
> cannot reap an unrelated child and trigger the early-return path.
> 
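The waitpid() part makes sense too: per waitpid(2), a pid argument of
-1 means "wait for any child process", so a failed fork() left in
pids[] silently turns the per-pid wait into a wait() on a sibling from
the same array:

	pids[i] = fork();	/* fork() failed, pids[i] == -1 */
	...
	/* behaves as waitpid(-1, ...): reaps *any* child */
	SCX_EQ(waitpid(pids[i], &status, 0), pids[i]);

waitpid() then returns the pid of whichever child it reaped, which can
never equal -1, so SCX_EQ() fails and we hit exactly the early-return
path fixed here (and the reaped sibling's own waitpid() later fails
with ECHILD on top of that).
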
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>

Looks good to me.

Reviewed-by: Andrea Righi <arighi@nvidia.com>

I think other selftests that attach their link in run() may need a
similar change: a leaked link keeps the scheduler loaded, so a single
failure cascades into attach failures in every subsequent test.
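
The shape to replicate is probably the same as in this patch: make
cleanup() the single owner of both the skeleton and the link, along
these lines (hypothetical "foo" test, just to sketch the idea):

	static void cleanup(void *ctx)
	{
		struct foo_ctx *tctx = ctx;	/* hypothetical context type */

		if (tctx->link)
			bpf_link__destroy(tctx->link);
		foo__destroy(tctx->skel);
		free(tctx);
	}

so that an SCX_ASSERT()-style early return from run() still unloads
the scheduler before the next test tries to attach.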

Thanks,
-Andrea

> ---
> I was hitting this error while testing the defer SCX bandwidth patchset,
> though I don't think the bug would be hit under regular testing. Tested
> this on the for-next branch and everything still looks fine.
> 
>  .../selftests/sched_ext/select_cpu_dfl.c      | 54 +++++++++++++------
>  1 file changed, 38 insertions(+), 16 deletions(-)
> 
> diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.c
> index 5b6e045e1109..7e342c0cec65 100644
> --- a/tools/testing/selftests/sched_ext/select_cpu_dfl.c
> +++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.c
> @@ -6,6 +6,7 @@
>   */
>  #include <bpf/bpf.h>
>  #include <scx/common.h>
> +#include <stdlib.h>
>  #include <sys/wait.h>
>  #include <unistd.h>
>  #include "select_cpu_dfl.bpf.skel.h"
> @@ -13,29 +14,44 @@
>  
>  #define NUM_CHILDREN 1028
>  
> +struct select_cpu_dfl_ctx {
> +	struct select_cpu_dfl	*skel;
> +	struct bpf_link		*link;
> +};
> +
>  static enum scx_test_status setup(void **ctx)
>  {
> -	struct select_cpu_dfl *skel;
> +	struct select_cpu_dfl_ctx *tctx;
> +
> +	tctx = malloc(sizeof(*tctx));
> +	SCX_FAIL_IF(!tctx, "Failed to allocate test context");
> +	tctx->link = NULL;
>  
> -	skel = select_cpu_dfl__open();
> -	SCX_FAIL_IF(!skel, "Failed to open");
> -	SCX_ENUM_INIT(skel);
> -	SCX_FAIL_IF(select_cpu_dfl__load(skel), "Failed to load skel");
> +	tctx->skel = select_cpu_dfl__open();
> +	if (!tctx->skel) {
> +		free(tctx);
> +		SCX_FAIL("Failed to open");
> +	}
> +	SCX_ENUM_INIT(tctx->skel);
> +	if (select_cpu_dfl__load(tctx->skel)) {
> +		select_cpu_dfl__destroy(tctx->skel);
> +		free(tctx);
> +		SCX_FAIL("Failed to load skel");
> +	}
>  
> -	*ctx = skel;
> +	*ctx = tctx;
>  
>  	return SCX_TEST_PASS;
>  }
>  
>  static enum scx_test_status run(void *ctx)
>  {
> -	struct select_cpu_dfl *skel = ctx;
> -	struct bpf_link *link;
> +	struct select_cpu_dfl_ctx *tctx = ctx;
>  	pid_t pids[NUM_CHILDREN];
> -	int i, status;
> +	int i, status, nforked = 0;
>  
> -	link = bpf_map__attach_struct_ops(skel->maps.select_cpu_dfl_ops);
> -	SCX_FAIL_IF(!link, "Failed to attach scheduler");
> +	tctx->link = bpf_map__attach_struct_ops(tctx->skel->maps.select_cpu_dfl_ops);
> +	SCX_FAIL_IF(!tctx->link, "Failed to attach scheduler");
>  
>  	for (i = 0; i < NUM_CHILDREN; i++) {
>  		pids[i] = fork();
> @@ -43,25 +59,31 @@ static enum scx_test_status run(void *ctx)
>  			sleep(1);
>  			exit(0);
>  		}
> +		if (pids[i] > 0)
> +			nforked++;
>  	}
>  
>  	for (i = 0; i < NUM_CHILDREN; i++) {
> +		if (pids[i] <= 0)
> +			continue;
>  		SCX_EQ(waitpid(pids[i], &status, 0), pids[i]);
>  		SCX_EQ(status, 0);
>  	}
>  
> -	SCX_ASSERT(!skel->bss->saw_local);
> -
> -	bpf_link__destroy(link);
> +	SCX_GT(nforked, 0);
> +	SCX_ASSERT(!tctx->skel->bss->saw_local);
>  
>  	return SCX_TEST_PASS;
>  }
>  
>  static void cleanup(void *ctx)
>  {
> -	struct select_cpu_dfl *skel = ctx;
> +	struct select_cpu_dfl_ctx *tctx = ctx;
>  
> -	select_cpu_dfl__destroy(skel);
> +	if (tctx->link)
> +		bpf_link__destroy(tctx->link);
> +	select_cpu_dfl__destroy(tctx->skel);
> +	free(tctx);
>  }
>  
>  struct scx_test select_cpu_dfl = {
> -- 
> 2.48.1
> 

Thread overview: 5+ messages
2026-05-08  6:55 [PATCH sched_ext/for-7.2] selftests/sched_ext: Fix select_cpu_dfl link leak on early return Cheng-Yang Chou
2026-05-08  8:08 ` Andrea Righi [this message]
2026-05-08  9:54   ` Cheng-Yang Chou
2026-05-08 15:45 ` Tejun Heo
2026-05-08 18:20 ` sashiko-bot
