From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Huiwen He, Masami Hiramatsu, Mathieu Desnoyers, "Steven Rostedt (Google)", Sasha Levin
Subject: [PATCH 6.1.y] tracing: Fix syscall events activation by ensuring refcount hits zero
Date: Wed, 18 Mar 2026 07:31:08 -0400
Message-ID: <20260318113108.626781-1-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <2026031719-flail-unleveled-35e4@gregkh>
References: <2026031719-flail-unleveled-35e4@gregkh>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Huiwen He

[ Upstream commit 0a663b764dbdf135a126284f454c9f01f95a87d4 ]

When multiple syscall events are specified in the kernel command line
(e.g., trace_event=syscalls:sys_enter_openat,syscalls:sys_enter_close),
they are often not captured after boot, even though they appear enabled
in the tracing/set_event file.

The issue stems from how syscall events are initialized. Syscall
tracepoints require the global reference count (sys_tracepoint_refcount)
to transition from 0 to 1 to trigger the registration of the syscall
work (TIF_SYSCALL_TRACEPOINT) for tasks, including the init process
(pid 1).

The current implementation of early_enable_events() with
disable_first=true used an interleaved sequence of
"Disable A -> Enable A -> Disable B -> Enable B". If multiple syscalls
are enabled, the refcount never drops to zero, preventing the 0->1
transition that triggers actual registration.

Fix this by splitting early_enable_events() into two distinct phases:

1. Disable all events specified in the buffer.
2. Enable all events specified in the buffer.

This ensures the refcount hits zero before re-enabling, allowing
syscall events to be properly activated during early boot.
The code is also refactored to use a helper function to avoid logic
duplication between the disable and enable phases.

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mathieu Desnoyers
Link: https://patch.msgid.link/20260224023544.1250787-1-hehuiwen@kylinos.cn
Fixes: ce1039bd3a89 ("tracing: Fix enabling of syscall events on the command line")
Signed-off-by: Huiwen He
Signed-off-by: Steven Rostedt (Google)
Signed-off-by: Sasha Levin
---
 kernel/trace/trace_events.c | 51 ++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 55623a9bb64ac..c4c900b69f061 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -3862,27 +3862,23 @@ static __init int event_trace_memsetup(void)
 	return 0;
 }
 
-static __init void
-early_enable_events(struct trace_array *tr, bool disable_first)
+/*
+ * Helper function to enable or disable a comma-separated list of events
+ * from the bootup buffer.
+ */
+static __init void __early_set_events(struct trace_array *tr, bool enable)
 {
 	char *buf = bootup_event_buf;
 	char *token;
-	int ret;
-
-	while (true) {
-		token = strsep(&buf, ",");
-
-		if (!token)
-			break;
-
+
+	while ((token = strsep(&buf, ","))) {
 		if (*token) {
-			/* Restarting syscalls requires that we stop them first */
-			if (disable_first)
+			if (enable) {
+				if (ftrace_set_clr_event(tr, token, 1))
+					pr_warn("Failed to enable trace event: %s\n", token);
+			} else {
 				ftrace_set_clr_event(tr, token, 0);
-
-			ret = ftrace_set_clr_event(tr, token, 1);
-			if (ret)
-				pr_warn("Failed to enable trace event: %s\n", token);
+			}
 		}
 
 		/* Put back the comma to allow this to be called again */
@@ -3891,6 +3887,31 @@ early_enable_events(struct trace_array *tr, bool disable_first)
 	}
 }
 
+/**
+ * early_enable_events - enable events from the bootup buffer
+ * @tr: The trace array to enable the events in
+ * @disable_first: If true, disable all events before enabling them
+ *
+ * This function enables events from the bootup buffer. If @disable_first
+ * is true, it will first disable all events in the buffer before enabling
+ * them.
+ *
+ * For syscall events, which rely on a global refcount to register the
+ * SYSCALL_WORK_SYSCALL_TRACEPOINT flag (especially for pid 1), we must
+ * ensure the refcount hits zero before re-enabling them. A simple
+ * "disable then enable" per-event is not enough if multiple syscalls are
+ * used, as the refcount will stay above zero. Thus, we need a two-phase
+ * approach: disable all, then enable all.
+ */
+static __init void
+early_enable_events(struct trace_array *tr, bool disable_first)
+{
+	if (disable_first)
+		__early_set_events(tr, false);
+
+	__early_set_events(tr, true);
+}
+
 static __init int event_trace_enable(void)
 {
 	struct trace_array *tr = top_trace_array();
-- 
2.51.0