Message-ID: <20260404022639.705588294@kernel.org>
Date: Fri, 03 Apr 2026 22:26:19 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, Vincent Donnefort
Subject: [for-next][PATCH 2/6] ring-buffer: Enforce read ordering of trace_buffer cpumask and buffers
References: <20260404022617.436859059@kernel.org>

From: Vincent Donnefort

On CPU hotplug, if it is the first time a trace_buffer sees a CPU, a
ring_buffer_per_cpu will be allocated and its corresponding bit toggled in
the cpumask. Many readers check this cpumask to know whether they can
safely read the ring_buffer_per_cpu, but they do so without memory
ordering and may observe the cpumask bit set while the buffer pointer is
still NULL.

Enforce the memory read ordering by sending an IPI to all online CPUs. The
hotplug path is a slow path anyway, and this saves us from adding read
barriers at numerous call sites.
Link: https://patch.msgid.link/20260401053659.3458961-1-vdonnefort@google.com
Signed-off-by: Vincent Donnefort
Suggested-by: Steven Rostedt (Google)
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 8b6c39bba56d..2caa5d3d0ae9 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -7722,6 +7722,12 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
 	return 0;
 }
 
+static void rb_cpu_sync(void *data)
+{
+	/* Not really needed, but documents what is happening */
+	smp_rmb();
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in
@@ -7760,7 +7766,18 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
 			cpu);
 		return -ENOMEM;
 	}
-	smp_wmb();
+
+	/*
+	 * Ensure trace_buffer readers observe the newly allocated
+	 * ring_buffer_per_cpu before they check the cpumask. Instead of using a
+	 * read barrier for all readers, send an IPI.
+	 */
+	if (unlikely(system_state == SYSTEM_RUNNING)) {
+		on_each_cpu(rb_cpu_sync, NULL, 1);
+		/* Not really needed, but documents what is happening */
+		smp_wmb();
+	}
+
 	cpumask_set_cpu(cpu, buffer->cpumask);
 	return 0;
 }
-- 
2.51.0