Date: Tue, 10 Jun 2025 01:26:46 +0200
From: Frederic Weisbecker
To: Boqun Feng
Cc: Joel Fernandes, linux-kernel@vger.kernel.org, "Paul E. McKenney",
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
	Xiongfeng Wang, rcu@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH 2/2] rcu: Fix lockup when RCU reader used while IRQ exiting
References: <20250609180125.2988129-1-joelagnelf@nvidia.com>
 <20250609180125.2988129-2-joelagnelf@nvidia.com>

On Mon, Jun 09, 2025 at 12:49:06PM -0700, Boqun Feng wrote:
> Hi Joel,
>
> On Mon, Jun 09, 2025 at 02:01:24PM -0400, Joel Fernandes wrote:
> > During rcu_read_unlock_special(), if this happens during irq_exit(),
> > we can lock up if an IPI is issued. This is because the IPI itself
> > triggers the irq_exit() path, causing a recursive lockup.
> >
> > This is precisely what Xiongfeng found when invoking a BPF program on
> > the trace_tick_stop() tracepoint, as shown in the trace below. Fix by
> > using context tracking to tell us whether we are still in an IRQ.
> > Context tracking keeps track of the IRQ until after the tracepoint,
> > so it cures the issue.
> >
>
> This does fix the issue, but do we know when the CPU will eventually
> report a QS after this fix? I believe we still want to report a QS as
> early as possible in this case?

If !ct_in_irq(), we issue a self-IPI, then preempt_schedule_irq() will
call into schedule() and report a QS (if preempt/bh is not disabled;
otherwise this is delayed until preempt_enable() or local_bh_enable()
issues preempt_schedule()).

If ct_in_irq(), we are already in an IRQ, so eventually it's the same
as above.

Thanks.

--
Frederic Weisbecker
SUSE Labs
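
For reference, a minimal sketch of the control flow discussed above.
It assumes the ct_in_irq() helper from this thread; the function name
and the defer_qs_iw irq_work field are illustrative placeholders, and
this is not the posted patch.

/*
 * Sketch only: guard the self-IPI used to report a deferred QS from
 * rcu_read_unlock_special() with the context-tracking check.
 */
static void rcu_defer_qs_sketch(struct rcu_data *rdp)
{
	if (ct_in_irq()) {
		/*
		 * Context tracking still marks this CPU as in an IRQ,
		 * i.e. we are somewhere on the irq_exit() path (e.g.
		 * in a BPF program attached to the trace_tick_stop()
		 * tracepoint).  A self-IPI raised here would itself
		 * run irq_exit() and recurse -- the lockup Xiongfeng
		 * reported.  Do nothing: the outer IRQ return will go
		 * through preempt_schedule_irq() -> schedule() and
		 * report the QS.
		 */
		return;
	}

	/*
	 * Not in an IRQ: safe to raise the self-IPI.  The resulting
	 * interrupt return reports the QS via schedule(), or defers
	 * it to preempt_enable()/local_bh_enable() if preemption or
	 * BH is currently disabled.
	 */
	irq_work_queue_on(&rdp->defer_qs_iw, smp_processor_id());
}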