Date: Mon, 29 May 2023 11:52:46 +0900
From: Masami Hiramatsu (Google)
To: Steven Rostedt
Cc: LKML, x86@kernel.org, Masami Hiramatsu, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra
Subject: Re: [PATCH] x86/alternatives: Add cond_resched() to text_poke_bp_batch()
Message-Id: <20230529115246.a61734ce4e6d7644e2faec72@kernel.org>
In-Reply-To: <20230528084652.5f3b48f0@rorschach.local.home>
References: <20230528084652.5f3b48f0@rorschach.local.home>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, 28 May 2023 08:46:52 -0400
Steven Rostedt wrote:

> From: "Steven Rostedt (Google)"
>
> Debugging in the kernel has started slowing down the kernel by a
> noticeable amount. The ftrace start up tests are triggering the softlockup
> watchdog on some boxes. This is caused by the start up tests that enable
> function and function graph tracing several times. Sprinkling
> cond_resched() just in the start up test code was not enough to stop the
> softlockup from triggering. It would sometimes trigger in the
> text_poke_bp_batch() code.
>
> The text_poke_bp_batch() is run in schedulable context. Add
> cond_resched() between each phase (adding the int3, updating the code, and
> removing the int3). This keeps the softlockup from triggering in the start
> up tests.
>
> Signed-off-by: Steven Rostedt (Google)
> ---
>  arch/x86/kernel/alternative.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index f615e0cb6d93..e024eddd457f 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -1953,6 +1953,14 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
>  	 */
>  	atomic_set_release(&bp_desc.refs, 1);
>
> +	/*
> +	 * Function tracing can enable thousands of places that need to be
> +	 * updated. This can take quite some time, and with full kernel debugging
> +	 * enabled, this could cause the softlockup watchdog to trigger.
> +	 * Add cond_resched() calls to each phase.
> +	 */
> +	cond_resched();

Hmm, why don't you put this between the first step (put int3) and the
second step (put other bytes)? I guess those would take more time.

Thank you,

> +
>  	/*
>  	 * Corresponding read barrier in int3 notifier for making sure the
>  	 * nr_entries and handler are correctly ordered wrt. patching.
> @@ -2030,6 +2038,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
>  		 * better safe than sorry (plus there's not only Intel).
>  		 */
>  		text_poke_sync();
> +		cond_resched();
>  	}
>
>  	/*
> @@ -2049,8 +2058,10 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
>  			do_sync++;
>  	}
>
> -	if (do_sync)
> +	if (do_sync) {
>  		text_poke_sync();
> +		cond_resched();
> +	}
>
>  	/*
>  	 * Remove and wait for refs to be zero.
> --
> 2.39.2

-- 
Masami Hiramatsu (Google)