From: Thomas Gleixner
To: Mathieu Desnoyers, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Paul E. McKenney, Boqun Feng,
    H. Peter Anvin, Paul Turner, linux-api@vger.kernel.org,
    Christian Brauner, Florian Weimer, David.Laight@ACULAB.COM,
    carlos@redhat.com, Peter Oskolkov, Alexander Mikhalitsyn,
    Chris Kennelly, Ingo Molnar, Darren Hart, Davidlohr Bueso,
    André Almeida, libc-alpha@sourceware.org, Steven Rostedt,
    Jonathan Corbet, Noah Goldstein, Daniel Colascione,
    longman@redhat.com, Mathieu Desnoyers, Florian Weimer
Subject: Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
Date: Thu, 28 Sep 2023 22:21:49 +0200
Message-ID: <87r0midp5u.ffs@tglx>
In-Reply-To: <20230529191416.53955-2-mathieu.desnoyers@efficios.com>
References: <20230529191416.53955-1-mathieu.desnoyers@efficios.com>
    <20230529191416.53955-2-mathieu.desnoyers@efficios.com>

On Mon, May 29 2023 at 15:14, Mathieu Desnoyers wrote:
> +void __rseq_set_sched_state(struct task_struct *t, unsigned int state);
> +
> +static inline void rseq_set_sched_state(struct task_struct *t, unsigned int state)
> +{
> +	if (t->rseq_sched_state)
> +		__rseq_set_sched_state(t, state);

This is invoked on every context switch and writes over that state
unconditionally, even in the case that the state was already cleared.
There are enough situations where tasks are scheduled out several times
while being in the kernel.
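A minimal, untested sketch of the alternative: cache the last value
handed to __rseq_set_sched_state() and skip the update when nothing
changed, so a task which is scheduled out several times while in the
kernel does not rewrite the same value over and over. The
rseq_sched_state_last member is made up here for illustration; it is
not part of the RFC patch:

static inline void rseq_set_sched_state(struct task_struct *t,
					unsigned int state)
{
	/* Nothing to do when no rseq sched_state is registered */
	if (!t->rseq_sched_state)
		return;

	/* Skip the store when the published state did not change */
	if (t->rseq_sched_state_last == state)
		return;

	t->rseq_sched_state_last = state;
	__rseq_set_sched_state(t, state);
}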
>  /* rseq_preempt() requires preemption to be disabled. */
>  static inline void rseq_preempt(struct task_struct *t)
>  {
>  	__set_bit(RSEQ_EVENT_PREEMPT_BIT, &t->rseq_event_mask);
>  	rseq_set_notify_resume(t);
> +	rseq_set_sched_state(t, 0);

This code is already stupid to begin with. __set_bit() is cheap, but
rseq_set_notify_resume() is not, as it has a conditional and a locked
instruction, and now you add two more conditionals into the context
switch path.
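For reference, rseq_set_notify_resume() in mainline is roughly the
below (paraphrased from the kernel sources, not quoted verbatim): the
t->rseq check is the conditional, and set_tsk_thread_flag() ends up in
an atomic set_bit(), i.e. a locked RMW on x86:

static inline void rseq_set_notify_resume(struct task_struct *t)
{
	/* Only tasks with a registered rseq area need the TIF flag */
	if (t->rseq)
		set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
}

Thanks,

        tglx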