From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Nov 2024 15:23:14 +0100
From: Frederic Weisbecker
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, bpf@vger.kernel.org, x86@kernel.org, rcu@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Nicolas Saenz Julienne, Steven Rostedt,
	Masami Hiramatsu, Jonathan Corbet, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, "H. Peter Anvin", Paolo Bonzini, Wanpeng Li,
	Vitaly Kuznetsov, Andy Lutomirski, Peter Zijlstra, "Paul E. McKenney",
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Mathieu Desnoyers, Lai Jiangshan, Zqiang, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, Lorenzo Stoakes, Josh Poimboeuf, Jason Baron, Kees Cook,
	Sami Tolvanen, Ard Biesheuvel, Nicholas Piggin, Juerg Haefliger,
	Nicolas Saenz Julienne, "Kirill A. Shutemov", Nadav Amit, Dan Carpenter,
	Chuang Wang, Yang Jihong, Petr Mladek, "Jason A. Donenfeld", Song Liu,
	Julian Pidancet, Tom Lendacky, Dionna Glaze, Thomas Weißschuh, Juri Lelli,
	Marcelo Tosatti, Yair Podemsky, Daniel Wagner, Petr Tesarik
Subject: Re: [RFC PATCH v3 11/15] context-tracking: Introduce work deferral infrastructure
References: <20241119153502.41361-1-vschneid@redhat.com>
	<20241119153502.41361-12-vschneid@redhat.com>

On Wed, Nov 20, 2024 at 11:54:36AM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 19, 2024 at 04:34:58PM +0100, Valentin Schneider wrote:
> > +bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> > +{
> > +	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> > +	unsigned int old;
> > +	bool ret = false;
> > +
> > +	preempt_disable();
> > +
> > +	old = atomic_read(&ct->state);
> > +	/*
> > +	 * Try setting the work until either
> > +	 * - the target CPU has entered kernelspace
> > +	 * - the work has been set
> > +	 */
> > +	do {
> > +		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> > +	} while (!ret && ((old & CT_STATE_MASK) != CT_STATE_KERNEL));
> > +
> > +	preempt_enable();
> > +	return ret;
>
> Does it ignore the IPI even if:
>
> 	(ret && ((old & CT_STATE_MASK) == CT_STATE_KERNEL))
>
> ?
>
> And what about CT_STATE_IDLE?
>
> Is the work ignored in those two cases?
>
> But would it be cleaner to never set the work if the target is elsewhere
> than CT_STATE_USER? Then you don't need to clear the work on kernel exit
> but rather on kernel entry.
> That is:
>
> bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> {
> 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> 	unsigned int old;
> 	bool ret = false;
>
> 	preempt_disable();
>
> 	old = atomic_read(&ct->state);
>
> 	/* Start with our best wishes */
> 	old &= ~CT_STATE_MASK;
> 	old |= CT_STATE_USER;
>
> 	/*
> 	 * Try setting the work until either
> 	 * - the target CPU has exited userspace
> 	 * - the work has been set
> 	 */
> 	do {
> 		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> 	} while (!ret && ((old & CT_STATE_MASK) == CT_STATE_USER));
>
> 	preempt_enable();
>
> 	return ret;
> }

Ah but there is CT_STATE_GUEST, and I see the last patch also applies
that to CT_STATE_IDLE. So that could be:

bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
{
	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
	unsigned int old;
	bool ret = false;

	preempt_disable();

	old = atomic_read(&ct->state);

	/* CT_STATE_IDLE can be added to last patch here */
	if (!(old & (CT_STATE_USER | CT_STATE_GUEST))) {
		old &= ~CT_STATE_MASK;
		old |= CT_STATE_USER;
	}

	/*
	 * Try setting the work until either
	 * - the target CPU has exited userspace / guest
	 * - the work has been set
	 */
	do {
		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
	} while (!ret && (old & (CT_STATE_USER | CT_STATE_GUEST)));

	preempt_enable();

	return ret;
}

Thanks.