From: Valentin Schneider
To: Joel Fernandes
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
 bpf@vger.kernel.org, x86@kernel.org, rcu@vger.kernel.org,
 linux-kselftest@vger.kernel.org, Peter Zijlstra, Nicolas Saenz Julienne,
 Steven Rostedt, Masami Hiramatsu, Jonathan Corbet, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
 Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov, Andy Lutomirski,
 Frederic Weisbecker, "Paul E. McKenney", Neeraj Upadhyay, Josh Triplett,
 Boqun Feng, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Andrew Morton,
 Uladzislau Rezki, Christoph Hellwig, Lorenzo Stoakes, Josh Poimboeuf,
 Jason Baron, Kees Cook, Sami Tolvanen, Ard Biesheuvel, Nicholas Piggin,
 Juerg Haefliger, Nicolas Saenz Julienne, "Kirill A. Shutemov",
 Nadav Amit, Dan Carpenter, Chuang Wang, Yang Jihong, Petr Mladek,
 "Jason A. Donenfeld", Song Liu, Julian Pidancet, Tom Lendacky,
 Dionna Glaze, Thomas Weißschuh, Juri Lelli, Daniel Bristot de Oliveira,
 Marcelo Tosatti, Yair Podemsky
Subject: Re: [RFC PATCH v2 18/20] context_tracking,x86: Defer kernel text patching IPIs
In-Reply-To: <6EBAEEED-6F38-472D-BA31-9C61179EFA2F@joelfernandes.org>
References: <20230720163056.2564824-19-vschneid@redhat.com> <6EBAEEED-6F38-472D-BA31-9C61179EFA2F@joelfernandes.org>
Date: Tue, 25 Jul 2023 14:36:59 +0100

On 25/07/23 06:49, Joel Fernandes wrote:
> Interesting series Valentin. Some high-level question/comments on this one:
>
>> On Jul 20, 2023, at 12:34 PM, Valentin Schneider wrote:
>>
>> text_poke_bp_batch() sends IPIs to all online CPUs to synchronize
>> them vs the newly patched instruction. CPUs that are executing in userspace
>> do not need this synchronization to happen immediately, and this is
>> actually harmful interference for NOHZ_FULL CPUs.
>
> Does the amount of harm not correspond to practical frequency of text_poke?
> How often does instruction patching really happen? If it is very infrequent
> then I am not sure if it is that harmful.
>

Being pushed over a latency threshold *once* is enough to impact the
latency evaluation of your given system/application.
It's mainly about shielding the isolated, NOHZ_FULL CPUs from whatever the
housekeeping CPUs may be up to (flipping static keys, loading kprobes,
using ftrace...) - frequency of the interference isn't such a big part of
the reasoning.

>>
>> As the synchronization IPIs are sent using a blocking call, returning from
>> text_poke_bp_batch() implies all CPUs will observe the patched
>> instruction(s), and this should be preserved even if the IPI is deferred.
>> In other words, to safely defer this synchronization, any kernel
>> instruction leading to the execution of the deferred instruction
>> sync (ct_work_flush()) must *not* be mutable (patchable) at runtime.
>
> If it is not infrequent, then are you handling the case where userland
> spends multiple seconds before entering the kernel, and all this while
> the blocking call waits? Perhaps in such situation you want the real IPI
> to be sent out instead of the deferred one?
>

The blocking call only waits for CPUs for which it queued a CSD. Deferred
calls do not queue a CSD thus do not impact the waiting at all. See
smp_call_function_many_cond().