Message-ID: <229ac2ba-dd5b-4735-af93-8ef8efb6fa02@efficios.com>
Date: Sun, 29 Sep 2024 06:36:16 -0400
Subject: Re: [PATCH 1/2] compiler.h: Introduce ptr_eq() to preserve address dependency
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Gary Guo
Cc: Linus Torvalds, linux-kernel@vger.kernel.org, Greg Kroah-Hartman,
 Sebastian Andrzej Siewior, "Paul E. McKenney", Will Deacon,
 Peter Zijlstra, Boqun Feng, Alan Stern, John Stultz, Neeraj Upadhyay,
 Frederic Weisbecker, Joel Fernandes, Josh Triplett, Uladzislau Rezki,
 Steven Rostedt, Lai Jiangshan, Zqiang, Ingo Molnar, Waiman Long,
 Mark Rutland, Thomas Gleixner, Vlastimil Babka, maged.michael@gmail.com,
 Mateusz Guzik, Jonas Oberhauser, rcu@vger.kernel.org, linux-mm@kvack.org,
 lkmm@lists.linux.dev, github@npopov.com, llvm@lists.linux.dev
In-Reply-To: <20240929002428.38f37f54.gary@garyguo.net>
References: <20240928135128.991110-1-mathieu.desnoyers@efficios.com>
 <20240928135128.991110-2-mathieu.desnoyers@efficios.com>
 <20240929002428.38f37f54.gary@garyguo.net>
On 2024-09-29 01:24, Gary Guo wrote:
> Cc: Nikita Popov
> Cc: llvm@lists.linux.dev
> 
> On Sat, 28 Sep 2024 09:51:27 -0400
> Mathieu Desnoyers wrote:
> 
>> Compiler CSE and SSA GVN optimizations can cause the address dependency
>> of addresses returned by rcu_dereference to be lost when comparing those
>> pointers with either constants or previously loaded pointers.
>>
>> Introduce ptr_eq() to compare two addresses while preserving the address
>> dependencies for later use of the address. It should be used when
>> comparing an address returned by rcu_dereference().
>>
>> This is needed to prevent the compiler CSE and SSA GVN optimizations
>> from replacing the registers holding @a or @b based on their
>> equality, which does not preserve address dependencies and allows the
>> following misordering speculations:
>>
>> - If @b is a constant, the compiler can issue the loads which depend
>>   on @a before loading @a.
>> - If @b is a register populated by a prior load, weakly-ordered
>>   CPUs can speculate loads which depend on @a before loading @a.
>>
>> The same logic applies with @a and @b swapped.
>>
>> The compiler barrier() is ineffective at fixing this issue.
>> It does not prevent the compiler CSE from losing the address dependency:
>>
>> int fct_2_volatile_barriers(void)
>> {
>>         int *a, *b;
>>
>>         do {
>>                 a = READ_ONCE(p);
>>                 asm volatile ("" : : : "memory");
>>                 b = READ_ONCE(p);
>>         } while (a != b);
>>         asm volatile ("" : : : "memory");   <----- barrier()
>>         return *b;
>> }
>>
>> With gcc 14.2 (arm64):
>>
>> fct_2_volatile_barriers:
>>         adrp    x0, .LANCHOR0
>>         add     x0, x0, :lo12:.LANCHOR0
>> .L2:
>>         ldr     x1, [x0]    <------ x1 populated by first load.
>>         ldr     x2, [x0]
>>         cmp     x1, x2
>>         bne     .L2
>>         ldr     w0, [x1]    <------ x1 is used for access which should depend on b.
>>         ret
>>
>> On weakly-ordered architectures, this lets CPU speculation use the
>> result from the first load to speculate "ldr w0, [x1]" before
>> "ldr x2, [x0]".
>> Based on the RCU documentation, the control dependency does not prevent
>> the CPU from speculating loads.
> 
> I recall seeing Nikita Popov (nikic) doing work related to this in LLVM
> so that it respects pointer provenance much better and doesn't randomly
> replace pointers with others if they compare equal. So I tried to
> reproduce this example on clang, which seems to generate the correct
> code, loading from *b instead of *a.
> 
> The generated code with "ptr_eq" however produces one extra move
> instruction which is not necessary.
> 
> I dug into the LLVM source code to see if this behaviour is one we can
> rely on, and found that the GVN in use is very careful about replacing
> pointers [1].
> 
> Essentially:
> * null can be replaced
> * constant addresses can be replaced <-- bad for this use case
> * pointers originating from the same value (getUnderlyingObject)
> 
> So it appears to me that if we can ensure that neither a nor b comes
> from a constant address, then the OPTIMIZER_HIDE_VAR might be
> unnecessary for clang? This should be testable with __builtin_constant_p.
> 
> Not necessarily worth the additional complexity of handling clang
> specially, but I think this GCC/clang difference is worth pointing out.

Thanks for the thorough analysis of the clang GVN behavior. It confirms
my observations.

AFAIU, your proposal is to add a clang-specific #ifdef to eliminate one
mov from register to register (and thus free one register) when
ptr_eq() is used.

I'm not sure the gain (removing this extra mov) is worth it if what we
lose is robustness. This would make the code dependent on current clang
GVN optimization design choices, which are really specific to the
compiler implementation rather than guaranteed by the C standard. How
can we be sure it won't subtly break with a future clang version?
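For the record, here is what I understand the proposed variant would
look like. This is an untested sketch, just to make sure we are
discussing the same thing; the CONFIG_CC_IS_CLANG split and the
__builtin_constant_p() checks are illustrative, not something I am
proposing to merge:

#ifdef CONFIG_CC_IS_CLANG
static __always_inline
int ptr_eq(const volatile void *a, const volatile void *b)
{
	/*
	 * clang's GVN currently only substitutes pointers which
	 * compare equal to null, to a constant address, or to a
	 * pointer with the same underlying object. Hide the
	 * variables only when a constant address may be involved;
	 * this is a clang implementation detail, not a documented
	 * guarantee.
	 */
	if (__builtin_constant_p(a) || __builtin_constant_p(b)) {
		OPTIMIZER_HIDE_VAR(a);
		OPTIMIZER_HIDE_VAR(b);
	}
	return a == b;
}
#else
static __always_inline
int ptr_eq(const volatile void *a, const volatile void *b)
{
	OPTIMIZER_HIDE_VAR(a);
	OPTIMIZER_HIDE_VAR(b);
	return a == b;
}
#endif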
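For comparison, the intended use of ptr_eq() simply replaces the plain
pointer comparison in the pattern from the commit message (sketch; p
stands for whichever shared pointer the address dependency is carried
from):

extern int *p;	/* Hypothetical shared pointer. */

int fct_ptr_eq(void)
{
	int *a, *b;

	do {
		a = READ_ONCE(p);
		b = READ_ONCE(p);
	} while (!ptr_eq(a, b));
	return *b;	/* Address dependency on b is preserved. */
}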
If we think about it purely from a compiler optimization perspective,
using the content of the earliest loaded register allows weakly-ordered
CPUs to speculate following loads sooner. It's only when address
dependencies are needed (e.g. RCU) that this is unwanted. Am I missing
other cases where it is preferable to preserve address dependencies?

Thanks,

Mathieu

> 
> I cc'ed nikic and the clang-built-linux mailing list, please correct
> me if I'm wrong.
> 
> [1]: https://github.com/llvm/llvm-project/blob/6558e5615ae9e6af6168b0a363808854fd66663f/llvm/lib/Analysis/Loads.cpp#L777-L788
> 
> Best,
> Gary
> 
>>
>> Suggested-by: Linus Torvalds
>> Suggested-by: Boqun Feng
>> Signed-off-by: Mathieu Desnoyers
>> Reviewed-by: Boqun Feng
>> Acked-by: "Paul E. McKenney"
>> Cc: Greg Kroah-Hartman
>> Cc: Sebastian Andrzej Siewior
>> Cc: "Paul E. McKenney"
>> Cc: Will Deacon
>> Cc: Peter Zijlstra
>> Cc: Boqun Feng
>> Cc: Alan Stern
>> Cc: John Stultz
>> Cc: Neeraj Upadhyay
>> Cc: Linus Torvalds
>> Cc: Boqun Feng
>> Cc: Frederic Weisbecker
>> Cc: Joel Fernandes
>> Cc: Josh Triplett
>> Cc: Uladzislau Rezki
>> Cc: Steven Rostedt
>> Cc: Lai Jiangshan
>> Cc: Zqiang
>> Cc: Ingo Molnar
>> Cc: Waiman Long
>> Cc: Mark Rutland
>> Cc: Thomas Gleixner
>> Cc: Vlastimil Babka
>> Cc: maged.michael@gmail.com
>> Cc: Mateusz Guzik
>> Cc: Gary Guo
>> Cc: Jonas Oberhauser
>> Cc: rcu@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: lkmm@lists.linux.dev
>> ---
>>  include/linux/compiler.h | 62 ++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 62 insertions(+)
>>
>> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
>> index 2df665fa2964..f26705c267e8 100644
>> --- a/include/linux/compiler.h
>> +++ b/include/linux/compiler.h
>> @@ -186,6 +186,68 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
>>  	__asm__ ("" : "=r" (var) : "0" (var))
>>  #endif
>>  
>> +/*
>> + * Compare two addresses while preserving the address dependencies for
>> + * later use of the address. It should be used when comparing an address
>> + * returned by rcu_dereference().
>> + *
>> + * This is needed to prevent the compiler CSE and SSA GVN optimizations
>> + * from replacing the registers holding @a or @b based on their
>> + * equality, which does not preserve address dependencies and allows the
>> + * following misordering speculations:
>> + *
>> + * - If @b is a constant, the compiler can issue the loads which depend
>> + *   on @a before loading @a.
>> + * - If @b is a register populated by a prior load, weakly-ordered
>> + *   CPUs can speculate loads which depend on @a before loading @a.
>> + *
>> + * The same logic applies with @a and @b swapped.
>> + *
>> + * Return value: true if pointers are equal, false otherwise.
>> + *
>> + * The compiler barrier() is ineffective at fixing this issue. It does
>> + * not prevent the compiler CSE from losing the address dependency:
>> + *
>> + * int fct_2_volatile_barriers(void)
>> + * {
>> + *         int *a, *b;
>> + *
>> + *         do {
>> + *                 a = READ_ONCE(p);
>> + *                 asm volatile ("" : : : "memory");
>> + *                 b = READ_ONCE(p);
>> + *         } while (a != b);
>> + *         asm volatile ("" : : : "memory");  <-- barrier()
>> + *         return *b;
>> + * }
>> + *
>> + * With gcc 14.2 (arm64):
>> + *
>> + * fct_2_volatile_barriers:
>> + *         adrp    x0, .LANCHOR0
>> + *         add     x0, x0, :lo12:.LANCHOR0
>> + * .L2:
>> + *         ldr     x1, [x0]   <-- x1 populated by first load.
>> + *         ldr     x2, [x0]
>> + *         cmp     x1, x2
>> + *         bne     .L2
>> + *         ldr     w0, [x1]   <-- x1 is used for access which should depend on b.
>> + *         ret
>> + *
>> + * On weakly-ordered architectures, this lets CPU speculation use the
>> + * result from the first load to speculate "ldr w0, [x1]" before
>> + * "ldr x2, [x0]".
>> + * Based on the RCU documentation, the control dependency does not
>> + * prevent the CPU from speculating loads.
>> + */
>> +static __always_inline
>> +int ptr_eq(const volatile void *a, const volatile void *b)
>> +{
>> +	OPTIMIZER_HIDE_VAR(a);
>> +	OPTIMIZER_HIDE_VAR(b);
>> +	return a == b;
>> +}
>> +
>>  #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
>>  
>>  /**

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com