Date: Wed, 2 Sep 2020 12:16:24 -0500
From: Segher Boessenkool
To: Arvind Sankar
Cc: Linus Torvalds, Miguel Ojeda, Sedat Dilek, Thomas Gleixner, Nick Desaulniers, "Paul E. McKenney", Ingo Molnar, Arnd Bergmann, Borislav Petkov, "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", "H. Peter Anvin", "Kirill A. Shutemov", Kees Cook, Peter Zijlstra, Juergen Gross, Andy Lutomirski, Andrew Cooper, LKML, clang-built-linux, Will Deacon, nadav.amit@gmail.com, Nathan Chancellor
Subject: Re: [PATCH v2] x86/asm: Replace __force_order with memory clobber
Message-ID: <20200902171624.GX28786@gate.crashing.org>
References: <20200823212550.3377591-1-nivedita@alum.mit.edu> <20200902153346.3296117-1-nivedita@alum.mit.edu>
In-Reply-To: <20200902153346.3296117-1-nivedita@alum.mit.edu>

On Wed, Sep 02, 2020 at 11:33:46AM -0400, Arvind Sankar wrote:
> The CRn accessor functions use __force_order as a dummy operand to
> prevent the compiler from reordering the inline asm.
>
> The fact that the asm is volatile should be enough to prevent this
> already, however older versions of GCC had a bug that could sometimes
> result in reordering.  This was fixed in 8.1, 7.3 and 6.5.  Versions
> prior to these, including 5.x and 4.9.x, may reorder volatile asm.

Reordering them amongst themselves.  Yes, that is bad.  Reordering them
with "random" code is Just Fine.

Volatile asm should be executed on the real machine exactly as often as
on the C abstract machine, and in the same order.  That is all.

> + * The compiler should not reorder volatile asm,

So, this comment needs work.  And perhaps the rest of the patch as well?


Segher