From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 31 Jul 2022 23:14:20 +0100
From: Matthew Wilcox
To: Mikulas Patocka
Cc: Linus Torvalds, Will Deacon, "Paul E. McKenney", Ard Biesheuvel,
	Alexander Viro, Alan Stern, Andrea Parri, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Akira Yokosawa, Daniel Lustig, Joel Fernandes,
	Linux Kernel Mailing List, linux-arch,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v3 2/2] make buffer_locked provide an acquire semantics

On Sun, Jul 31, 2022 at 04:43:08PM -0400, Mikulas Patocka wrote:
> Let's have a look at this piece of code in __bread_slow:
>
> 	get_bh(bh);
> 	bh->b_end_io = end_buffer_read_sync;
> 	submit_bh(REQ_OP_READ, 0, bh);
> 	wait_on_buffer(bh);
> 	if (buffer_uptodate(bh))
> 		return bh;
>
> Neither wait_on_buffer nor buffer_uptodate contain a memory barrier.
> Consequently, if someone calls sb_bread and then reads the buffer data,
> the read of buffer data may be executed before wait_on_buffer(bh) on
> architectures with weak memory ordering and it may return invalid data.

I think we should be consistent between PageUptodate() and
buffer_uptodate().  Here's how it's done for pages currently:

static inline bool folio_test_uptodate(struct folio *folio)
{
	bool ret = test_bit(PG_uptodate, folio_flags(folio, 0));
	/*
	 * Must ensure that the data we read out of the folio is loaded
	 * _after_ we've loaded folio->flags to check the uptodate bit.
	 * We can skip the barrier if the folio is not uptodate, because
	 * we wouldn't be reading anything from it.
	 *
	 * See folio_mark_uptodate() for the other side of the story.
	 */
	if (ret)
		smp_rmb();

	return ret;
}

...

static __always_inline void folio_mark_uptodate(struct folio *folio)
{
	/*
	 * Memory barrier must be issued before setting the PG_uptodate bit,
	 * so that all previous stores issued in order to bring the folio
	 * uptodate are actually visible before folio_test_uptodate becomes
	 * true.
	 */
	smp_wmb();
	set_bit(PG_uptodate, folio_flags(folio, 0));
}

I'm happy for these to also be changed to use acquire/release; I have no
attachment to the current code.  But bufferheads & pages should have the
same semantics, or we'll be awfully confused.