From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <36c7224c-87a6-aa31-cfaa-06a0b168c68d@huawei.com>
Date: Mon, 23 May 2022 18:46:14 +0800
Subject: Re: [PATCH v3 1/2] asm-generic: Add memory barrier dma_mb()
From: Kefeng Wang
To: Marco Elver
CC: Jonathan Corbet
References: <20220523020051.141460-1-wangkefeng.wang@huawei.com>
 <20220523020051.141460-2-wangkefeng.wang@huawei.com>
List-ID: linux-doc@vger.kernel.org

On 2022/5/23 16:22, Marco Elver wrote:
> On Mon, 23 May 2022 at 03:50, Kefeng Wang wrote:
>> The memory barrier dma_mb() is introduced by commit a76a37777f2c
>> ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
>> which is used to ensure that prior (both reads and writes) accesses
>> to memory by a CPU are ordered w.r.t. a subsequent MMIO write, this
>> is only defined on arm64, but it is a generic memory barrier, let's
>> add dma_mb() into documentation and include/asm-generic/barrier.h.
>>
>> Signed-off-by: Kefeng Wang
>> ---
>>  Documentation/memory-barriers.txt | 11 ++++++-----
>>  include/asm-generic/barrier.h     |  8 ++++++++
>>  2 files changed, 14 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
>> index b12df9137e1c..07a8b8e1b12a 100644
>> --- a/Documentation/memory-barriers.txt
>> +++ b/Documentation/memory-barriers.txt
>> @@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
>>
>>   (*) dma_wmb();
>>   (*) dma_rmb();
>> + (*) dma_mb();
>>
>>      These are for use with consistent memory to guarantee the ordering
>>      of writes or reads of shared memory accessible to both the CPU and a
>> @@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
>>      The dma_rmb() allows us guarantee the device has released ownership
>>      before we read the data from the descriptor, and the dma_wmb() allows
>>      us to guarantee the data is written to the descriptor before the device
>> -    can see it now has ownership.  Note that, when using writel(), a prior
>> -    wmb() is not needed to guarantee that the cache coherent memory writes
>> -    have completed before writing to the MMIO region.  The cheaper
>> -    writel_relaxed() does not provide this guarantee and must not be used
>> -    here.
>> +    can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
>> +    a dma_wmb().  Note that, when using writel(), a prior wmb() is not needed
>> +    to guarantee that the cache coherent memory writes have completed before
>> +    writing to the MMIO region.  The cheaper writel_relaxed() does not provide
>> +    this guarantee and must not be used here.
>
> It seems you've changed that spacing. This document uses 2 spaces
> after a sentence-ending '.'. (My original suggestion included the 2
> spaces after dots.)

I wasn't sure of the convention; some sentences in the document use one
space after the period and others use two, but most use two, so I will
update this to use two spaces.

> Otherwise it all looks fine to me.
>
> Thanks,
> -- Marco