Date: Fri, 6 Sep 2024 14:05:57 +0100
From: Mark Rutland
To: "tiantao (H)"
Cc: catalin.marinas@arm.com, will@kernel.org, jonathan.cameron@huawei.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linuxarm@huawei.com
Subject: Re: [PATCH] arm64: Add ARM64_HAS_LSE2 CPU capability
References: <20240906090812.249473-1-tiantao6@hisilicon.com>
 <587f7c84-cdfc-b348-4cd0-1015adad2cca@hisilicon.com>
 <54980e73-4a1c-1eb2-98b4-fbb49e9a9b8f@hisilicon.com>
In-Reply-To: <54980e73-4a1c-1eb2-98b4-fbb49e9a9b8f@hisilicon.com>

On Fri, Sep 06, 2024 at 08:20:19PM +0800, tiantao (H) wrote:
>
> On 2024/9/6 19:42, Mark Rutland wrote:
> > On Fri, Sep 06, 2024 at 06:58:19PM +0800, tiantao (H) wrote:
> > > On 2024/9/6 17:44, Mark Rutland wrote:
> > > > On Fri, Sep 06, 2024 at 05:08:12PM +0800, Tian Tao wrote:
> > > I've come across a situation where the simplified code is as follows:
> > >
> > >     long address = (long) mmap(NULL, 1024*1024*2, PROT_READ|PROT_WRITE,
> > >                                MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
> > >
> > >     long new_address = address + 9;
> > >
> > >     long *p = (long *) new_address;
> > >     long v = -1;
> > >
> > >     __atomic_store(p, &v, __ATOMIC_RELEASE);
> >
> > Hold on; looking at the ARM ARM (ARM DDI 0487K.a), the example and the
> > patch are both bogus. NAK to this patch, explanation below.
> >
> > Per section B2.2.1.1 "Changes to single-copy atomicity in Armv8.4", all
> > of the LSE2 relaxations on alignment require:
> >
> > | all bytes being accessed are within a 16-byte quantity that is aligned
> > | to 16 bytes
> >
> > In your example you perform an 8-byte access at an offset of 9 bytes,
> > which means the access is *not* contained within 16 bytes, and is *not*
> > guaranteed to be atomic. That code simply has to be fixed; the kernel
> > cannot magically make it safe.
> >
> > Regardless of that, the nAA bit *only* causes an alignment fault for
> > accesses which cannot be atomic. If a CPU has LSE2 and SCTLR_ELx.nAA=0,
> > an unaligned access within 16 bytes (which would be atomic) does not
> > cause an alignment fault. That's pretty clear from the description of
> > nAA and the AArch64.UnalignedAccessFaults() pseudocode.
>
> Sorry, this example is just for verifying nAA; it's not an example of a
> real scenario.
>
> We have scenarios that don't require atomicity to be guaranteed; they
> just require that a coredump doesn't occur on a non-aligned access.

Please give a concrete explanation of such a scenario, with an
explanation of why atomicity is not required.

Mark.