Subject: Re: [PATCH 20/35] arm64: mte: Add in-kernel MTE helpers
From: Vincenzo Frascino
To: Catalin Marinas, Andrey Konovalov
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Elena Petrova,
 Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Alexander Potapenko,
 Dmitry Vyukov, Andrey Ryabinin, Andrew Morton, Evgenii Stepanov
Date: Thu, 27 Aug 2020 11:31:56 +0100
Message-ID: <588f3812-c9d0-8dbe-fce2-1ea89f558bd2@arm.com>
In-Reply-To: <20200827093808.GB29264@gaia>
References: <2cf260bdc20793419e32240d2a3e692b0adf1f80.1597425745.git.andreyknvl@google.com>
 <20200827093808.GB29264@gaia>

Hi Catalin,

On 8/27/20 10:38 AM, Catalin Marinas wrote:
> On Fri, Aug 14, 2020 at 07:27:02PM +0200, Andrey Konovalov wrote:
>> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
>> index 1c99fcadb58c..733be1cb5c95 100644
>> --- a/arch/arm64/include/asm/mte.h
>> +++ b/arch/arm64/include/asm/mte.h
>> @@ -5,14 +5,19 @@
>>  #ifndef __ASM_MTE_H
>>  #define __ASM_MTE_H
>>
>> -#define MTE_GRANULE_SIZE	UL(16)
>> +#include <asm/mte-asm.h>
>
> So the reason for this move is to include it in asm/cache.h. Fine by
> me but...
>
>>  #define MTE_GRANULE_MASK	(~(MTE_GRANULE_SIZE - 1))
>>  #define MTE_TAG_SHIFT		56
>>  #define MTE_TAG_SIZE		4
>> +#define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
>> +#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)
>
> ... I'd rather move all these definitions in a file with a more
> meaningful name like mte-def.h. The _asm implies being meant for .S
> files inclusion which isn't the case.

mte-asm.h was originally called mte_helper.h, hence it made sense to have
these defines there. But I agree with your proposal: it makes things more
readable and it is in line with the rest of the arm64 code (e.g.
page-def.h). We should update the commit message accordingly as well.

>> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
>> index eb39504e390a..e2d708b4583d 100644
>> --- a/arch/arm64/kernel/mte.c
>> +++ b/arch/arm64/kernel/mte.c
>> @@ -72,6 +74,47 @@ int memcmp_pages(struct page *page1, struct page *page2)
>>  	return ret;
>>  }
>>
>> +u8 mte_get_mem_tag(void *addr)
>> +{
>> +	if (system_supports_mte())
>> +		addr = mte_assign_valid_ptr_tag(addr);
>
> The mte_assign_valid_ptr_tag() is slightly misleading. All it does is
> read the allocation tag from memory.
>
> I also think this should be inline asm, possibly using alternatives.
> It's just an LDG instruction (and it saves us from having to invent a
> better function name).

Yes, I agree. I implemented this code in the early days and never got
around to refactoring it.

>> +
>> +	return 0xF0 | mte_get_ptr_tag(addr);
>> +}
>> +
>> +u8 mte_get_random_tag(void)
>> +{
>> +	u8 tag = 0xF;
>> +
>> +	if (system_supports_mte())
>> +		tag = mte_get_ptr_tag(mte_assign_random_ptr_tag(NULL));
>
> Another alternative inline asm with an IRG instruction.

As per above.

>> +
>> +	return 0xF0 | tag;
>> +}
>> +
>> +void * __must_check mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
>> +{
>> +	void *ptr = addr;
>> +
>> +	if ((!system_supports_mte()) || (size == 0))
>> +		return addr;
>> +
>> +	tag = 0xF0 | (tag & 0xF);
>> +	ptr = (void *)__tag_set(ptr, tag);
>> +	size = ALIGN(size, MTE_GRANULE_SIZE);
>
> I think aligning the size is dangerous. Can we instead turn it into a
> WARN_ON if not already aligned? At a quick look, the callers of
> kasan_{un,}poison_memory() already align the size.

The size here is used only for tagging purposes, and if we want to tag a
sub-granule amount of memory we end up tagging the whole granule anyway.
Why do you think it can be dangerous? Anyway, I agree that it seems
redundant; a WARN_ON here should be sufficient.

>> +
>> +	mte_assign_mem_tag_range(ptr, size);
>> +
>> +	/*
>> +	 * mte_assign_mem_tag_range() can be invoked in a multi-threaded
>> +	 * context, ensure that tags are written in memory before the
>> +	 * reference is used.
>> +	 */
>> +	smp_wmb();
>> +
>> +	return ptr;
>
> I'm not sure I understand the barrier here. It ensures the relative
> ordering of memory (or tag) accesses on a CPU as observed by other CPUs.
> While the first access here is setting the tag, I can't see what other
> access on _this_ CPU it is ordered with.

You are right, it can be removed. I was just overthinking here.
>> +}
>> +
>>  static void update_sctlr_el1_tcf0(u64 tcf0)
>>  {
>>  	/* ISB required for the kernel uaccess routines */
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 03ca6d8b8670..8c743540e32c 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -149,3 +149,44 @@ SYM_FUNC_START(mte_restore_page_tags)
>>
>>  	ret
>>  SYM_FUNC_END(mte_restore_page_tags)
>> +
>> +/*
>> + * Assign pointer tag based on the allocation tag
>> + * x0 - source pointer
>> + * Returns:
>> + *	x0 - pointer with the correct tag to access memory
>> + */
>> +SYM_FUNC_START(mte_assign_valid_ptr_tag)
>> +	ldg	x0, [x0]
>> +	ret
>> +SYM_FUNC_END(mte_assign_valid_ptr_tag)
>> +
>> +/*
>> + * Assign random pointer tag
>> + * x0 - source pointer
>> + * Returns:
>> + *	x0 - pointer with a random tag
>> + */
>> +SYM_FUNC_START(mte_assign_random_ptr_tag)
>> +	irg	x0, x0
>> +	ret
>> +SYM_FUNC_END(mte_assign_random_ptr_tag)
>
> As I said above, these two can be inline asm.

Agreed.

>> +
>> +/*
>> + * Assign allocation tags for a region of memory based on the pointer tag
>> + * x0 - source pointer
>> + * x1 - size
>> + *
>> + * Note: size is expected to be MTE_GRANULE_SIZE aligned
>> + */
>> +SYM_FUNC_START(mte_assign_mem_tag_range)
>> +	/* if (src == NULL) return; */
>> +	cbz	x0, 2f
>> +	/* if (size == 0) return; */
>
> You could skip the cbz here and just document that the size should be
> non-zero and aligned. The caller already takes care of this check.

I would prefer to keep the check here, unless there is a valid reason not
to, since allocate(0) is a viable operation, hence tag(x, 0) should be as
well. The caller takes care of it in one place today, but I do not know
where the API will be used in future.
>> +	cbz	x1, 2f
>> +1:	stg	x0, [x0]
>> +	add	x0, x0, #MTE_GRANULE_SIZE
>> +	sub	x1, x1, #MTE_GRANULE_SIZE
>> +	cbnz	x1, 1b
>> +2:	ret
>> +SYM_FUNC_END(mte_assign_mem_tag_range)

-- 
Regards,
Vincenzo

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel