Date: Mon, 13 May 2024 10:47:25 -0700
From: Deepak Gupta
To: Charlie Jenkins
Cc: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org,
    Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org,
    ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com,
    atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com,
    alexghiti@rivosinc.com, samuel.holland@sifive.com, conor@kernel.org,
    linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kselftest@vger.kernel.org, corbet@lwn.net, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, robh+dt@kernel.org,
    krzysztof.kozlowski+dt@linaro.org, oleg@redhat.com,
    akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com,
    Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com,
    shuah@kernel.org, brauner@kernel.org, andy.chiu@sifive.com,
    jerry.shih@sifive.com, hankuan.chen@sifive.com, greentime.hu@sifive.com,
    evan@rivosinc.com, xiao.w.wang@intel.com, apatel@ventanamicro.com,
    mchitale@ventanamicro.com, dbarboza@ventanamicro.com, sameo@rivosinc.com,
    shikemeng@huaweicloud.com, willy@infradead.org, vincent.chen@sifive.com,
    guoren@kernel.org, samitolvanen@google.com, songshuaishuai@tinylab.org,
    gerg@kernel.org, heiko@sntech.de, bhe@redhat.com,
    jeeheng.sia@starfivetech.com, cyy@cyyself.name, maskray@google.com,
    ancientmodern4@gmail.com, mathis.salmen@matsal.de,
    cuiyunhui@bytedance.com, bgray@linux.ibm.com, mpe@ellerman.id.au,
    baruch@tkos.co.il, alx@kernel.org, david@redhat.com,
    catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org,
    shr@devkernel.io, deller@gmx.de, omosnace@redhat.com, ojeda@kernel.org,
    jhubbard@nvidia.com
Subject: Re: [PATCH v3 10/29] riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE
References: <20240403234054.2020347-1-debug@rivosinc.com>
 <20240403234054.2020347-11-debug@rivosinc.com>
On Fri, May 10, 2024 at 02:02:54PM -0700, Charlie Jenkins wrote:
>On Wed, Apr 03, 2024 at 04:34:58PM -0700, Deepak Gupta wrote:
>> `arch_calc_vm_prot_bits` is implemented on risc-v to return VM_READ |
>> VM_WRITE if PROT_WRITE is specified. Similarly `riscv_sys_mmap` is
>> updated to convert all incoming PROT_WRITE to (PROT_WRITE | PROT_READ).
>> This is to make sure that any existing apps using PROT_WRITE still work.
>>
>> Earlier `protection_map[VM_WRITE]` used to pick read-write PTE encodings.
>> Now `protection_map[VM_WRITE]` will always pick PAGE_SHADOWSTACK PTE
>> encodings for shadow stack. Above changes ensure that existing apps
>> continue to work because underneath kernel will be picking
>> `protection_map[VM_WRITE|VM_READ]` PTE encodings.
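
(As a side note for readers of the thread: roughly what this means for an
ordinary userspace caller. This is an illustrative sketch of the intended
behaviour, not code from the patch.)

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/*
	 * Apps that historically passed only PROT_WRITE keep working:
	 * riscv_sys_mmap() as changed below promotes PROT_WRITE to
	 * PROT_READ | PROT_WRITE, so the mapping stays read-write.
	 */
	void *p = mmap(NULL, 4096, PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		perror("mmap");

	/*
	 * Shadow stack memory is not created through mmap() at all;
	 * that is what the separate map_shadow_stack() syscall is for.
	 */
	return 0;
}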
>>
>> Signed-off-by: Deepak Gupta
>> ---
>>  arch/riscv/include/asm/mman.h    | 24 ++++++++++++++++++++++++
>>  arch/riscv/include/asm/pgtable.h |  1 +
>>  arch/riscv/kernel/sys_riscv.c    | 11 +++++++++++
>>  arch/riscv/mm/init.c             |  2 +-
>>  mm/mmap.c                        |  1 +
>>  5 files changed, 38 insertions(+), 1 deletion(-)
>>  create mode 100644 arch/riscv/include/asm/mman.h
>>
>> diff --git a/arch/riscv/include/asm/mman.h b/arch/riscv/include/asm/mman.h
>> new file mode 100644
>> index 000000000000..ef9fedf32546
>> --- /dev/null
>> +++ b/arch/riscv/include/asm/mman.h
>> @@ -0,0 +1,24 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __ASM_MMAN_H__
>> +#define __ASM_MMAN_H__
>> +
>> +#include
>> +#include
>> +#include
>> +
>> +static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
>> +	unsigned long pkey __always_unused)
>> +{
>> +	unsigned long ret = 0;
>> +
>> +	/*
>> +	 * If PROT_WRITE was specified, force it to VM_READ | VM_WRITE.
>> +	 * Only VM_WRITE means shadow stack.
>> +	 */
>> +	if (prot & PROT_WRITE)
>> +		ret = (VM_READ | VM_WRITE);
>> +	return ret;
>> +}
>> +#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
>> +
>> +#endif /* ! __ASM_MMAN_H__ */
>>
>> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>> index 6066822e7396..4d5983bc6766 100644
>> --- a/arch/riscv/include/asm/pgtable.h
>> +++ b/arch/riscv/include/asm/pgtable.h
>> @@ -184,6 +184,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
>>  #define PAGE_READ_EXEC	__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
>>  #define PAGE_WRITE_EXEC	__pgprot(_PAGE_BASE | _PAGE_READ | \
>>  				 _PAGE_EXEC | _PAGE_WRITE)
>> +#define PAGE_SHADOWSTACK	__pgprot(_PAGE_BASE | _PAGE_WRITE)
>>
>>  #define PAGE_COPY		PAGE_READ
>>  #define PAGE_COPY_EXEC	PAGE_READ_EXEC
>>
>> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
>> index f1c1416a9f1e..846c36b1b3d5 100644
>> --- a/arch/riscv/kernel/sys_riscv.c
>> +++ b/arch/riscv/kernel/sys_riscv.c
>> @@ -8,6 +8,8 @@
>>  #include
>>  #include
>>  #include
>> +#include
>> +#include
>>
>>  static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>>  			   unsigned long prot, unsigned long flags,
>> @@ -17,6 +19,15 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>>  	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
>>  		return -EINVAL;
>>
>> +	/*
>> +	 * If only PROT_WRITE is specified then extend that to PROT_READ
>> +	 * protection_map[VM_WRITE] is now going to select shadow stack encodings.
>> +	 * So specifying PROT_WRITE actually should select protection_map [VM_WRITE | VM_READ]
>> +	 * If user wants to create shadow stack then they should use `map_shadow_stack` syscall.
>> +	 */
>> +	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
>
>The comments says that this should extend to PROT_READ if only
>PROT_WRITE is specified. This condition instead is checking if
>PROT_WRITE is selected but PROT_READ is not. If prot is (VM_EXEC |
>VM_WRITE) then it would be extended to (VM_EXEC | VM_WRITE | VM_READ).
>This will not currently cause any issues because these both map to the
>same value in the protection_map PAGE_COPY_EXEC, however this seems to
>be not the intention of this change.
>
>prot == PROT_WRITE better suits the condition explained in the comment.

If someone specifies (PROT_EXEC | PROT_WRITE) today, it works because of
the way the permissions are set up in `protection_map`. On RISC-V there is
no way to have a page that is execute-and-write only.

So the expectation is that if some apps were using `PROT_EXEC | PROT_WRITE`
today, they were working because internally it translated to read, write
and execute at the page-permission level. This patch makes sure that stays
the same from the page-permission perspective. If someone was using
PROT_EXEC alone, it may translate to execute-only, and this change doesn't
impact that. The patch simply looks for the presence of `PROT_WRITE` and
the absence of `PROT_READ` in the protection flags; if that condition is
satisfied, it assumes the caller expected the page to be readable as well.
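
To spell out the effect of that check, here is a minimal sketch (the helper
name is made up for illustration; the real check sits inline in
riscv_sys_mmap() above):

#include <sys/mman.h>

/* Illustrative helper, not kernel code: mirrors the condition above. */
static unsigned long promote_prot(unsigned long prot)
{
	if ((prot & PROT_WRITE) && !(prot & PROT_READ))
		prot |= PROT_READ;
	return prot;
}

/*
 * promote_prot(PROT_WRITE)             -> PROT_READ | PROT_WRITE
 * promote_prot(PROT_EXEC | PROT_WRITE) -> PROT_EXEC | PROT_READ | PROT_WRITE
 *	(per Charlie's note, both of these already resolve to the same
 *	 protection_map entry, PAGE_COPY_EXEC, so behaviour is unchanged)
 * promote_prot(PROT_READ | PROT_WRITE) -> unchanged
 * promote_prot(PROT_EXEC)              -> unchanged, may stay execute-only
 */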
>
>> +		prot |= PROT_READ;
>> +
>>  	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
>>  			       offset >> (PAGE_SHIFT - page_shift_offset));
>>  }
>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>> index fa34cf55037b..98e5ece4052a 100644
>> --- a/arch/riscv/mm/init.c
>> +++ b/arch/riscv/mm/init.c
>> @@ -299,7 +299,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
>>  static const pgprot_t protection_map[16] = {
>>  	[VM_NONE]		= PAGE_NONE,
>>  	[VM_READ]		= PAGE_READ,
>> -	[VM_WRITE]		= PAGE_COPY,
>> +	[VM_WRITE]		= PAGE_SHADOWSTACK,
>>  	[VM_WRITE | VM_READ]	= PAGE_COPY,
>>  	[VM_EXEC]		= PAGE_EXEC,
>>  	[VM_EXEC | VM_READ]	= PAGE_READ_EXEC,
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index d89770eaab6b..57a974f49b00 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -47,6 +47,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>
>It doesn't seem like this is necessary for this patch. Thanks.

Yeah it looks like I forgot to remove this over the churn. Will fix it.

>
>- Charlie
>
>>
>>  #include
>>  #include
>> --
>> 2.43.2
>>
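
For completeness, a rough sketch of how userspace is expected to obtain a
shadow stack mapping under this scheme, i.e. via map_shadow_stack() rather
than mmap(PROT_WRITE). The syscall number fallback and the zero flags here
are assumptions for illustration, not something defined by this patch:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453	/* assumed generic syscall number */
#endif

int main(void)
{
	/* Let the kernel pick the address; request one page, no flags. */
	long ss = syscall(__NR_map_shadow_stack, 0UL, 4096UL, 0U);

	if (ss == -1)
		perror("map_shadow_stack");
	else
		printf("shadow stack mapped at %#lx\n", (unsigned long)ss);
	return 0;
}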