Date: Mon, 13 May 2024 11:29:27 -0700
From: Deepak Gupta
To: Alexandre Ghiti
Cc: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org,
	Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org,
	ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com,
	atishp@atishpatra.org, bjorn@rivosinc.com, alexghiti@rivosinc.com,
	samuel.holland@sifive.com, conor@kernel.org, linux-doc@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kselftest@vger.kernel.org, corbet@lwn.net, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
	oleg@redhat.com, akpm@linux-foundation.org, arnd@arndb.de,
	ebiederm@xmission.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	lstoakes@gmail.com, shuah@kernel.org, brauner@kernel.org, andy.chiu@sifive.com,
	jerry.shih@sifive.com, hankuan.chen@sifive.com, greentime.hu@sifive.com,
	evan@rivosinc.com, xiao.w.wang@intel.com, charlie@rivosinc.com,
	apatel@ventanamicro.com, mchitale@ventanamicro.com, dbarboza@ventanamicro.com,
	sameo@rivosinc.com, shikemeng@huaweicloud.com, willy@infradead.org,
	vincent.chen@sifive.com, guoren@kernel.org, samitolvanen@google.com,
	songshuaishuai@tinylab.org, gerg@kernel.org, heiko@sntech.de, bhe@redhat.com,
	jeeheng.sia@starfivetech.com, cyy@cyyself.name, maskray@google.com,
	ancientmodern4@gmail.com, mathis.salmen@matsal.de, cuiyunhui@bytedance.com,
	bgray@linux.ibm.com, mpe@ellerman.id.au, baruch@tkos.co.il, alx@kernel.org,
	david@redhat.com, catalin.marinas@arm.com, revest@chromium.org,
	josh@joshtriplett.org, shr@devkernel.io, deller@gmx.de, omosnace@redhat.com,
	ojeda@kernel.org, jhubbard@nvidia.com
Subject: Re: [PATCH v3 10/29] riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE
References: <20240403234054.2020347-1-debug@rivosinc.com>
	<20240403234054.2020347-11-debug@rivosinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
On Sun, May 12, 2024 at 06:24:45PM +0200, Alexandre Ghiti wrote:
>Hi Deepak,
>
>On 04/04/2024 01:34, Deepak Gupta wrote:
>>`arch_calc_vm_prot_bits` is implemented on risc-v to return VM_READ |
>>VM_WRITE if PROT_WRITE is
specified. Similarly `riscv_sys_mmap` is
>>updated to convert all incoming PROT_WRITE to (PROT_WRITE | PROT_READ).
>>This is to make sure that any existing apps using PROT_WRITE still work.
>>
>>Earlier `protection_map[VM_WRITE]` used to pick read-write PTE encodings.
>>Now `protection_map[VM_WRITE]` will always pick PAGE_SHADOWSTACK PTE
>>encodings for shadow stack. Above changes ensure that existing apps
>>continue to work because underneath kernel will be picking
>>`protection_map[VM_WRITE|VM_READ]` PTE encodings.
>>
>>Signed-off-by: Deepak Gupta
>>---
>> arch/riscv/include/asm/mman.h    | 24 ++++++++++++++++++++++++
>> arch/riscv/include/asm/pgtable.h |  1 +
>> arch/riscv/kernel/sys_riscv.c    | 11 +++++++++++
>> arch/riscv/mm/init.c             |  2 +-
>> mm/mmap.c                        |  1 +
>> 5 files changed, 38 insertions(+), 1 deletion(-)
>> create mode 100644 arch/riscv/include/asm/mman.h
>>
>>diff --git a/arch/riscv/include/asm/mman.h b/arch/riscv/include/asm/mman.h
>>new file mode 100644
>>index 000000000000..ef9fedf32546
>>--- /dev/null
>>+++ b/arch/riscv/include/asm/mman.h
>>@@ -0,0 +1,24 @@
>>+/* SPDX-License-Identifier: GPL-2.0 */
>>+#ifndef __ASM_MMAN_H__
>>+#define __ASM_MMAN_H__
>>+
>>+#include
>>+#include
>>+#include
>>+
>>+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
>>+	unsigned long pkey __always_unused)
>>+{
>>+	unsigned long ret = 0;
>>+
>>+	/*
>>+	 * If PROT_WRITE was specified, force it to VM_READ | VM_WRITE.
>>+	 * Only VM_WRITE means shadow stack.
>>+	 */
>>+	if (prot & PROT_WRITE)
>>+		ret = (VM_READ | VM_WRITE);
>>+	return ret;
>>+}
>>+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
>>+
>>+#endif /* !
__ASM_MMAN_H__ */
>>diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>index 6066822e7396..4d5983bc6766 100644
>>--- a/arch/riscv/include/asm/pgtable.h
>>+++ b/arch/riscv/include/asm/pgtable.h
>>@@ -184,6 +184,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
>> #define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
>> #define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
>>					 _PAGE_EXEC | _PAGE_WRITE)
>>+#define PAGE_SHADOWSTACK	__pgprot(_PAGE_BASE | _PAGE_WRITE)
>> #define PAGE_COPY		PAGE_READ
>> #define PAGE_COPY_EXEC		PAGE_READ_EXEC
>>diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
>>index f1c1416a9f1e..846c36b1b3d5 100644
>>--- a/arch/riscv/kernel/sys_riscv.c
>>+++ b/arch/riscv/kernel/sys_riscv.c
>>@@ -8,6 +8,8 @@
>> #include
>> #include
>> #include
>>+#include
>>+#include
>> static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>>			   unsigned long prot, unsigned long flags,
>>@@ -17,6 +19,15 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>> 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
>> 		return -EINVAL;
>>+	/*
>>+	 * If only PROT_WRITE is specified then extend that to PROT_READ
>>+	 * protection_map[VM_WRITE] is now going to select shadow stack encodings.
>>+	 * So specifying PROT_WRITE actually should select protection_map [VM_WRITE | VM_READ]
>>+	 * If user wants to create shadow stack then they should use `map_shadow_stack` syscall.
>>+	 */
>>+	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
>>+		prot |= PROT_READ;
>>+
>> 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
>> 			       offset >> (PAGE_SHIFT - page_shift_offset));
>> }
>>diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>>index fa34cf55037b..98e5ece4052a 100644
>>--- a/arch/riscv/mm/init.c
>>+++ b/arch/riscv/mm/init.c
>>@@ -299,7 +299,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
>> static const pgprot_t protection_map[16] = {
>> 	[VM_NONE]			= PAGE_NONE,
>> 	[VM_READ]			= PAGE_READ,
>>-	[VM_WRITE]			= PAGE_COPY,
>>+	[VM_WRITE]			= PAGE_SHADOWSTACK,
>> 	[VM_WRITE | VM_READ]		= PAGE_COPY,
>> 	[VM_EXEC]			= PAGE_EXEC,
>> 	[VM_EXEC | VM_READ]		= PAGE_READ_EXEC,
>>diff --git a/mm/mmap.c b/mm/mmap.c
>>index d89770eaab6b..57a974f49b00 100644
>>--- a/mm/mmap.c
>>+++ b/mm/mmap.c
>>@@ -47,6 +47,7 @@
>> #include
>> #include
>> #include
>>+#include
>> #include
>> #include
>
>
>What happens if someone restricts the permission to PROT_WRITE using
>mprotect()? I would say this is an issue since it would turn the pages
>into shadow stack pages.

Look at this patch in this patch series:
"riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE"

It implements `arch_calc_vm_prot_bits` for risc-v and enforces that
incoming PROT_WRITE is converted to VM_READ | VM_WRITE, and thus it
becomes read/write memory.

This way `mprotect` can be used to convert a shadow stack page to
read/write memory, but not regular memory to a shadow stack page.

>
>