Date: Mon, 13 May 2024 11:29:27 -0700
From: Deepak Gupta
To: Alexandre Ghiti
Cc: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org,
 Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org,
 ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com,
 atishp@atishpatra.org, bjorn@rivosinc.com, alexghiti@rivosinc.com,
 samuel.holland@sifive.com, conor@kernel.org, linux-doc@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 devicetree@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-kselftest@vger.kernel.org, corbet@lwn.net, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org,
 oleg@redhat.com, akpm@linux-foundation.org, arnd@arndb.de,
 ebiederm@xmission.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
 lstoakes@gmail.com, shuah@kernel.org, brauner@kernel.org,
 andy.chiu@sifive.com, jerry.shih@sifive.com, hankuan.chen@sifive.com,
 greentime.hu@sifive.com, evan@rivosinc.com, xiao.w.wang@intel.com,
 charlie@rivosinc.com, apatel@ventanamicro.com, mchitale@ventanamicro.com,
 dbarboza@ventanamicro.com, sameo@rivosinc.com, shikemeng@huaweicloud.com,
 willy@infradead.org, vincent.chen@sifive.com, guoren@kernel.org,
 samitolvanen@google.com, songshuaishuai@tinylab.org, gerg@kernel.org,
 heiko@sntech.de, bhe@redhat.com, jeeheng.sia@starfivetech.com,
 cyy@cyyself.name, maskray@google.com, ancientmodern4@gmail.com,
 mathis.salmen@matsal.de, cuiyunhui@bytedance.com, bgray@linux.ibm.com,
 mpe@ellerman.id.au, baruch@tkos.co.il, alx@kernel.org, david@redhat.com,
 catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org,
 shr@devkernel.io, deller@gmx.de, omosnace@redhat.com, ojeda@kernel.org,
 jhubbard@nvidia.com
Subject: Re: [PATCH v3 10/29] riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE
References: <20240403234054.2020347-1-debug@rivosinc.com> <20240403234054.2020347-11-debug@rivosinc.com>
On Sun, May 12, 2024 at 06:24:45PM +0200, Alexandre Ghiti wrote:
>Hi Deepak,
>
>On 04/04/2024 01:34, Deepak Gupta wrote:
>>`arch_calc_vm_prot_bits` is implemented on RISC-V to return VM_READ |
>>VM_WRITE if PROT_WRITE is specified. Similarly, `riscv_sys_mmap` is
>>updated to convert all incoming PROT_WRITE to (PROT_WRITE | PROT_READ).
>>This is to make sure that any existing apps using PROT_WRITE still work.
>>
>>Earlier, `protection_map[VM_WRITE]` used to pick read-write PTE encodings.
>>Now `protection_map[VM_WRITE]` will always pick the PAGE_SHADOWSTACK PTE
>>encodings for shadow stack. The above changes ensure that existing apps
>>continue to work, because underneath the kernel will be picking the
>>`protection_map[VM_WRITE | VM_READ]` PTE encodings.
>>
>>Signed-off-by: Deepak Gupta
>>---
>> arch/riscv/include/asm/mman.h    | 24 ++++++++++++++++++++++++
>> arch/riscv/include/asm/pgtable.h |  1 +
>> arch/riscv/kernel/sys_riscv.c    | 11 +++++++++++
>> arch/riscv/mm/init.c             |  2 +-
>> mm/mmap.c                        |  1 +
>> 5 files changed, 38 insertions(+), 1 deletion(-)
>> create mode 100644 arch/riscv/include/asm/mman.h
>>
>>diff --git a/arch/riscv/include/asm/mman.h b/arch/riscv/include/asm/mman.h
>>new file mode 100644
>>index 000000000000..ef9fedf32546
>>--- /dev/null
>>+++ b/arch/riscv/include/asm/mman.h
>>@@ -0,0 +1,24 @@
>>+/* SPDX-License-Identifier: GPL-2.0 */
>>+#ifndef __ASM_MMAN_H__
>>+#define __ASM_MMAN_H__
>>+
>>+#include
>>+#include
>>+#include
>>+
>>+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
>>+	unsigned long pkey __always_unused)
>>+{
>>+	unsigned long ret = 0;
>>+
>>+	/*
>>+	 * If PROT_WRITE was specified, force it to VM_READ | VM_WRITE.
>>+	 * Only VM_WRITE means shadow stack.
>>+	 */
>>+	if (prot & PROT_WRITE)
>>+		ret = (VM_READ | VM_WRITE);
>>+	return ret;
>>+}
>>+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
>>+
>>+#endif /* ! __ASM_MMAN_H__ */
>>diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
>>index 6066822e7396..4d5983bc6766 100644
>>--- a/arch/riscv/include/asm/pgtable.h
>>+++ b/arch/riscv/include/asm/pgtable.h
>>@@ -184,6 +184,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
>> #define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
>> #define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
>> 					 _PAGE_EXEC | _PAGE_WRITE)
>>+#define PAGE_SHADOWSTACK	__pgprot(_PAGE_BASE | _PAGE_WRITE)
>> #define PAGE_COPY		PAGE_READ
>> #define PAGE_COPY_EXEC		PAGE_READ_EXEC
>>diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
>>index f1c1416a9f1e..846c36b1b3d5 100644
>>--- a/arch/riscv/kernel/sys_riscv.c
>>+++ b/arch/riscv/kernel/sys_riscv.c
>>@@ -8,6 +8,8 @@
>> #include
>> #include
>> #include
>>+#include
>>+#include
>>
>> static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>> 			   unsigned long prot, unsigned long flags,
>>@@ -17,6 +19,15 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>> 	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
>> 		return -EINVAL;
>>+
>>+	/*
>>+	 * If only PROT_WRITE is specified then extend that to PROT_READ.
>>+	 * protection_map[VM_WRITE] is now going to select shadow stack encodings,
>>+	 * so specifying PROT_WRITE actually should select protection_map[VM_WRITE | VM_READ].
>>+	 * If user wants to create shadow stack then they should use `map_shadow_stack` syscall.
>>+	 */
>>+	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
>>+		prot |= PROT_READ;
>>+
>> 	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
>> 			       offset >> (PAGE_SHIFT - page_shift_offset));
>> }
>>diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>>index fa34cf55037b..98e5ece4052a 100644
>>--- a/arch/riscv/mm/init.c
>>+++ b/arch/riscv/mm/init.c
>>@@ -299,7 +299,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
>> static const pgprot_t protection_map[16] = {
>> 	[VM_NONE]		= PAGE_NONE,
>> 	[VM_READ]		= PAGE_READ,
>>-	[VM_WRITE]		= PAGE_COPY,
>>+	[VM_WRITE]		= PAGE_SHADOWSTACK,
>> 	[VM_WRITE | VM_READ]	= PAGE_COPY,
>> 	[VM_EXEC]		= PAGE_EXEC,
>> 	[VM_EXEC | VM_READ]	= PAGE_READ_EXEC,
>>diff --git a/mm/mmap.c b/mm/mmap.c
>>index d89770eaab6b..57a974f49b00 100644
>>--- a/mm/mmap.c
>>+++ b/mm/mmap.c
>>@@ -47,6 +47,7 @@
>> #include
>> #include
>> #include
>>+#include
>> #include
>> #include
>
>
>What happens if someone restricts the permission to PROT_WRITE using
>mprotect()? I would say this is an issue since it would turn the pages
>into shadow stack pages.

Look at this very patch in the series, "riscv/mm : ensure PROT_WRITE leads
to VM_READ | VM_WRITE". It implements `arch_calc_vm_prot_bits` for RISC-V
and enforces that an incoming PROT_WRITE is converted to VM_READ | VM_WRITE,
so such a mapping becomes ordinary read/write memory. This way `mprotect`
can be used to convert a shadow stack page into read/write memory, but not
regular memory into a shadow stack page.