From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 10 May 2024 14:02:54 -0700
From: Charlie Jenkins <charlie@rivosinc.com>
To: Deepak Gupta
Cc: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org,
	Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org,
	ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com,
	atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com,
	alexghiti@rivosinc.com, samuel.holland@sifive.com, conor@kernel.org,
	linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-kselftest@vger.kernel.org, corbet@lwn.net, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, robh+dt@kernel.org,
	krzysztof.kozlowski+dt@linaro.org, oleg@redhat.com,
	akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com,
	Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com,
	shuah@kernel.org, brauner@kernel.org, andy.chiu@sifive.com,
	jerry.shih@sifive.com, hankuan.chen@sifive.com, greentime.hu@sifive.com,
	evan@rivosinc.com, xiao.w.wang@intel.com, apatel@ventanamicro.com,
	mchitale@ventanamicro.com, dbarboza@ventanamicro.com, sameo@rivosinc.com,
	shikemeng@huaweicloud.com, willy@infradead.org, vincent.chen@sifive.com,
	guoren@kernel.org, samitolvanen@google.com, songshuaishuai@tinylab.org,
	gerg@kernel.org, heiko@sntech.de, bhe@redhat.com,
	jeeheng.sia@starfivetech.com, cyy@cyyself.name, maskray@google.com,
	ancientmodern4@gmail.com, mathis.salmen@matsal.de,
	cuiyunhui@bytedance.com, bgray@linux.ibm.com, mpe@ellerman.id.au,
	baruch@tkos.co.il, alx@kernel.org, david@redhat.com,
	catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org,
	shr@devkernel.io, deller@gmx.de, omosnace@redhat.com, ojeda@kernel.org,
	jhubbard@nvidia.com
Subject: Re: [PATCH v3 10/29] riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE
References: <20240403234054.2020347-1-debug@rivosinc.com>
 <20240403234054.2020347-11-debug@rivosinc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240403234054.2020347-11-debug@rivosinc.com>
On Wed, Apr 03, 2024 at 04:34:58PM -0700, Deepak Gupta wrote:
> `arch_calc_vm_prot_bits` is implemented on risc-v to return VM_READ |
> VM_WRITE if PROT_WRITE is specified.
> Similarly `riscv_sys_mmap` is
> updated to convert all incoming PROT_WRITE to (PROT_WRITE | PROT_READ).
> This is to make sure that any existing apps using PROT_WRITE still work.
>
> Earlier `protection_map[VM_WRITE]` used to pick read-write PTE encodings.
> Now `protection_map[VM_WRITE]` will always pick PAGE_SHADOWSTACK PTE
> encodings for shadow stack. Above changes ensure that existing apps
> continue to work because underneath kernel will be picking
> `protection_map[VM_WRITE|VM_READ]` PTE encodings.
>
> Signed-off-by: Deepak Gupta
> ---
>  arch/riscv/include/asm/mman.h    | 24 ++++++++++++++++++++++++
>  arch/riscv/include/asm/pgtable.h |  1 +
>  arch/riscv/kernel/sys_riscv.c    | 11 +++++++++++
>  arch/riscv/mm/init.c             |  2 +-
>  mm/mmap.c                        |  1 +
>  5 files changed, 38 insertions(+), 1 deletion(-)
>  create mode 100644 arch/riscv/include/asm/mman.h
>
> diff --git a/arch/riscv/include/asm/mman.h b/arch/riscv/include/asm/mman.h
> new file mode 100644
> index 000000000000..ef9fedf32546
> --- /dev/null
> +++ b/arch/riscv/include/asm/mman.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ASM_MMAN_H__
> +#define __ASM_MMAN_H__
> +
> +#include
> +#include
> +#include
> +
> +static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> +						   unsigned long pkey __always_unused)
> +{
> +	unsigned long ret = 0;
> +
> +	/*
> +	 * If PROT_WRITE was specified, force it to VM_READ | VM_WRITE.
> +	 * Only VM_WRITE means shadow stack.
> +	 */
> +	if (prot & PROT_WRITE)
> +		ret = (VM_READ | VM_WRITE);
> +	return ret;
> +}
> +#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
> +
> +#endif /* ! __ASM_MMAN_H__ */
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 6066822e7396..4d5983bc6766 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -184,6 +184,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
>  #define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
>  #define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
>  					 _PAGE_EXEC | _PAGE_WRITE)
> +#define PAGE_SHADOWSTACK	__pgprot(_PAGE_BASE | _PAGE_WRITE)
>
>  #define PAGE_COPY		PAGE_READ
>  #define PAGE_COPY_EXEC		PAGE_READ_EXEC
> diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
> index f1c1416a9f1e..846c36b1b3d5 100644
> --- a/arch/riscv/kernel/sys_riscv.c
> +++ b/arch/riscv/kernel/sys_riscv.c
> @@ -8,6 +8,8 @@
>  #include
>  #include
>  #include
> +#include
> +#include
>
>  static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>  			   unsigned long prot, unsigned long flags,
> @@ -17,6 +19,15 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
>  	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
>  		return -EINVAL;
>
> +	/*
> +	 * If only PROT_WRITE is specified then extend that to PROT_READ
> +	 * protection_map[VM_WRITE] is now going to select shadow stack encodings.
> +	 * So specifying PROT_WRITE actually should select protection_map [VM_WRITE | VM_READ]
> +	 * If user wants to create shadow stack then they should use `map_shadow_stack` syscall.
> +	 */
> +	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))

The comment says that this should extend to PROT_READ if only PROT_WRITE
is specified. This condition instead checks whether PROT_WRITE is
selected but PROT_READ is not. If prot is (VM_EXEC | VM_WRITE) then it
would be extended to (VM_EXEC | VM_WRITE | VM_READ). This will not
currently cause any issues because both map to the same value in the
protection_map, PAGE_COPY_EXEC; however, this does not seem to be the
intention of this change.
prot == PROT_WRITE better suits the condition explained in the comment.

> +		prot |= PROT_READ;
> +
>  	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
>  			       offset >> (PAGE_SHIFT - page_shift_offset));
>  }
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index fa34cf55037b..98e5ece4052a 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -299,7 +299,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
>  static const pgprot_t protection_map[16] = {
>  	[VM_NONE]		= PAGE_NONE,
>  	[VM_READ]		= PAGE_READ,
> -	[VM_WRITE]		= PAGE_COPY,
> +	[VM_WRITE]		= PAGE_SHADOWSTACK,
>  	[VM_WRITE | VM_READ]	= PAGE_COPY,
>  	[VM_EXEC]		= PAGE_EXEC,
>  	[VM_EXEC | VM_READ]	= PAGE_READ_EXEC,
> diff --git a/mm/mmap.c b/mm/mmap.c
> index d89770eaab6b..57a974f49b00 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -47,6 +47,7 @@
>  #include
>  #include
>  #include
> +#include

It doesn't seem like this is necessary for this patch.

- Charlie

>
>  #include
>  #include
> --
> 2.43.2
>