From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 23 Apr 2025 21:44:09 -0700
From: Deepak Gupta
To: Radim Krčmář
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andrew Morton, "Liam R. Howlett",
 Vlastimil Babka, Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Conor Dooley, Rob Herring, Krzysztof Kozlowski, Arnd Bergmann,
 Christian Brauner, Peter Zijlstra, Oleg Nesterov, Eric Biederman,
 Kees Cook, Jonathan Corbet, Shuah Khan, Jann Horn, Conor Dooley,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 alistair.francis@wdc.com, richard.henderson@linaro.org,
 jim.shu@sifive.com, andybnac@gmail.com, kito.cheng@sifive.com,
 charlie@rivosinc.com, atishp@rivosinc.com, evan@rivosinc.com,
 cleger@rivosinc.com, alexghiti@rivosinc.com, samitolvanen@google.com,
 broonie@kernel.org, rick.p.edgecombe@intel.com, linux-riscv
Subject: Re: [PATCH v12 12/28] riscv: Implements arch agnostic shadow stack prctls
References: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
 <20250314-v5_user_cfi_series-v12-12-e51202b53138@rivosinc.com>
MIME-Version: 1.0
On Thu, Apr 10, 2025 at 11:45:58AM +0200, Radim Krčmář wrote:
>2025-03-14T14:39:31-07:00, Deepak Gupta :
>> diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
>> @@ -14,7 +15,8 @@ struct kernel_clone_args;
>>  struct cfi_status {
>>  	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
>> -	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 1);
>> +	unsigned long ubcfi_locked : 1;
>> +	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 2);
>
>The rsvd field shouldn't be necessary as the container for the bitfield
>is 'unsigned long' sized.
>
>Why don't we use bools here, though?
>It might produce a better binary and we're not hurting for struct size.

If you remember one of the previous patch discussions, this goes into
`thread_info`. I don't want to bloat it. Even if we end up shoving it into
task_struct, I don't want to bloat that either. I can just convert it into
a bitmask if bitfields are an eyesore here.

>
>> diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
>> @@ -24,6 +24,16 @@ bool is_shstk_enabled(struct task_struct *task)
>> +bool is_shstk_allocated(struct task_struct *task)
>> +{
>> +	return task->thread_info.user_cfi_state.shdw_stk_base ? true : false;
>
>I think that the following is clearer:
>
>	return task->thread_info.user_cfi_state.shdw_stk_base;
>
>(Similar for all other implicit conversion ternaries.)

Hmm... noted.

>
>> @@ -42,6 +52,26 @@ void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
>> +void set_shstk_status(struct task_struct *task, bool enable)
>> +{
>> +	if (!cpu_supports_shadow_stack())
>> +		return;
>> +
>> +	task->thread_info.user_cfi_state.ubcfi_en = enable ? 1 : 0;
>> +
>> +	if (enable)
>> +		task->thread.envcfg |= ENVCFG_SSE;
>> +	else
>> +		task->thread.envcfg &= ~ENVCFG_SSE;
>> +
>> +	csr_write(CSR_ENVCFG, task->thread.envcfg);
>
>There is a new helper we could reuse for this:
>
>	envcfg_update_bits(task, ENVCFG_SSE, enable ? ENVCFG_SSE : 0);

Yeah, it's in the switch_to.h header. I'll think about it.

>
>> +}
>> @@ -262,3 +292,83 @@ void shstk_release(struct task_struct *tsk)
>> +int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status)
>> +{
>> +	/* Request is to enable shadow stack and shadow stack is not enabled already */
>> +	if (enable_shstk && !is_shstk_enabled(t)) {
>> +		/* shadow stack was allocated and enable request again
>> +		 * no need to support such usecase and return EINVAL.
>> +		 */
>> +		if (is_shstk_allocated(t))
>> +			return -EINVAL;
>> +
>> +		size = calc_shstk_size(0);
>> +		addr = allocate_shadow_stack(0, size, 0, false);
>
>Why don't we use the userspace-allocated stack?
>
>I'm completely missing the design idea here... Userspace has absolute
>control over the shadow stack pointer CSR, so we don't need to do much in
>Linux:
>
>1. interface to set up page tables with -W- PTE and
>2. interface to control senvcfg.SSE.
>
>Userspace can do the rest.

The design is like the following: when a user task wants to enable shadow
stack for itself, it has to issue a syscall to the kernel (like this
prctl). It could do this independently by first issuing `map_shadow_stack`,
then asking the kernel to light up the envcfg bit, and eventually writing
to the CSR when the return to usermode happens. That is no different from
doing all of the above together in a single `prctl` call; the two are
equivalent.

Background is that x86 followed this approach because x86 had
workloads/binaries/functions with (deeply) recursive functions and thus by
default was forced to always allocate a shadow stack of the same size as
the data stack.
To reduce the burden on userspace of determining and then allocating a
shadow stack of the same size as the data stack, the prctl does the job of
calculating the default shadow stack size (and reduces programming error
in usermode). arm64 followed suit. I don't want to find out what
compatibility issues we would run into, so I am just following suit as
well (given that both approaches are equivalent). Take a look at static
`calc_shstk_size(unsigned long size)`.

Coming back to your question of why we don't let userspace manage its own
shadow stack: it can manage its own shadow stack. If it does, it just has
to be aware of the size it is allocating for the shadow stack. There is
already a patch series going on to manage this using clone3:

https://lore.kernel.org/all/20250408-clone3-shadow-stack-v15-4-3fa245c6e3be@kernel.org/

I fully expect green-thread implementations in rust/go, or
swapcontext-based thread management, to do this on their own. The current
design is to ensure that existing apps don't have to change a lot in
userspace, and by default the kernel provides compatibility. Anyone else
wanting to optimize the usage of shadow stack can do so with the current
design.

>
>> +int arch_lock_shadow_stack_status(struct task_struct *task,
>> +				  unsigned long arg)
>> +{
>> +	/* If shtstk not supported or not enabled on task, nothing to lock here */
>> +	if (!cpu_supports_shadow_stack() ||
>> +	    !is_shstk_enabled(task) || arg != 0)
>> +		return -EINVAL;
>
>The task might want to prevent shadow stack from being enabled?

But why would it want to do that? The task can simply not issue the prctl.
There are glibc tunables as well, using which it can be disabled.

>
>Thanks.