Date: Thu, 24 Apr 2025 11:03:59 -0700
From: Deepak Gupta
To: Radim Krčmář
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
	Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
	Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
	Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
	Shuah Khan, Jann Horn, Conor Dooley, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-riscv@lists.infradead.org, devicetree@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kselftest@vger.kernel.org, alistair.francis@wdc.com,
	richard.henderson@linaro.org, jim.shu@sifive.com, andybnac@gmail.com,
	kito.cheng@sifive.com, charlie@rivosinc.com, atishp@rivosinc.com,
	evan@rivosinc.com, cleger@rivosinc.com, alexghiti@rivosinc.com,
	samitolvanen@google.com, broonie@kernel.org, rick.p.edgecombe@intel.com,
	Zong Li, linux-riscv
Subject: Re: [PATCH v12 05/28] riscv: usercfi state for task and save/restore of CSR_SSP on trap entry/exit
References: <20250314-v5_user_cfi_series-v12-0-e51202b53138@rivosinc.com>
	<20250314-v5_user_cfi_series-v12-5-e51202b53138@rivosinc.com>
On Thu, Apr 24, 2025 at 02:16:32PM +0200, Radim Krčmář wrote:
>2025-04-23T17:23:56-07:00, Deepak Gupta :
>> On Thu, Apr 10, 2025 at 01:04:39PM +0200, Radim Krčmář wrote:
>>>2025-03-14T14:39:24-07:00, Deepak Gupta :
>>>> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
>>>> @@ -147,6 +147,20 @@ SYM_CODE_START(handle_exception)
>>>>
>>>>  	REG_L s0, TASK_TI_USER_SP(tp)
>>>>  	csrrc s1, CSR_STATUS, t0
>>>> +	/*
>>>> +	 * If previous mode was U, capture shadow stack pointer and save it away
>>>> +	 * Zero CSR_SSP at the same time for sanitization.
>>>> +	 */
>>>> +	ALTERNATIVE("nop; nop; nop; nop",
>>>> +		    __stringify(			\
>>>> +			andi s2, s1, SR_SPP;		\
>>>> +			bnez s2, skip_ssp_save;		\
>>>> +			csrrw s2, CSR_SSP, x0;		\
>>>> +			REG_S s2, TASK_TI_USER_SSP(tp);	\
>>>> +		    skip_ssp_save:),
>>>> +		    0,
>>>> +		    RISCV_ISA_EXT_ZICFISS,
>>>> +		    CONFIG_RISCV_USER_CFI)
>>>
>>>(I'd prefer this closer to the user_sp and kernel_sp swap, it's breaking
>>> the flow here. We also already know if we've returned from userspace
>>> or not even without SR_SPP, but reusing the information might tangle
>>> the logic.)
>>
>> If CSR_SCRATCH was 0, then we were coming from the kernel; otherwise flow
>> goes to `.Lsave_context`. If we were coming from kernel mode, flow
>> eventually merges into `.Lsave_context` anyway.
>>
>> So we would be saving CSR_SSP on every kernel --> kernel trap. That would
>> be unnecessary. IIRC, not touching CSR_SSP unnecessarily was one of the
>> first review comments on the early RFC revisions of this series.
>>
>> We could avoid that by branching, when we determine we are coming from
>> user mode, to something like `.Lsave_ssp`, which eventually merges into
>> `.Lsave_context`. If we were coming from the kernel, we would branch to
>> `.Lsave_context` directly and thus skip the SSP save logic.
>> But the number of branches this introduces in early exception handling is
>> equivalent to what the current patches do, so I don't see any value in
>> doing that.
>>
>> Let me know if I am missing something.
>
>Right, it's hard to avoid the extra branches.
>
>I think we could modify the entry point (STVEC), so we start at
>different paths based on kernel/userspace trap and only jump once to the
>common code, like:
>
>  SYM_CODE_START(handle_exception_kernel)
>    /* kernel setup magic */
>    j handle_exception_common
>  SYM_CODE_START(handle_exception_user)
>    /* userspace setup magic */
>  handle_exception_common:

Hmm... this can be done, but it would require constantly rewriting `stvec`:
when going back to user mode, you would have to write `stvec` with the
address of `handle_exception_user`. But then you can easily take an NMI in
that window, and it can become ugly. It needs much more thought and on first
glance feels error prone. It would only really work if we had an extension
that provides a different trap address depending on the mode you're coming
from (arm does that, right? I think x86 FRED does that too).

>
>This is not a suggestion for this series. I would be perfectly happy
>with just a cleaner code.
>
>Would it be possible to hide the ALTERNATIVE ugliness behind a macro and
>move it outside the code block that saves pt_regs?

Sure, I'll do something about it.

>
>Thanks.
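
For the archives, one shape such a macro could take — this is only a sketch
of the idea, not the actual follow-up patch; the macro name `save_user_ssp`
is made up here, and the numeric local label (`998:`) replaces the named
`skip_ssp_save:` so the macro could be expanded more than once without a
symbol clash. It assumes, as in the hunk above, that s1 holds the pre-trap
status and s2 is free to clobber:

```asm
/*
 * Sketch: hide the Zicfiss SSP save behind an assembler macro so the
 * handle_exception flow stays readable. If the trap came from U-mode
 * (SR_SPP clear in the saved status in s1), read-and-zero CSR_SSP and
 * stash the value in thread_info; from S-mode the patched instructions
 * branch over the save. Without Zicfiss, ALTERNATIVE leaves four nops.
 */
.macro save_user_ssp
	ALTERNATIVE("nop; nop; nop; nop",
		    __stringify(			\
			andi  s2, s1, SR_SPP;		\
			bnez  s2, 998f;			\
			csrrw s2, CSR_SSP, x0;		\
			REG_S s2, TASK_TI_USER_SSP(tp);	\
		    998:),
		    0,
		    RISCV_ISA_EXT_ZICFISS,
		    CONFIG_RISCV_USER_CFI)
.endm
```

handle_exception would then just invoke `save_user_ssp` after the
`csrrc s1, CSR_STATUS, t0`, keeping the alternatives plumbing out of the
pt_regs save path.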