From: Thomas Gleixner
To: Peter Zijlstra
Cc: "Kirill A. Shutemov", Dave Hansen, Andy Lutomirski, x86@kernel.org,
    Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, "H. J. Lu",
    Andi Kleen, Rick Edgecombe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [RFCv2 05/10] x86/mm: Provide untagged_addr() helper
In-Reply-To: <87a6bmx5lt.ffs@tglx>
References: <20220511022751.65540-1-kirill.shutemov@linux.intel.com> <20220511022751.65540-7-kirill.shutemov@linux.intel.com> <87a6bmx5lt.ffs@tglx>
Date: Thu, 12 May 2022 17:16:11 +0200
Message-ID: <87sfpevl1g.ffs@tglx>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, May 12 2022 at 16:23, Peter Zijlstra wrote:
> On Thu, May 12, 2022 at 03:06:38PM +0200, Thomas Gleixner wrote:
>
>> #define untagged_addr(addr)	({				\
>> 	u64 __addr = (__force u64)(addr);			\
>> 								\
>> 	__addr &= current->thread.lam_untag_mask;		\
>> 	(__force __typeof__(addr))__addr;			\
>> })
>>
>> No conditionals, fast _and_ correct. Setting this untag mask up once
>> when LAM is enabled is not rocket science.
>
> But that goes wrong if someone ever wants to untag a kernel address and
> not use the result for access_ok().
>
> I'd feel better about something like:
>
>	s64 __addr = (addr);
>	s64 __sign = __addr;
>
>	__sign >>= 63;
>	__sign &= lam_untag_mask;

that needs to be

	__sign &= ~lam_untag_mask;

>	__addr &= lam_untag_mask;
>	__addr |= __sign;
>
>	__addr;
>
> Which simply extends bit 63 downwards -- although possibly there's an
> easier way to do that, this is pretty gross.

For the price of a conditional:

	__addr &= lam_untag_mask;
	if (__addr & BIT(63))
		__addr |= ~lam_untag_mask;

Now you have the choice between gross and ugly.

Thanks,

        tglx