Date: Sat, 20 Apr 2024 07:22:50 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Song Liu
Subject: Re: [PATCH v4 05/15] mm: introduce execmem_alloc() and execmem_free()
Cc: Mark Rutland, x86@kernel.org, Peter Zijlstra, Catalin Marinas,
    Russell King, linux-mm@kvack.org, Donald Dutile, sparclinux@vger.kernel.org,
    linux-riscv@lists.infradead.org, Nadav Amit, linux-arch@vger.kernel.org,
    linux-s390@vger.kernel.org, Helge Deller, Huacai Chen, Luis Chamberlain,
    linux-mips@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    Alexandre Ghiti, Will Deacon, Heiko Carstens, Steven Rostedt,
    loongarch@lists.linux.dev, Thomas Gleixner, bpf@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, Thomas Bogendoerfer,
    linux-parisc@vger.kernel.org,
    Puranjay Mohan, netdev@vger.kernel.org, Kent Overstreet,
    linux-kernel@vger.kernel.org, Dinh Nguyen, Björn Töpel, Eric Chanudet,
    Palmer Dabbelt, Andrew Morton, Rick Edgecombe,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller",
    linux-modules@vger.kernel.org

On Fri, Apr 19, 2024 at 02:42:16PM -0700, Song Liu wrote:
> On Fri, Apr 19, 2024 at 1:00 PM Mike Rapoport wrote:
> >
> > On Fri, Apr 19, 2024 at 10:32:39AM -0700, Song Liu wrote:
> > > On Fri, Apr 19, 2024 at 10:03 AM Mike Rapoport wrote:
> > > [...]
> > > > > >
> > > > > > [1] https://lore.kernel.org/all/20240411160526.2093408-1-rppt@kernel.org
> > > > >
> > > > > For the ROX to work, we need different users (module text, kprobe, etc.) to have
> > > > > the same execmem_range. From [1]:
> > > > >
> > > > > static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
> > > > > {
> > > > >         ...
> > > > >         p = __execmem_cache_alloc(size);
> > > > >         if (p)
> > > > >                 return p;
> > > > >         err = execmem_cache_populate(range, size);
> > > > >         ...
> > > > > }
> > > > >
> > > > > We are calling __execmem_cache_alloc() without the range. For this to work,
> > > > > we can only call execmem_cache_alloc() with one execmem_range.
> > > >
> > > > Actually, on x86 this will "just work" because everything shares the same
> > > > address space :)
> > > >
> > > > The 2M pages in the cache will be in the modules space, so
> > > > __execmem_cache_alloc() will always return memory from that address space.
> > > >
> > > > For other architectures this indeed needs to be fixed by passing the
> > > > range to __execmem_cache_alloc() and limiting the search in the cache to
> > > > that range.
> > >
> > > I think we at least need the "map to" concept (initially proposed by Thomas)
> > > to get this to work. For example, EXECMEM_BPF and EXECMEM_KPROBE
> > > map to EXECMEM_MODULE_TEXT, so that all of these actually share
> > > the same range.
> >
> > Why?
>
> IIUC, we need to update __execmem_cache_alloc() to take a range pointer as
> input. Module text will use the "range" for EXECMEM_MODULE_TEXT, while kprobe
> will use the "range" for EXECMEM_KPROBE. Without the "map to" concept or sharing
> the "range" object, we will have to compare different range parameters to check
> whether we can share cached pages between module text and kprobe, which is not
> efficient. Did I miss something?

We can always share large ROX pages as long as they are within the correct
address space. The permissions for them are ROX, and the alignment
differences are due to KASAN; that is handled when the large page is
allocated to refill the cache. __execmem_cache_alloc() only needs to limit
its search to the address space of the range.

And regardless, the way we deal with sharing of the cache can be sorted out
later.

> Thanks,
> Song

--
Sincerely yours,
Mike.
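
P.S. To make the above a bit more concrete, here is a rough sketch of passing
the range down and limiting the cache lookup to its address space. This is not
code from the series in [1]: struct execmem_area, execmem_cache.free_areas and
execmem_cache_take() are placeholders made up for illustration, and the only
thing assumed about struct execmem_range is that it has start and end.

#include <linux/list.h>
#include <linux/execmem.h>

/* placeholders for illustration only, not the structures from [1] */
struct execmem_area {
	struct list_head list;
	unsigned long addr;
	size_t size;
};

static struct {
	struct list_head free_areas;
} execmem_cache = {
	.free_areas = LIST_HEAD_INIT(execmem_cache.free_areas),
};

/* placeholder: split @size bytes off @area and return a pointer to them */
static void *execmem_cache_take(struct execmem_area *area, size_t size);

/* as in the snippet quoted above */
static int execmem_cache_populate(struct execmem_range *range, size_t size);

static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
{
	struct execmem_area *area;

	list_for_each_entry(area, &execmem_cache.free_areas, list) {
		/* skip cached pages outside this range's address space */
		if (area->addr < range->start ||
		    area->addr + size > range->end)
			continue;

		if (area->size >= size)
			return execmem_cache_take(area, size);
	}

	return NULL;
}

static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
{
	void *p;
	int err;

	/* pass the range down so the lookup stays in its address space */
	p = __execmem_cache_alloc(range, size);
	if (p)
		return p;

	err = execmem_cache_populate(range, size);
	if (err)
		return NULL;

	return __execmem_cache_alloc(range, size);
}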