Date: Fri, 10 Jan 2025 12:02:38 +0200
From: Mike Rapoport
To: Borislav Petkov
Cc: Juergen Gross, linux-kernel@vger.kernel.org, x86@kernel.org,
    Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
    Ingo Molnar, "H. Peter Anvin", Marek Marczykowski-Górecki
Subject: Re: [PATCH] x86/execmem: fix ROX cache usage in Xen PV guests
References: <20250103065631.26459-1-jgross@suse.com> <20250103130044.GEZ3fffHPSmJ3ngPXn@fat_crate.local>
In-Reply-To: <20250103130044.GEZ3fffHPSmJ3ngPXn@fat_crate.local>

On Fri, Jan 03, 2025 at 02:00:44PM +0100, Borislav Petkov wrote:
> Adding the author in Fixes to Cc

Thanks, Boris!

> On Fri, Jan 03, 2025 at 07:56:31AM +0100, Juergen Gross wrote:
> > The recently introduced ROX cache for modules is assuming large page
> > support in 64-bit mode without testing the related feature bit. This
> > results in breakage when running as a Xen PV guest, as in this mode
> > large pages are not supported.

The ROX cache does not assume support for large pages; it just had a bug
when dealing with base pages, and the patch below should fix it.

Restricting the ROX cache to configurations that support large pages makes
sense on its own, because there is no real benefit from the cache on
systems without them, but it does not fix the issue, it only covers it up.
diff --git a/mm/execmem.c b/mm/execmem.c
index be6b234c032e..0090a6f422aa 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -266,6 +266,7 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
 	struct execmem_area *area;
 	unsigned long start, end;
+	unsigned int page_shift;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -296,8 +297,9 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	if (err)
 		goto err_free_mem;
 
+	page_shift = get_vm_area_page_order(vm) + PAGE_SHIFT;
 	err = vmap_pages_range_noflush(start, end, range->pgprot, vm->pages,
-				       PMD_SHIFT);
+				       page_shift);
 	if (err)
 		goto err_free_mem;
-- 
2.45.2

> > Fix that by testing the X86_FEATURE_PSE capability when deciding
> > whether to enable the ROX cache.
> > 
> > Fixes: 2e45474ab14f ("execmem: add support for cache of large ROX pages")
> > Reported-by: Marek Marczykowski-Górecki
> > Tested-by: Marek Marczykowski-Górecki
> > Signed-off-by: Juergen Gross
> > ---
> >  arch/x86/mm/init.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> > index c6d29f283001..62aa4d66a032 100644
> > --- a/arch/x86/mm/init.c
> > +++ b/arch/x86/mm/init.c
> > @@ -1080,7 +1080,8 @@ struct execmem_info __init *execmem_arch_setup(void)
> >  
> >  	start = MODULES_VADDR + offset;
> >  
> > -	if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX)) {
> > +	if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) &&
> > +	    cpu_feature_enabled(X86_FEATURE_PSE)) {
> >  		pgprot = PAGE_KERNEL_ROX;
> >  		flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE;
> >  	} else {
> > -- 
> > 2.43.0
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> https://people.kernel.org/tglx/notes-about-netiquette

-- 
Sincerely yours,
Mike.