Date: Fri, 3 Apr 2026 15:22:40 +0300
From: Mike Rapoport
To: x86@kernel.org
Cc: Borislav Petkov, Bert Karwatzki, Dave Hansen, Ingo Molnar,
	"H. Peter Anvin", Peter Zijlstra, Thomas Gleixner,
	linux-kernel@vger.kernel.org, kernel test robot
Subject: Re: [PATCH v2] x86/alternative: delay freeing of smp_locks section
Message-ID:
References: <20260330191000.1190533-1-rppt@kernel.org>
In-Reply-To: <20260330191000.1190533-1-rppt@kernel.org>

On Mon, Mar 30, 2026 at 10:10:00PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> On SMP systems alternative_instructions() frees the memory occupied by
> the smp_locks section immediately after patching the lock instructions.
>
> The memory is freed using free_init_pages(), which calls
> free_reserved_area(), which essentially does __free_page() for every
> page in the range.
>
> Until recently this did not update memblock state, so in cases where
> CONFIG_ARCH_KEEP_MEMBLOCK is enabled (on x86 it is selected by
> INTEL_TDX_HOST), the state of memblock and the memory map could become
> inconsistent.
>
> Additionally, with CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled, freeing of
> smp_locks happens before the memory map is fully initialized, and
> freeing reserved memory may cause an access to a not-yet-initialized
> struct page when __free_page() searches for a buddy page.
>
> Following the discussion in [1], the implementations of
> memblock_free_late() and free_reserved_area() were unified to ensure
> that reserved memory freed after memblock transfers its pages to the
> buddy allocator is actually freed, and that memblock and the memory map
> stay consistent.
> As part of these changes, free_reserved_area() now WARN()s when it is
> called before the initialization of the memory map is complete.
>
> The memory map is fully initialized in page_alloc_init_late(), which
> completes before initcalls are executed, so it is safe to free reserved
> memory in any initcall except early_initcall().
>
> Move freeing of the smp_locks section to an initcall to ensure it
> happens after the memory map is fully initialized. Since it does not
> matter exactly which initcall is used, and the code lives in arch/,
> pick arch_initcall.
>
> [1] https://lore.kernel.org/all/ec2aaef14783869b3be6e3c253b2dcbf67dbc12a.camel@kernel.crashing.org

Forgot to add:

Fixes: b2129a39511b ("memblock: make free_reserved_area() update memblock if ARCH_KEEP_MEMBLOCK=y")

> Reported-by: Bert Karwatzki
> Reported-by: kernel test robot
> Closes: https://lore.kernel.org/oe-lkp/202603302154.b50adaf1-lkp@intel.com
> Tested-by: Bert Karwatzki
> Link: https://lore.kernel.org/r/20260327140109.7561-1-spasswolf@web.de
> Signed-off-by: Mike Rapoport (Microsoft)
> ---
>  arch/x86/kernel/alternative.c | 22 +++++++++++++++++-----
>  1 file changed, 17 insertions(+), 5 deletions(-)

-- 
Sincerely yours,
Mike.