Date: Tue, 31 Mar 2026 14:16:25 +0300
From: Mike Rapoport
To: Peter Zijlstra
Cc: x86@kernel.org, Borislav Petkov, Bert Karwatzki, Dave Hansen,
	Ingo Molnar, "H. Peter Anvin", Thomas Gleixner,
	linux-kernel@vger.kernel.org, kernel test robot
Subject: Re: [PATCH v2] x86/alternative: delay freeing of smp_locks section
References: <20260330191000.1190533-1-rppt@kernel.org>
	<20260330192737.GD3558198@noisy.programming.kicks-ass.net>
In-Reply-To: <20260330192737.GD3558198@noisy.programming.kicks-ass.net>

On Mon, Mar 30, 2026 at 09:27:37PM +0200, Peter Zijlstra wrote:
> On Mon, Mar 30, 2026 at 10:10:00PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > On SMP systems alternative_instructions() frees the memory occupied by
> > the smp_locks section immediately after patching the lock instructions.
> >
> > The memory is freed using free_init_pages(), which calls
> > free_reserved_area(), which essentially does __free_page() for every
> > page in the range.
> >
> > Up until recently it did not update memblock state, so in cases when
> > CONFIG_ARCH_KEEP_MEMBLOCK is enabled (on x86 it is selected by
> > INTEL_TDX_HOST) the state of memblock and the memory map would be
> > inconsistent.
> >
> > Additionally, with CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled, freeing of
> > smp_locks happens before the memory map is fully initialized, and freeing
> > reserved memory may cause an access to a not-yet-initialized struct page
> > when __free_page() searches for a buddy page.
> >
> > Following the discussion in [1], the implementations of
> > memblock_free_late() and free_reserved_area() were unified to ensure
> > that reserved memory freed after memblock transfers its pages to the
> > buddy allocator is actually freed, and that memblock and the memory map
> > stay consistent. As part of these changes, free_reserved_area() now
> > WARN()s when it is called before initialization of the memory map is
> > complete.
> >
> > The memory map is fully initialized in page_alloc_init_late(), which
> > completes before initcalls are executed, so it is safe to free reserved
> > memory in any initcall except early_initcall().
> >
> > Move freeing of the smp_locks section to an initcall to ensure it
> > happens after the memory map is fully initialized. Since it does not
> > matter exactly which initcall is used and the code lives in arch/, pick
> > arch_initcall.
>
> Silly question, why not put the .smp_locks in
> __init_begin[],__init_end[] right next to .altinstr such that it gets
> freed by free_initmem() ?

Because it's not always freed? :)

-- 
Sincerely yours,
Mike.
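
[Editor's note: a minimal sketch of the approach described in the patch
discussion above, not the actual patch. It assumes the usual x86 section
symbols __smp_locks[]/__smp_locks_end[] and the x86 free_init_pages()
helper; the function name smp_locks_free() is hypothetical.]

```c
#include <linux/init.h>

/* Linker-provided bounds of the .smp_locks section (arch/x86). */
extern char __smp_locks[], __smp_locks_end[];

static int __init smp_locks_free(void)
{
	/*
	 * Initcalls run after page_alloc_init_late(), so by now every
	 * struct page is initialized and free_reserved_area() (reached
	 * via free_init_pages()) will not walk uninitialized pages when
	 * __free_page() looks up buddies.
	 */
	free_init_pages("SMP alternatives",
			(unsigned long)__smp_locks,
			(unsigned long)__smp_locks_end);
	return 0;
}
/*
 * Any level later than early_initcall() would be safe; arch_initcall()
 * is picked because the code lives under arch/.
 */
arch_initcall(smp_locks_free);
```

This replaces the direct free_init_pages() call that previously ran inside
alternative_instructions(), deferring the free until the memory map is
complete.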