Date: Wed, 13 Apr 2022 17:39:20 +0300
From: Mike Rapoport
To: David Hildenbrand
Cc: Dave Hansen, "Kirill A. Shutemov", Borislav Petkov, Andy Lutomirski,
	Sean Christopherson, Andrew Morton, Joerg Roedel, Ard Biesheuvel,
	Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Varad Gautam, Dario Faggioli,
	Brijesh Singh, x86@kernel.org, linux-mm@kvack.org,
	linux-coco@lists.linux.dev, linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org, Mike Rapoport
Subject: Re: [PATCHv4 1/8] mm: Add support for unaccepted memory
References: <20220405234343.74045-1-kirill.shutemov@linux.intel.com>
 <20220405234343.74045-2-kirill.shutemov@linux.intel.com>
 <93a7cfdf-02e6-6880-c563-76b01c9f41f5@intel.com>

On Wed, Apr 13, 2022 at 12:36:11PM +0200, David Hildenbrand wrote:
> On 12.04.22 18:08, Dave Hansen wrote:
> > On 4/12/22 01:15, David Hildenbrand wrote:
> >
> > The other option might be to tie this all to DEFERRED_STRUCT_PAGE_INIT.
> > Have the rule that everything that gets a 'struct page' must be
> > accepted. If you want to do delayed acceptance, you do it via
> > DEFERRED_STRUCT_PAGE_INIT.
>
> That could also be an option, yes. At least being able to choose would
> be good. But IIRC, DEFERRED_STRUCT_PAGE_INIT will still make the system
> get stuck during boot and wait until everything has been accepted.

The deferred page init runs multithreaded, so a guest with SMP will be
stuck for less time.

> I see the following variants:
>
> 1) Slow boot; after boot, all memory is already accepted.
> 2) Fast boot; after boot, all memory will slowly but steadily get
> accepted in the background. After a while, all memory is accepted and
> can be signaled to user space.
> 3) Fast boot; after boot, memory gets accepted on demand. This is what
> we have in this series.
>
> I somehow don't quite like 3), but with deferred population in the
> hypervisor, it might just make sense.

IMHO, deferred population in the hypervisor will be way more complex than
this series, with similar "visible" performance.

-- 
Sincerely yours,
Mike.
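
[Editorial illustration of variant 3 discussed above: a minimal, self-contained
userspace sketch of what on-demand acceptance boils down to. The bitmap layout
and the accept_page()/alloc_page() names are illustrative assumptions, not code
from Kirill's series.]

/*
 * Userspace sketch of on-demand acceptance (variant 3 above): a bitmap
 * marks which "pages" are still unaccepted, and the allocator accepts a
 * page the first time it hands it out, so boot stays fast and the cost
 * is paid lazily. All names here are illustrative, not from the series.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 64

/* One bit per page: 1 = not yet accepted by the guest. */
static unsigned long long unaccepted_bitmap = ~0ULL;

/* Stand-in for the real accept operation (TDX/SNP specific in practice). */
static void accept_page(unsigned int pfn)
{
	printf("accepting page %u on first use\n", pfn);
	unaccepted_bitmap &= ~(1ULL << pfn);
}

static bool page_is_unaccepted(unsigned int pfn)
{
	return (unaccepted_bitmap >> pfn) & 1;
}

/* Allocator hook: accept lazily, only when a page is actually handed out. */
static unsigned int alloc_page(void)
{
	static unsigned int next_pfn;
	unsigned int pfn = next_pfn++ % NR_PAGES;

	if (page_is_unaccepted(pfn))
		accept_page(pfn);
	return pfn;
}

int main(void)
{
	/* Boot did not touch most memory; pages get accepted on demand. */
	for (int i = 0; i < 4; i++)
		printf("got page %u\n", alloc_page());
	return 0;
}

In the series itself the tracking structure and the acceptance hook live in
the kernel's boot code and page allocator rather than in userspace, but the
lazy "accept on first allocation" shape of the trade-off is the same one
being weighed against DEFERRED_STRUCT_PAGE_INIT above.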