Message-ID: <039c9eada49e9eeca48c39c5bea0fac0ab68c673.camel@intel.com>
Subject: Re: [RFC PATCH v5 038/104] KVM: x86/mmu: Allow per-VM override of the TDP max page level
From: Kai Huang
To: Sean Christopherson
Cc: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl
Date: Mon, 04 Apr 2022 12:41:16 +1200
References: <5cc4b1c90d929b7f4f9829a42c0b63b52af0c1ed.1646422845.git.isaku.yamahata@intel.com> <43098446667829fc592b7cc7d5fd463319d37562.camel@intel.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, 2022-04-02 at 00:08 +0000, Sean Christopherson wrote:
> On Sat, Apr 02, 2022, Kai Huang wrote:
> > On Fri, 2022-04-01 at 14:08 +0000, Sean Christopherson wrote:
> > > On Fri, Apr 01, 2022, Kai Huang wrote:
> > > > On Fri, 2022-03-04 at 11:48 -0800, isaku.yamahata@intel.com wrote:
> > > > > From: Sean Christopherson
> > > > >
> > > > > In the existing x86 KVM MMU code, there is already max_level member in
> > > > > struct kvm_page_fault with KVM_MAX_HUGEPAGE_LEVEL initial value.  The KVM
> > > > > page fault handler denies page size larger than max_level.
> > > > >
> > > > > Add per-VM member to indicate the allowed maximum page size with
> > > > > KVM_MAX_HUGEPAGE_LEVEL as default value and initialize max_level in struct
> > > > > kvm_page_fault with it.
> > > > >
> > > > > For the guest TD, the set per-VM value for allows maximum page size to 4K
> > > > > page size.  Then only allowed page size is 4K.  It means large page is
> > > > > disabled.
> > > >
> > > > Do not support large page for TD is the reason that you want this change, but
> > > > not the result.  Please refine a little bit.
> > >
> > > Not supporting huge pages was fine for the PoC, but I'd prefer not to merge TDX
> > > without support for huge pages.  Has any work been put into enabling huge pages?
> > > If so, what's the technical blocker?  If not...
> >
> > Hi Sean,
> >
> > Is there any reason large page support must be included in the initial merge of
> > TDX?  Large page is more about performance improvement I think.  Given this
> > series is already very big, perhaps we can do it later.
>
> I'm ok punting 1gb for now, but I want to have a high level of confidence that 2mb
> pages will work without requiring significant churn in KVM on top of the initial
> TDX support.  I suspect gaining that level of confidence will mean getting 95%+ of
> the way to a fully working code base.  IIRC, 2mb wasn't expected to be terrible, it
> was 1gb support where things started to get messy.

OK no argument here :)

-- 
Thanks,
-Kai