Date: Fri, 16 Jan 2026 17:02:03 -0800
In-Reply-To: <20251121005125.417831-13-rick.p.edgecombe@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251121005125.417831-1-rick.p.edgecombe@intel.com>
 <20251121005125.417831-13-rick.p.edgecombe@intel.com>
Message-ID: 
Subject: Re: [PATCH v4 12/16] x86/virt/tdx: Add helpers to allow for pre-allocating pages
From: Sean Christopherson 
To: Rick Edgecombe 
Cc: bp@alien8.de, chao.gao@intel.com, dave.hansen@intel.com,
 isaku.yamahata@intel.com, kai.huang@intel.com, kas@kernel.org,
 kvm@vger.kernel.org, linux-coco@lists.linux.dev, linux-kernel@vger.kernel.org,
 mingo@redhat.com, pbonzini@redhat.com, tglx@linutronix.de,
 vannapurve@google.com, x86@kernel.org, yan.y.zhao@intel.com,
 xiaoyao.li@intel.com, binbin.wu@intel.com
Content-Type: text/plain; charset="us-ascii"

On Thu, Nov 20, 2025, Rick Edgecombe wrote:
> ---
> v4:
>  - Change to GFP_KERNEL_ACCOUNT to match replaced kvm_mmu_memory_cache
>  - Add GFP_ATOMIC backup, like kvm_mmu_memory_cache has (Kiryl)

LOL, having fun reinventing kvm_mmu_memory_cache? :-D

>  - Explain why not to use mempool (Dave)
>  - Tweak local vars to be more reverse christmas tree by deleting some
>    that were only added for reasons that go away in this patch anyway
> ---
>  arch/x86/include/asm/tdx.h  | 43 ++++++++++++++++++++++++++++++++++++-
>  arch/x86/kvm/vmx/tdx.c      | 21 +++++++++++++-----
>  arch/x86/kvm/vmx/tdx.h      |  2 +-
>  arch/x86/virt/vmx/tdx/tdx.c | 22 +++++++++++++------
>  virt/kvm/kvm_main.c         |  3 ---
>  5 files changed, 75 insertions(+), 16 deletions(-)

> +/*
> + * Simple structure for pre-allocating Dynamic
> + * PAMT pages outside of locks.

As called out in an earlier patch, it's not just PAMT pages.
> + */
> +struct tdx_prealloc {
> +	struct list_head page_list;
> +	int cnt;
> +};
> +
> +static inline struct page *get_tdx_prealloc_page(struct tdx_prealloc *prealloc)
> +{
> +	struct page *page;
> +
> +	page = list_first_entry_or_null(&prealloc->page_list, struct page, lru);
> +	if (page) {
> +		list_del(&page->lru);
> +		prealloc->cnt--;
> +	}
> +
> +	return page;
> +}
> +
> +static inline int topup_tdx_prealloc_page(struct tdx_prealloc *prealloc, unsigned int min_size)
> +{
> +	while (prealloc->cnt < min_size) {
> +		struct page *page = alloc_page(GFP_KERNEL_ACCOUNT);
> +
> +		if (!page)
> +			return -ENOMEM;
> +
> +		list_add(&page->lru, &prealloc->page_list);

Huh, TIL that page->lru is fair game for private usage when the page is
kernel-allocated.

> +		prealloc->cnt++;

>  static int tdx_topup_external_fault_cache(struct kvm_vcpu *vcpu, unsigned int cnt)
>  {
> -	struct vcpu_tdx *tdx = to_tdx(vcpu);
> +	struct tdx_prealloc *prealloc = &to_tdx(vcpu)->prealloc;
> +	int min_fault_cache_size;
>
> -	return kvm_mmu_topup_memory_cache(&tdx->mmu_external_spt_cache, cnt);
> +	/* External page tables */
> +	min_fault_cache_size = cnt;
> +	/* Dynamic PAMT pages (if enabled) */
> +	min_fault_cache_size += tdx_dpamt_entry_pages() * PT64_ROOT_MAX_LEVEL;
> +
> +	return topup_tdx_prealloc_page(prealloc, min_fault_cache_size);
>  }
>
>  static void tdx_free_external_fault_cache(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_tdx *tdx = to_tdx(vcpu);
> +	struct page *page;
>
> -	kvm_mmu_free_memory_cache(&tdx->mmu_external_spt_cache);
> +	while ((page = get_tdx_prealloc_page(&tdx->prealloc)))
> +		__free_page(page);

No.  Either put the ownership of the PAMT cache in arch/x86/virt/vmx/tdx/tdx.c
or use kvm_mmu_memory_cache.  Don't add a custom caching scheme in KVM.
>  /* Number PAMT pages to be provided to TDX module per 2M region of PA */
> -static int tdx_dpamt_entry_pages(void)
> +int tdx_dpamt_entry_pages(void)
>  {
>  	if (!tdx_supports_dynamic_pamt(&tdx_sysinfo))
>  		return 0;
>

A comment here stating the "common" number of entries would be helpful.  I have
no clue as to the magnitude.  E.g. this could be 2 or it could be 200, I
genuinely have no idea.