From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 29 Jan 2026 06:51:11 -0800
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20260106101646.24809-1-yan.y.zhao@intel.com> <20260106102136.25108-1-yan.y.zhao@intel.com> <2906b4d3b789985917a063d095c4063ee6ab7b72.camel@intel.com>
Subject: Re: [PATCH v3 11/24] KVM: x86/mmu: Introduce kvm_split_cross_boundary_leafs()
From: Sean Christopherson <seanjc@google.com>
To: Yan Zhao
Cc: Vishal Annapurve, Kai Huang, pbonzini@redhat.com, kvm@vger.kernel.org, Fan Du, Xiaoyao Li, Chao Gao, Dave Hansen, thomas.lendacky@amd.com, vbabka@suse.cz, tabba@google.com, david@kernel.org, kas@kernel.org, michael.roth@amd.com, Ira Weiny, linux-kernel@vger.kernel.org, binbin.wu@linux.intel.com, ackerleytng@google.com, nik.borisov@suse.com, Isaku Yamahata, Chao P Peng, francescolavra.fl@gmail.com, sagis@google.com, Rick P Edgecombe, Jun Miao, jgross@suse.com, pgonda@google.com, x86@kernel.org
Content-Type: text/plain; charset="utf-8"

On Thu, Jan 22, 2026, Yan Zhao wrote:
> On Tue, Jan 20, 2026 at 10:02:41AM -0800, Sean Christopherson wrote:
> > On Tue, Jan 20, 2026, Vishal Annapurve wrote:
> > > On Fri, Jan 16, 2026 at 3:39 PM Sean Christopherson wrote:
> > > > And then for the PUNCH_HOLE case, do the math to determine which, if any,
> > > > head and tail pages need to be split, and use the existing APIs to make
> > > > that happen.
> > >
> > > Just a note: Through guest_memfd upstream syncs, we agreed that
> > > guest_memfd will only allow the punch_hole operation for huge page
> > > size-aligned ranges for hugetlb and thp backing, i.e. the PUNCH_HOLE
> > > operation doesn't need to split any EPT mappings for the foreseeable
> > > future.
> >
> > Oh! Right, forgot about that. It's the conversion path that we need to
> > sort out, not PUNCH_HOLE. Thanks for the reminder!
> Hmm, I see.
> However, do you think it's better to leave the splitting logic in PUNCH_HOLE
> as well? e.g., guest_memfd may want to map several folios in a mapping in
> the future, i.e., after *max_order > folio_order(folio).

No, not at this time. That is a _very_ big "if". Coordinating and tracking
contiguous chunks of memory at a larger granularity than the underlying HugeTLB
page size would require significant complexity; I don't see us ever doing that.