From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 2 Aug 2023 08:14:58 -0700
In-Reply-To: <20230802142737.5572-1-wei.w.wang@intel.com>
Mime-Version: 1.0
References: <20230802142737.5572-1-wei.w.wang@intel.com>
Message-ID: 
Subject: Re: [PATCH v1] KVM: x86/mmu: refactor kvm_tdp_mmu_map
From: Sean Christopherson 
To: Wei Wang 
Cc: pbonzini@redhat.com, bgardon@google.com, dmatlack@google.com,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="us-ascii"
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 02, 2023, Wei Wang wrote:
> The implementation of kvm_tdp_mmu_map is a bit long. It essentially does
> three things:
> 1) adjust the leaf entry level (e.g. 4KB, 2MB or 1GB) to map according to
>    the hugepage configurations;
> 2) map the nonleaf entries of the tdp page table; and
> 3) map the target leaf entry.
>
> Improve the readability by moving the implementation of 2) above into a
> subfunction, kvm_tdp_mmu_map_nonleaf, and removing the unnecessary
> "goto"s. No functional changes intended.

Eh, I prefer the current code from a readability perspective.
I like being able to see the entire flow, and I especially like that this

	if (iter.level == fault->goal_level)
		goto map_target_level;

very clearly and explicitly captures that reaching the goal level means that
it's time to map the target level, whereas IMO this does not, in no small
part because seeing "continue" in a loop makes me think "continue the loop",
not "continue on to the next part of the page fault":

	if (iter->level == fault->goal_level)
		return RET_PF_CONTINUE;

And the existing code follows the pattern of the other page fault paths,
direct_map() and FNAME(fetch). That doesn't necessarily mean that the
existing pattern is "better", but I personally place a lot of value on
consistency.

> +/*
> + * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
> + * page tables and SPTEs to translate the faulting guest physical address.
> + */
> +int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> +{
> +	struct tdp_iter iter;
> +	int ret;
> +
> +	kvm_mmu_hugepage_adjust(vcpu, fault);
> +
> +	trace_kvm_mmu_spte_requested(fault);
> +
> +	rcu_read_lock();
> +
> +	ret = kvm_tdp_mmu_map_nonleafs(vcpu, fault, &iter);
> +	if (ret == RET_PF_CONTINUE)
> +		ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter);

And I also don't like passing in an uninitialized tdp_iter, and then
consuming it too.

>
> -retry:
> 	rcu_read_unlock();
> 	return ret;
> }
> --
> 2.27.0
>