From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 4 Aug 2023 15:30:24 -0700
In-Reply-To: <20230705080756.xv7fm3jxewipunvn@linux.intel.com>
References: <20230704075054.3344915-1-stevensd@google.com>
	<20230704075054.3344915-5-stevensd@google.com>
	<20230705080756.xv7fm3jxewipunvn@linux.intel.com>
Message-ID: 
Subject: Re: [PATCH v7 4/8] KVM: x86/mmu: Migrate to __kvm_follow_pfn
From: Sean Christopherson 
To: Yu Zhang 
Cc: David Stevens , Marc Zyngier , Michael Ellerman , Peter Xu ,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org
Content-Type: text/plain; charset="us-ascii"
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jul 05, 2023, Yu Zhang wrote:
> On Tue, Jul 04, 2023 at 04:50:49PM +0900, David Stevens wrote:
> > From: David Stevens 
> >
> > Migrate from __gfn_to_pfn_memslot to __kvm_follow_pfn.

Please turn up your changelog verbosity from ~2 to ~8.  E.g. explain the
transition from async => FOLL_NOWAIT+KVM_PFN_ERR_NEEDS_IO, there's no
reason to force readers to suss that out on their own.
> > Signed-off-by: David Stevens 
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 35 +++++++++++++++++++++++++----------
> >  1 file changed, 25 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index ec169f5c7dce..e44ab512c3a1 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4296,7 +4296,12 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
> >  static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  {
> >  	struct kvm_memory_slot *slot = fault->slot;
> > -	bool async;
> > +	struct kvm_follow_pfn foll = {
> > +		.slot = slot,
> > +		.gfn = fault->gfn,
> > +		.flags = FOLL_GET | (fault->write ? FOLL_WRITE : 0),
> > +		.allow_write_mapping = true,
> > +	};
> >  
> >  	/*
> >  	 * Retry the page fault if the gfn hit a memslot that is being deleted
> > @@ -4325,12 +4330,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  		return RET_PF_EMULATE;
> >  	}
> >  
> > -	async = false;
> > -	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
> > -					  fault->write, &fault->map_writable,
> > -					  &fault->hva);
> > -	if (!async)
> > -		return RET_PF_CONTINUE; /* *pfn has correct page already */
> > +	foll.flags |= FOLL_NOWAIT;
> > +	fault->pfn = __kvm_follow_pfn(&foll);
> > +
> > +	if (!is_error_noslot_pfn(fault->pfn))
> > +		goto success;
> > +
> > +	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
> > +		return RET_PF_CONTINUE;
> 
> IIUC, FOLL_NOWAIT is set only when we wanna an async fault. So
> KVM_PFN_ERR_NEEDS_IO may not be necessary?

But FOLL_NOWAIT is set above.  This logic is essentially saying "bail
immediately if __gfn_to_pfn_memslot() returned a fatal error".  A comment
would definitely be helpful though.  How about?
	/*
	 * If __kvm_follow_pfn() failed because I/O is needed to fault in the
	 * page, then either set up an asynchronous #PF to do the I/O, or if
	 * doing an async #PF isn't possible, retry __kvm_follow_pfn() with
	 * I/O allowed.  All other failures are fatal, i.e. retrying won't help.
	 */
	if (fault->pfn != KVM_PFN_ERR_NEEDS_IO)
		return RET_PF_CONTINUE;