public inbox for kvm@vger.kernel.org
From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Li RongQing <lirongqing@baidu.com>
Cc: kvm@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH][v2] KVM: x86/mmu: fix counting of rmap entries in pte_list_add
Date: Fri, 25 Sep 2020 09:43:33 -0700	[thread overview]
Message-ID: <20200925164332.GA31528@linux.intel.com> (raw)
In-Reply-To: <1600837138-21110-1-git-send-email-lirongqing@baidu.com>

On Wed, Sep 23, 2020 at 12:58:58PM +0800, Li RongQing wrote:
> counting of rmap entries was missed when desc->sptes is full
> and desc->more is NULL
> 
> and merging two PTE_LIST_EXT-1 check as one, to avoids the
> extra comparison to give slightly optimization

Please write complete sentences, and use proper capitalization and punctuation.
It's not a big deal for short changelogs, but it's crucial for readability of
larger changelogs.

E.g.

  Fix an off-by-one style bug in pte_list_add() where it failed to account
  the last full set of SPTEs, i.e. when desc->sptes is full and desc->more
  is NULL.

  Merge the two "PTE_LIST_EXT-1" checks as part of the fix to avoid an
  extra comparison.

> Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>

No need to give me credit; I just nitpicked the code.  Identifying the bug
and the fix was all you. :-)

Thanks for the fix!

> Signed-off-by: Li RongQing <lirongqing@baidu.com>

Paolo,

Although it's a bug fix, I don't think this needs a Fixes / Cc:stable.  The bug
only results in rmap recycling being delayed by one rmap.  Stable kernels can
probably live with an off-by-one bug given that RMAP_RECYCLE_THRESHOLD is
completely arbitrary. :-)

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

> ---
> diff with v1: merge two check as one
> 
>  arch/x86/kvm/mmu/mmu.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a5d0207e7189..c4068be6bb3f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1273,12 +1273,14 @@ static int pte_list_add(struct kvm_vcpu *vcpu, u64 *spte,
>  	} else {
>  		rmap_printk("pte_list_add: %p %llx many->many\n", spte, *spte);
>  		desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
> -		while (desc->sptes[PTE_LIST_EXT-1] && desc->more) {
> -			desc = desc->more;
> +		while (desc->sptes[PTE_LIST_EXT-1]) {
>  			count += PTE_LIST_EXT;
> -		}
> -		if (desc->sptes[PTE_LIST_EXT-1]) {
> -			desc->more = mmu_alloc_pte_list_desc(vcpu);
> +
> +			if (!desc->more) {
> +				desc->more = mmu_alloc_pte_list_desc(vcpu);
> +				desc = desc->more;
> +				break;
> +			}
>  			desc = desc->more;
>  		}
>  		for (i = 0; desc->sptes[i]; ++i)
> -- 
> 2.16.2
> 

Thread overview: 3+ messages
2020-09-23  4:58 [PATCH][v2] KVM: x86/mmu: fix counting of rmap entries in pte_list_add Li RongQing
2020-09-25 16:43 ` Sean Christopherson [this message]
2020-09-26  6:34   ` Li,Rongqing
