From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Ben Gardon <bgardon@google.com>
Subject: Re: [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask
Date: Wed, 4 Nov 2020 16:44:12 -0800 [thread overview]
Message-ID: <20201105004412.GA24605@linux.intel.com> (raw)
In-Reply-To: <e3d68b2b-2af6-04ce-c5f6-47786d9a15bb@redhat.com>
On Thu, Oct 29, 2020 at 08:08:48AM +0100, Paolo Bonzini wrote:
> On 28/10/20 16:29, Sean Christopherson wrote:
> > The naming and usage also aligns with the kernel, which defines PAGE, PMD and
> > PUD masks, and has near identical usage patterns.
> >
> > #define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
> > #define PAGE_MASK (~(PAGE_SIZE-1))
> >
> > #define PMD_PAGE_SIZE (_AC(1, UL) << PMD_SHIFT)
> > #define PMD_PAGE_MASK (~(PMD_PAGE_SIZE-1))
> >
> > #define PUD_PAGE_SIZE (_AC(1, UL) << PUD_SHIFT)
> > #define PUD_PAGE_MASK (~(PUD_PAGE_SIZE-1))
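For reference, a minimal standalone sketch of that kernel usage pattern, assuming
the usual x86-64 PMD_SHIFT of 21 and an arbitrary example address:

#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_PAGE_SIZE	(1UL << PMD_SHIFT)
#define PMD_PAGE_MASK	(~(PMD_PAGE_SIZE - 1))

int main(void)
{
	unsigned long addr = 0x123456789abcUL;

	printf("2MiB base:   %#lx\n", addr & PMD_PAGE_MASK);	/* round down to a PMD boundary */
	printf("2MiB offset: %#lx\n", addr & ~PMD_PAGE_MASK);	/* offset within the 2MiB page */
	return 0;
}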
>
> Well, PAGE_MASK is also one of my pet peeves for Linux. At least I am
> consistent. :)
>
> >> and of course if you're debugging it you have to
> >> look closer and check if it's really "x & -y" or "x & ~y", but at least
> >> in normal cursory code reading that's how it works for me.
> >
> > IMO, "x & -y" has a higher barrier to entry, especially when the kernel's page
> > masks use "x & ~(y - 1)". But my opinion is definitely colored by my
> > inability to read two's-complement on the fly.
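For illustration, a minimal standalone sketch of the equivalence under discussion:
for any power-of-two y, "x & -y" and "x & ~(y - 1)" compute the same mask (the
values below are arbitrary).

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long x = 0x123456789abcUL;
	unsigned long y = 1UL << 21;	/* power of two, e.g. a 2MiB page size */

	/* In two's complement, -y == ~y + 1 == ~(y - 1), so for a power-of-two
	 * y both forms clear the low log2(y) bits of x.
	 */
	assert((x & -y) == (x & ~(y - 1)));
	printf("%#lx\n", x & ~(y - 1));	/* x rounded down to a multiple of y */
	return 0;
}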
>
> Fair enough. What about having instead
>
> #define KVM_HPAGE_GFN_BASE(gfn, level) \
> 	((gfn) & ~(KVM_PAGES_PER_HPAGE(level) - 1))
> #define KVM_HPAGE_GFN_INDEX(gfn, level) \
> 	((gfn) & (KVM_PAGES_PER_HPAGE(level) - 1))
>
> ?
Hmm, not awful? What about OFFSET instead of INDEX, to pair with page offset?
I don't particularly love either one, but I can't think of anything better.
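For concreteness, a minimal compilable sketch of the proposal, using the OFFSET
name floated above. KVM_PAGES_PER_HPAGE() is re-derived here only so the example
stands alone (the real arch/x86 definition gets the same per-level page counts
via KVM_HPAGE_SIZE()/PAGE_SIZE), and the GFN and level values are arbitrary:

/* Sketch only, not the actual KVM code. */
#include <stdio.h>

typedef unsigned long long gfn_t;

/* 512 base pages per level step: 1 for 4KiB, 512 for 2MiB, 512*512 for 1GiB. */
#define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
#define KVM_PAGES_PER_HPAGE(x)	(1ULL << KVM_HPAGE_GFN_SHIFT(x))

#define KVM_HPAGE_GFN_BASE(gfn, level) \
	((gfn) & ~(KVM_PAGES_PER_HPAGE(level) - 1))
#define KVM_HPAGE_GFN_OFFSET(gfn, level) \
	((gfn) & (KVM_PAGES_PER_HPAGE(level) - 1))

int main(void)
{
	gfn_t gfn = 0x12345;
	int level = 2;	/* a 2MiB hugepage spans 512 base GFNs */

	printf("base GFN: %#llx\n", KVM_HPAGE_GFN_BASE(gfn, level));
	printf("offset:   %#llx\n", KVM_HPAGE_GFN_OFFSET(gfn, level));
	return 0;
}

Whether the second helper ends up named INDEX or OFFSET is purely the naming
question above; the masks are identical either way.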
Thread overview: 12+ messages
2020-10-27 21:42 [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Sean Christopherson
2020-10-27 21:42 ` [PATCH 1/3] KVM: x86/mmu: Add helper macro for computing " Sean Christopherson
2020-10-27 22:17 ` Ben Gardon
2020-10-27 22:46 ` Sean Christopherson
2020-10-27 21:42 ` [PATCH 2/3] KVM: x86/mmu: Open code GFN "rounding" in TDP MMU Sean Christopherson
2020-10-27 22:13 ` Ben Gardon
2020-10-27 21:43 ` [PATCH 3/3] KVM: x86/mmu: Use hugepage GFN mask to compute GFN offset mask Sean Christopherson
2020-10-27 22:09 ` Ben Gardon
2020-10-27 22:15 ` Sean Christopherson
2020-10-28 15:01 ` [PATCH 0/3] KVM: x86/mmu: Add macro for hugepage GFN mask Paolo Bonzini
[not found] ` <20201028152948.GA7584@linux.intel.com>
2020-10-29 7:08 ` Paolo Bonzini
2020-11-05 0:44 ` Sean Christopherson [this message]