From: Richard Henderson <richard.henderson@linaro.org>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [PATCH] x86: Implement Linear Address Masking support
Date: Fri, 8 Apr 2022 07:39:31 -0700
Message-ID: <9b91bf33-56db-6718-4f7c-158d8873a71f@linaro.org>
In-Reply-To: <20220407152734.miad3m2aqtbsfin3@black.fi.intel.com>
On 4/7/22 08:27, Kirill A. Shutemov wrote:
>> The fast path does not clear the bits, so you enter the slow path before you
>> get to clearing the bits. You've lost most of the advantage of the tlb
>> already.
>
> Sorry for my ignorance, but what do you mean by fast path here?
>
> My understanding is that it is the case when tlb_hit() is true and you
> don't need to get into tlb_fill(). Are we talking about the same scheme?
We are not. Paolo already mentioned the JIT. One example is tcg_out_tlb_load in
tcg/i386/tcg-target.c.inc; each host architecture has its own implementation of
that in the other tcg/*/ subdirectories.
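For illustration, the generated fast path ends up doing essentially the same
compare as tlb_hit() on the C side. Roughly (a sketch of the comparison only,
not the JIT output):

/* Sketch: the page-bits compare the softmmu fast path performs.
 * Any metadata left in bits 62:48 of vaddr makes this compare fail,
 * so a tagged pointer always falls through to the slow path. */
static bool fast_path_hits(target_ulong tlb_comparator, target_ulong vaddr)
{
    return (vaddr & TARGET_PAGE_MASK) ==
           (tlb_comparator & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
}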
>> I've just now had a browse through the Intel docs, and I see that you're not
>> performing the required modified canonicality check.
>
> The "modified" part is effectively done by clearing (and sign-extending)
> the address before the check.
>
>> While a proper tagged address will have the tag removed in CR2 during a
>> page fault, an improper tagged address (with bit 63 != {47,56}) should
>> have the original address reported to CR2.
>
> Hm. I don't see it in spec. It rather points to other direction:
>
> Page faults report the faulting linear address in CR2. Because LAM
> masking (by sign-extension) applies before paging, the faulting
> linear address recorded in CR2 does not contain the masked
> metadata.
>
# Regardless of the paging mode, the processor performs a modified
# canonicality check that enforces that bit 47 of the pointer matches
# bit 63. As illustrated in Figure 14-1, bits 62:48 are not checked
# and are thus available for software metadata. After this modified
# canonicality check is performed, bits 62:48 are masked by
# sign-extending the value of bit 47
Note especially that the sign-extension happens after the canonicality check.
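In C terms, the LAM48 rule described above amounts to roughly the following
(a sketch of the architectural behaviour, not proposed QEMU code; LAM57 would
use bit 56 and a 7-bit shift instead):

#include <stdbool.h>
#include <stdint.h>

/* LAM48: bit 63 must match bit 47 (bits 62:48 are ignored metadata);
 * only after that check passes is the address sign-extended from bit 47. */
static bool lam48_canonical_and_mask(uint64_t *addr)
{
    uint64_t a = *addr;

    if (((a >> 63) & 1) != ((a >> 47) & 1)) {
        /* Modified canonicality violation: fault, reporting the
         * original, unmasked address. */
        return false;
    }
    *addr = (uint64_t)((int64_t)(a << 16) >> 16);  /* sign-extend from bit 47 */
    return true;
}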
> But what other options do you see? Clearing the bits before the TLB lookup
> matches the architectural spec and makes INVLPG match the described
> behaviour without special handling.
We have special handling for INVLPG: tlb_flush_page_bits_by_mmuidx. That's how we handle
TBI for ARM. You'd supply 48 or 57 here.
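For example, the x86 INVLPG helper could flush by the untagged portion of the
address along these lines (sketch only, omitting the existing SVM intercept
handling; lam_active_bits() is a hypothetical helper returning 48, 57, or
TARGET_LONG_BITS depending on the guest's LAM configuration):

void helper_invlpg(CPUX86State *env, target_ulong addr)
{
    CPUState *cs = env_cpu(env);
    unsigned bits = lam_active_bits(env);   /* hypothetical: 48, 57, or 64 */

    /* Flush every mmu index, ignoring tag bits above 'bits' in addr. */
    tlb_flush_page_bits_by_mmuidx(cs, addr, (1 << NB_MMU_MODES) - 1, bits);
}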
r~
Thread overview: 8+ messages
2022-04-07 1:01 [PATCH] x86: Implement Linear Address Masking support Kirill A. Shutemov
2022-04-07 3:34 ` Richard Henderson
2022-04-07 13:18 ` Kirill A. Shutemov
2022-04-07 14:28 ` Richard Henderson
2022-04-07 15:27 ` Kirill A. Shutemov
2022-04-07 16:38 ` Paolo Bonzini
2022-04-07 17:44 ` Kirill A. Shutemov
2022-04-08 14:39 ` Richard Henderson [this message]