From: kernel test robot <lkp@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: oe-kbuild-all@lists.linux.dev, Jon Kohler <jon@nutanix.com>,
Nikunj A Dadhania <nikunj@amd.com>, Amit Shah <amit.shah@amd.com>,
Sean Christopherson <seanjc@google.com>,
Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
Date: Mon, 30 Mar 2026 12:12:21 +0800 [thread overview]
Message-ID: <202603301246.a5sPkQdh-lkp@intel.com> (raw)
In-Reply-To: <20260326181723.218115-9-pbonzini@redhat.com>
Hi Paolo,
The kernel test robot noticed the following build warnings:
[auto build test WARNING on kvm/queue]
[also build test WARNING on kvm/next tip/x86/tdx linus/master v7.0-rc6 next-20260327]
[cannot apply to kvm/linux-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Paolo-Bonzini/KVM-TDX-VMX-rework-EPT_VIOLATION_EXEC_FOR_RING3_LIN-into-PROT_MASK/20260329-124019
base: https://git.kernel.org/pub/scm/virt/kvm/kvm.git queue
patch link: https://lore.kernel.org/r/20260326181723.218115-9-pbonzini%40redhat.com
patch subject: [PATCH 08/24] KVM: x86/mmu: introduce ACC_READ_MASK
config: x86_64-randconfig-123-20260329 (https://download.01.org/0day-ci/archive/20260330/202603301246.a5sPkQdh-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
sparse: v0.6.5-rc1
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260330/202603301246.a5sPkQdh-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603301246.a5sPkQdh-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
arch/x86/kvm/mmu/mmu.c: note: in included file:
arch/x86/kvm/mmu/paging_tmpl.h:106:24: sparse: sparse: cast truncates bits from constant value (ffffffffff000 becomes fffff000)
arch/x86/kvm/mmu/paging_tmpl.h:440:24: sparse: sparse: cast truncates bits from constant value (ffffffffff000 becomes fffff000)
>> arch/x86/kvm/mmu/mmu.c:5585:82: sparse: sparse: cast truncates bits from constant value (ffff5555 becomes 5555)
>> arch/x86/kvm/mmu/mmu.c:5587:59: sparse: sparse: cast truncates bits from constant value (ffff3333 becomes 3333)
>> arch/x86/kvm/mmu/mmu.c:5589:58: sparse: sparse: cast truncates bits from constant value (ffff0f0f becomes f0f)
>> arch/x86/kvm/mmu/mmu.c:5591:59: sparse: sparse: cast truncates bits from constant value (ffff00ff becomes ff)
vim +5585 arch/x86/kvm/mmu/mmu.c
5519
5520 /*
5521 * Build a mask with all combinations of PTE access rights that
5522 * include the given access bit. The mask can be queried with
5523 * "mask & (1 << access)", where access is a combination of
5524 * ACC_* bits.
5525 *
5526 * By mixing and matching multiple masks returned by ACC_BITS_MASK,
5527 * update_permission_bitmask() builds what is effectively a
5528 * two-dimensional array of bools. The second dimension is
5529 * provided by individual bits of permissions[pfec >> 1], and
5530 * logical &, | and ~ operations operate on all the 16 possible
5531 * combinations of ACC_* bits.
5532 */
5533 #define ACC_BITS_MASK(access) \
5534 ((1 & (access) ? 1 << 1 : 0) | \
5535 (2 & (access) ? 1 << 2 : 0) | \
5536 (3 & (access) ? 1 << 3 : 0) | \
5537 (4 & (access) ? 1 << 4 : 0) | \
5538 (5 & (access) ? 1 << 5 : 0) | \
5539 (6 & (access) ? 1 << 6 : 0) | \
5540 (7 & (access) ? 1 << 7 : 0) | \
5541 (8 & (access) ? 1 << 8 : 0) | \
5542 (9 & (access) ? 1 << 9 : 0) | \
5543 (10 & (access) ? 1 << 10 : 0) | \
5544 (11 & (access) ? 1 << 11 : 0) | \
5545 (12 & (access) ? 1 << 12 : 0) | \
5546 (13 & (access) ? 1 << 13 : 0) | \
5547 (14 & (access) ? 1 << 14 : 0) | \
5548 (15 & (access) ? 1 << 15 : 0))
5549
5550 static void update_permission_bitmask(struct kvm_mmu *mmu, bool ept)
5551 {
5552 unsigned byte;
5553
5554 const u16 x = ACC_BITS_MASK(ACC_EXEC_MASK);
5555 const u16 w = ACC_BITS_MASK(ACC_WRITE_MASK);
5556 const u16 u = ACC_BITS_MASK(ACC_USER_MASK);
5557 const u16 r = ACC_BITS_MASK(ACC_READ_MASK);
5558
5559 bool cr4_smep = is_cr4_smep(mmu);
5560 bool cr4_smap = is_cr4_smap(mmu);
5561 bool cr0_wp = is_cr0_wp(mmu);
5562 bool efer_nx = is_efer_nx(mmu);
5563
5564 /*
5565 * In hardware, page fault error codes are generated (as the name
5566 * suggests) on any kind of page fault. permission_fault() and
5567 * paging_tmpl.h already use the same bits after a successful page
5568 * table walk, to indicate the kind of access being performed.
5569 *
5570 * However, PFERR_PRESENT_MASK and PFERR_RSVD_MASK are never set here,
5571 * exactly because the page walk is successful. PFERR_PRESENT_MASK is
5572 * removed by the shift, while PFERR_RSVD_MASK is repurposed in
5573 * permission_fault() to indicate accesses that are *not* subject to
5574 * SMAP restrictions.
5575 */
5576 for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
5577 unsigned pfec = byte << 1;
5578
5579 /*
5580 * Each "*f" variable has a 1 bit for each ACC_* combo
5581 * that causes a fault with the given PFEC.
5582 */
5583
5584 /* Faults from reads to non-readable pages */
> 5585 u16 rf = (pfec & (PFERR_WRITE_MASK|PFERR_FETCH_MASK)) ? 0 : (u16)~r;
5586 /* Faults from writes to non-writable pages */
> 5587 u16 wf = (pfec & PFERR_WRITE_MASK) ? (u16)~w : 0;
5588 /* Faults from user mode accesses to supervisor pages */
> 5589 u16 uf = (pfec & PFERR_USER_MASK) ? (u16)~u : 0;
 5590  		/* Faults from fetches of non-executable pages */
> 5591 u16 ff = (pfec & PFERR_FETCH_MASK) ? (u16)~x : 0;
5592 /* Faults from kernel mode fetches of user pages */
5593 u16 smepf = 0;
5594 /* Faults from kernel mode accesses of user pages */
5595 u16 smapf = 0;
5596
5597 if (!ept) {
5598 /* Faults from kernel mode accesses to user pages */
5599 u16 kf = (pfec & PFERR_USER_MASK) ? 0 : u;
5600
5601 /* Not really needed: !nx will cause pte.nx to fault */
5602 if (!efer_nx)
5603 ff = 0;
5604
5605 /* Allow supervisor writes if !cr0.wp */
5606 if (!cr0_wp)
5607 wf = (pfec & PFERR_USER_MASK) ? wf : 0;
5608
5609 /* Disallow supervisor fetches of user code if cr4.smep */
5610 if (cr4_smep)
5611 smepf = (pfec & PFERR_FETCH_MASK) ? kf : 0;
5612
5613 /*
 5614  			 * SMAP: kernel-mode data accesses from user-mode
5615 * mappings should fault. A fault is considered
5616 * as a SMAP violation if all of the following
5617 * conditions are true:
5618 * - X86_CR4_SMAP is set in CR4
5619 * - A user page is accessed
5620 * - The access is not a fetch
5621 * - The access is supervisor mode
5622 * - If implicit supervisor access or X86_EFLAGS_AC is clear
5623 *
5624 * Here, we cover the first four conditions. The fifth
5625 * is computed dynamically in permission_fault() and
5626 * communicated by setting PFERR_RSVD_MASK.
5627 */
5628 if (cr4_smap)
5629 smapf = (pfec & (PFERR_RSVD_MASK|PFERR_FETCH_MASK)) ? 0 : kf;
5630 }
5631
5632 mmu->permissions[byte] = ff | uf | wf | rf | smepf | smapf;
5633 }
5634 }
5635
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki