* [PATCH 0/5] Followup patches for lockless access tracking
@ 2016-12-22 4:29 Junaid Shahid
2016-12-22 4:29 ` [PATCH 1/5] kvm: x86: mmu: Rename EPT_VIOLATION_READ/WRITE/INSTR constants Junaid Shahid
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
This series includes a few follow-up patches implementing some of the code review feedback on the lockless access tracking patch series.
Junaid Shahid (5):
kvm: x86: mmu: Rename EPT_VIOLATION_READ/WRITE/INSTR constants
kvm: x86: mmu: Set SPTE_SPECIAL_MASK within mmu.c
kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault
kvm: x86: mmu: Update comment in mark_spte_for_access_track
kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
arch/x86/include/asm/vmx.h | 12 ++--
arch/x86/kvm/mmu.c | 153 ++++++++++++++++++++++++---------------------
arch/x86/kvm/vmx.c | 12 ++--
arch/x86/kvm/x86.c | 3 -
4 files changed, 93 insertions(+), 87 deletions(-)
--
2.8.0.rc3.226.g39d4020
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH 1/5] kvm: x86: mmu: Rename EPT_VIOLATION_READ/WRITE/INSTR constants
2016-12-22 4:29 [PATCH 0/5] Followup patches for lockless access tracking Junaid Shahid
@ 2016-12-22 4:29 ` Junaid Shahid
2016-12-22 4:29 ` [PATCH 2/5] kvm: x86: mmu: Set SPTE_SPECIAL_MASK within mmu.c Junaid Shahid
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
Rename the EPT_VIOLATION_READ/WRITE/INSTR constants to
EPT_VIOLATION_ACC_READ/WRITE/INSTR to more clearly indicate that these
signify the type of the memory access as opposed to the permissions
granted by the PTE.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
arch/x86/include/asm/vmx.h | 12 ++++++------
arch/x86/kvm/vmx.c | 6 +++---
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index a22a479..cc54b70 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -510,15 +510,15 @@ struct vmx_msr_entry {
/*
* Exit Qualifications for EPT Violations
*/
-#define EPT_VIOLATION_READ_BIT 0
-#define EPT_VIOLATION_WRITE_BIT 1
-#define EPT_VIOLATION_INSTR_BIT 2
+#define EPT_VIOLATION_ACC_READ_BIT 0
+#define EPT_VIOLATION_ACC_WRITE_BIT 1
+#define EPT_VIOLATION_ACC_INSTR_BIT 2
#define EPT_VIOLATION_READABLE_BIT 3
#define EPT_VIOLATION_WRITABLE_BIT 4
#define EPT_VIOLATION_EXECUTABLE_BIT 5
-#define EPT_VIOLATION_READ (1 << EPT_VIOLATION_READ_BIT)
-#define EPT_VIOLATION_WRITE (1 << EPT_VIOLATION_WRITE_BIT)
-#define EPT_VIOLATION_INSTR (1 << EPT_VIOLATION_INSTR_BIT)
+#define EPT_VIOLATION_ACC_READ (1 << EPT_VIOLATION_ACC_READ_BIT)
+#define EPT_VIOLATION_ACC_WRITE (1 << EPT_VIOLATION_ACC_WRITE_BIT)
+#define EPT_VIOLATION_ACC_INSTR (1 << EPT_VIOLATION_ACC_INSTR_BIT)
#define EPT_VIOLATION_READABLE (1 << EPT_VIOLATION_READABLE_BIT)
#define EPT_VIOLATION_WRITABLE (1 << EPT_VIOLATION_WRITABLE_BIT)
#define EPT_VIOLATION_EXECUTABLE (1 << EPT_VIOLATION_EXECUTABLE_BIT)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6a441a9..3f83856 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -6367,13 +6367,13 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
trace_kvm_page_fault(gpa, exit_qualification);
/* Is it a read fault? */
- error_code = (exit_qualification & EPT_VIOLATION_READ)
+ error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
? PFERR_USER_MASK : 0;
/* Is it a write fault? */
- error_code |= (exit_qualification & EPT_VIOLATION_WRITE)
+ error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
? PFERR_WRITE_MASK : 0;
/* Is it a fetch fault? */
- error_code |= (exit_qualification & EPT_VIOLATION_INSTR)
+ error_code |= (exit_qualification & EPT_VIOLATION_ACC_INSTR)
? PFERR_FETCH_MASK : 0;
/* ept page table entry is present? */
error_code |= (exit_qualification &
--
2.8.0.rc3.226.g39d4020
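As a standalone illustration of why the ACC_ prefix helps: the low three bits of the exit qualification describe the attempted access (what the ACC_* constants now name), while bits 3-5 describe the permissions the PTE would have granted. The decode done in handle_ept_violation() can be sketched roughly as below; the constants are reproduced here for illustration and the PFERR_* values follow the x86 page-fault error code layout.

```c
#include <assert.h>
#include <stdint.h>

/* Access-type bits of the EPT violation exit qualification
 * (the EPT_VIOLATION_ACC_* names from this patch). */
#define EPT_VIOLATION_ACC_READ   (1 << 0)
#define EPT_VIOLATION_ACC_WRITE  (1 << 1)
#define EPT_VIOLATION_ACC_INSTR  (1 << 2)

/* x86 page-fault error code bits. */
#define PFERR_WRITE_MASK  (1u << 1)
#define PFERR_USER_MASK   (1u << 2)
#define PFERR_FETCH_MASK  (1u << 4)

/* Sketch of the translation in handle_ept_violation(): each
 * access-type bit maps to the corresponding PFERR_* bit. */
static uint32_t ept_exit_qual_to_error_code(uint64_t exit_qualification)
{
	uint32_t error_code = 0;

	if (exit_qualification & EPT_VIOLATION_ACC_READ)
		error_code |= PFERR_USER_MASK;
	if (exit_qualification & EPT_VIOLATION_ACC_WRITE)
		error_code |= PFERR_WRITE_MASK;
	if (exit_qualification & EPT_VIOLATION_ACC_INSTR)
		error_code |= PFERR_FETCH_MASK;

	return error_code;
}
```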
* [PATCH 2/5] kvm: x86: mmu: Set SPTE_SPECIAL_MASK within mmu.c
2016-12-22 4:29 [PATCH 0/5] Followup patches for lockless access tracking Junaid Shahid
2016-12-22 4:29 ` [PATCH 1/5] kvm: x86: mmu: Rename EPT_VIOLATION_READ/WRITE/INSTR constants Junaid Shahid
@ 2016-12-22 4:29 ` Junaid Shahid
2016-12-22 4:29 ` [PATCH 3/5] kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault Junaid Shahid
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
Instead of requiring callers to include SPTE_SPECIAL_MASK in the masks
they supply to kvm_mmu_set_mmio_spte_mask() and kvm_mmu_set_mask_ptes(),
have those functions set SPTE_SPECIAL_MASK themselves.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
arch/x86/kvm/mmu.c | 5 ++++-
arch/x86/kvm/vmx.c | 6 ++----
arch/x86/kvm/x86.c | 3 ---
3 files changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3e42fcd..8707084 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -207,7 +207,7 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
{
- shadow_mmio_mask = mmio_mask;
+ shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
}
EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
@@ -317,6 +317,9 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
u64 dirty_mask, u64 nx_mask, u64 x_mask, u64 p_mask,
u64 acc_track_mask)
{
+ if (acc_track_mask != 0)
+ acc_track_mask |= SPTE_SPECIAL_MASK;
+
shadow_user_mask = user_mask;
shadow_accessed_mask = accessed_mask;
shadow_dirty_mask = dirty_mask;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3f83856..3744dbb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5228,10 +5228,8 @@ static void ept_set_mmio_spte_mask(void)
/*
* EPT Misconfigurations can be generated if the value of bits 2:0
* of an EPT paging-structure entry is 110b (write/execute).
- * Also, special bit (62) is set to quickly identify mmio spte.
*/
- kvm_mmu_set_mmio_spte_mask(SPTE_SPECIAL_MASK |
- VMX_EPT_MISCONFIG_WX_VALUE);
+ kvm_mmu_set_mmio_spte_mask(VMX_EPT_MISCONFIG_WX_VALUE);
}
#define VMX_XSS_EXIT_BITMAP 0
@@ -6577,7 +6575,7 @@ void vmx_enable_tdp(void)
enable_ept_ad_bits ? VMX_EPT_DIRTY_BIT : 0ull,
0ull, VMX_EPT_EXECUTABLE_MASK,
cpu_has_vmx_ept_execute_only() ? 0ull : VMX_EPT_READABLE_MASK,
- enable_ept_ad_bits ? 0ull : SPTE_SPECIAL_MASK | VMX_EPT_RWX_MASK);
+ enable_ept_ad_bits ? 0ull : VMX_EPT_RWX_MASK);
ept_set_mmio_spte_mask();
kvm_enable_tdp();
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fb52d66..51c3a4d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5944,9 +5944,6 @@ static void kvm_set_mmio_spte_mask(void)
/* Mask the reserved physical address bits. */
mask = rsvd_bits(maxphyaddr, 51);
- /* Bit 62 is always reserved for 32bit host. */
- mask |= 0x3ull << 62;
-
/* Set the present bit. */
mask |= 1ull;
--
2.8.0.rc3.226.g39d4020
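The effect of the change can be modeled in a few lines: the special bit is ORed in once, inside the setter, and only when the supplied mask is actually in use. The snippet below is a toy model with illustrative constant values, not the kernel's definitions.

```c
#include <stdint.h>

/* Illustrative stand-ins for the kernel constants. */
#define SPTE_SPECIAL_MASK          (1ULL << 62)
#define VMX_EPT_MISCONFIG_WX_VALUE 0x6ULL /* 110b: write/execute, not readable */

static uint64_t shadow_mmio_mask;
static uint64_t shadow_acc_track_mask;

/* The special bit is now folded in here, so callers such as
 * ept_set_mmio_spte_mask() pass only the misconfig value. */
static void set_mmio_spte_mask(uint64_t mmio_mask)
{
	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
}

/* Only tag the access-track mask when access tracking is in use
 * (a zero mask means the feature is disabled and must stay zero). */
static void set_acc_track_mask(uint64_t acc_track_mask)
{
	if (acc_track_mask != 0)
		acc_track_mask |= SPTE_SPECIAL_MASK;
	shadow_acc_track_mask = acc_track_mask;
}
```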
* [PATCH 3/5] kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault
2016-12-22 4:29 [PATCH 0/5] Followup patches for lockless access tracking Junaid Shahid
2016-12-22 4:29 ` [PATCH 1/5] kvm: x86: mmu: Rename EPT_VIOLATION_READ/WRITE/INSTR constants Junaid Shahid
2016-12-22 4:29 ` [PATCH 2/5] kvm: x86: mmu: Set SPTE_SPECIAL_MASK within mmu.c Junaid Shahid
@ 2016-12-22 4:29 ` Junaid Shahid
2016-12-22 4:29 ` [PATCH 4/5] kvm: x86: mmu: Update comment in mark_spte_for_access_track Junaid Shahid
2016-12-22 4:29 ` [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault Junaid Shahid
4 siblings, 0 replies; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
Redo the page table walk in fast_page_fault when retrying so that we are
working on the latest PTE even if the hierarchy changes.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
arch/x86/kvm/mmu.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8707084..76cc911 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3087,14 +3087,16 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
return false;
walk_shadow_page_lockless_begin(vcpu);
- for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
- if (!is_shadow_present_pte(spte) || iterator.level < level)
- break;
do {
bool remove_write_prot = false;
bool remove_acc_track;
+ for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
+ if (!is_shadow_present_pte(spte) ||
+ iterator.level < level)
+ break;
+
sp = page_header(__pa(iterator.sptep));
if (!is_last_spte(spte, sp->role.level))
break;
@@ -3175,8 +3177,6 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
break;
}
- spte = mmu_spte_get_lockless(iterator.sptep);
-
} while (true);
trace_fast_page_fault(vcpu, gva, error_code, iterator.sptep,
--
2.8.0.rc3.226.g39d4020
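Structurally, the change moves the shadow walk from before the retry loop to the top of each iteration, so a stale leaf pointer is never reused after a failed cmpxchg. A toy single-threaded model of that shape (all names here are illustrative stand-ins, not the kernel's):

```c
#include <stdint.h>
#include <stdbool.h>

static uint64_t fake_pte;       /* stands in for the leaf SPTE */
static int walks_performed;

/* Stands in for the for_each_shadow_entry_lockless() walk. */
static uint64_t walk_leaf(void)
{
	walks_performed++;
	return fake_pte;
}

/* cmpxchg stand-in: succeeds only if the PTE is still 'old'. */
static bool try_fix(uint64_t old, uint64_t new)
{
	if (fake_pte != old)
		return false;
	fake_pte = new;
	return true;
}

/* The walk now happens *inside* the do/while, so each retry
 * operates on a freshly read PTE. Returns the retry count. */
static int fast_fault_retries(uint64_t new_val)
{
	int retries = 0;

	do {
		uint64_t spte = walk_leaf();

		if (try_fix(spte, new_val))
			break;
		retries++;
	} while (retries < 4);

	return retries;
}
```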
* [PATCH 4/5] kvm: x86: mmu: Update comment in mark_spte_for_access_track
2016-12-22 4:29 [PATCH 0/5] Followup patches for lockless access tracking Junaid Shahid
` (2 preceding siblings ...)
2016-12-22 4:29 ` [PATCH 3/5] kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault Junaid Shahid
@ 2016-12-22 4:29 ` Junaid Shahid
2016-12-22 4:29 ` [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault Junaid Shahid
4 siblings, 0 replies; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
Reword the comment to make it clearer.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
arch/x86/kvm/mmu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 76cc911..49a7fe4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -707,9 +707,9 @@ static u64 mark_spte_for_access_track(u64 spte)
return spte;
/*
- * Verify that the write-protection that we do below will be fixable
- * via the fast page fault path. Currently, that is always the case, at
- * least when using EPT (which is when access tracking would be used).
+ * Making an Access Tracking PTE will result in removal of write access
+ * from the PTE. So, verify that we will be able to restore the write
+ * access in the fast page fault path later on.
*/
WARN_ONCE((spte & PT_WRITABLE_MASK) &&
!spte_can_locklessly_be_made_writable(spte),
--
2.8.0.rc3.226.g39d4020
* [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
2016-12-22 4:29 [PATCH 0/5] Followup patches for lockless access tracking Junaid Shahid
` (3 preceding siblings ...)
2016-12-22 4:29 ` [PATCH 4/5] kvm: x86: mmu: Update comment in mark_spte_for_access_track Junaid Shahid
@ 2016-12-22 4:29 ` Junaid Shahid
2017-01-25 11:49 ` Paolo Bonzini
4 siblings, 1 reply; 8+ messages in thread
From: Junaid Shahid @ 2016-12-22 4:29 UTC (permalink / raw)
To: kvm; +Cc: andreslc, pfeiner, pbonzini, guangrong.xiao
Before restoring an access-track PTE back to a regular PTE, the fast page
fault path now also verifies that the restored PTE would grant the
necessary permissions for the faulting access to succeed. If not, it
falls back to the slow page fault path.
Signed-off-by: Junaid Shahid <junaids@google.com>
---
arch/x86/kvm/mmu.c | 132 ++++++++++++++++++++++++++++-------------------------
1 file changed, 70 insertions(+), 62 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 49a7fe4..1317d1e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -372,6 +372,11 @@ static int is_last_spte(u64 pte, int level)
return 0;
}
+static bool is_executable_pte(u64 spte)
+{
+ return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
+}
+
static kvm_pfn_t spte_to_pfn(u64 pte)
{
return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
@@ -727,6 +732,23 @@ static u64 mark_spte_for_access_track(u64 spte)
return spte;
}
+/* Restore an acc-track PTE back to a regular PTE */
+static u64 restore_acc_track_spte(u64 spte)
+{
+ u64 new_spte = spte;
+ u64 saved_bits = (spte >> shadow_acc_track_saved_bits_shift)
+ & shadow_acc_track_saved_bits_mask;
+
+ WARN_ON_ONCE(!is_access_track_spte(spte));
+
+ new_spte &= ~shadow_acc_track_mask;
+ new_spte &= ~(shadow_acc_track_saved_bits_mask <<
+ shadow_acc_track_saved_bits_shift);
+ new_spte |= saved_bits;
+
+ return new_spte;
+}
+
/* Returns the Accessed status of the PTE and resets it at the same time. */
static bool mmu_spte_age(u64 *sptep)
{
@@ -3018,27 +3040,12 @@ static bool page_fault_can_be_fast(u32 error_code)
*/
static bool
fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
- u64 *sptep, u64 old_spte,
- bool remove_write_prot, bool remove_acc_track)
+ u64 *sptep, u64 old_spte, u64 new_spte)
{
gfn_t gfn;
- u64 new_spte = old_spte;
WARN_ON(!sp->role.direct);
- if (remove_acc_track) {
- u64 saved_bits = (old_spte >> shadow_acc_track_saved_bits_shift)
- & shadow_acc_track_saved_bits_mask;
-
- new_spte &= ~shadow_acc_track_mask;
- new_spte &= ~(shadow_acc_track_saved_bits_mask <<
- shadow_acc_track_saved_bits_shift);
- new_spte |= saved_bits;
- }
-
- if (remove_write_prot)
- new_spte |= PT_WRITABLE_MASK;
-
/*
* Theoretically we could also set dirty bit (and flush TLB) here in
* order to eliminate unnecessary PML logging. See comments in
@@ -3054,7 +3061,7 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
if (cmpxchg64(sptep, old_spte, new_spte) != old_spte)
return false;
- if (remove_write_prot) {
+ if (is_writable_pte(new_spte) && !is_writable_pte(old_spte)) {
/*
* The gfn of direct spte is stable since it is
* calculated by sp->gfn.
@@ -3066,6 +3073,18 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
return true;
}
+static bool is_access_allowed(u32 fault_err_code, u64 spte)
+{
+ if (fault_err_code & PFERR_FETCH_MASK)
+ return is_executable_pte(spte);
+
+ if (fault_err_code & PFERR_WRITE_MASK)
+ return is_writable_pte(spte);
+
+ /* Fault was on Read access */
+ return spte & PT_PRESENT_MASK;
+}
+
/*
* Return value:
* - true: let the vcpu to access on the same address again.
@@ -3089,8 +3108,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
walk_shadow_page_lockless_begin(vcpu);
do {
- bool remove_write_prot = false;
- bool remove_acc_track;
+ u64 new_spte;
for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
if (!is_shadow_present_pte(spte) ||
@@ -3111,52 +3129,43 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
* Need not check the access of upper level table entries since
* they are always ACC_ALL.
*/
-
- if (error_code & PFERR_FETCH_MASK) {
- if ((spte & (shadow_x_mask | shadow_nx_mask))
- == shadow_x_mask) {
- fault_handled = true;
- break;
- }
- } else if (error_code & PFERR_WRITE_MASK) {
- if (is_writable_pte(spte)) {
- fault_handled = true;
- break;
- }
-
- /*
- * Currently, to simplify the code, write-protection can
- * be removed in the fast path only if the SPTE was
- * write-protected for dirty-logging.
- */
- remove_write_prot =
- spte_can_locklessly_be_made_writable(spte);
- } else {
- /* Fault was on Read access */
- if (spte & PT_PRESENT_MASK) {
- fault_handled = true;
- break;
- }
+ if (is_access_allowed(error_code, spte)) {
+ fault_handled = true;
+ break;
}
- remove_acc_track = is_access_track_spte(spte);
-
- /* Verify that the fault can be handled in the fast path */
- if (!remove_acc_track && !remove_write_prot)
- break;
+ new_spte = is_access_track_spte(spte)
+ ? restore_acc_track_spte(spte)
+ : spte;
/*
- * Do not fix write-permission on the large spte since we only
- * dirty the first page into the dirty-bitmap in
- * fast_pf_fix_direct_spte() that means other pages are missed
- * if its slot is dirty-logged.
- *
- * Instead, we let the slow page fault path create a normal spte
- * to fix the access.
- *
- * See the comments in kvm_arch_commit_memory_region().
+ * Currently, to simplify the code, write-protection can
+ * be removed in the fast path only if the SPTE was
+ * write-protected for dirty-logging or access tracking.
*/
- if (sp->role.level > PT_PAGE_TABLE_LEVEL && remove_write_prot)
+ if ((error_code & PFERR_WRITE_MASK) &&
+ spte_can_locklessly_be_made_writable(spte))
+ {
+ new_spte |= PT_WRITABLE_MASK;
+
+ /*
+ * Do not fix write-permission on the large spte since
+ * we only dirty the first page into the dirty-bitmap in
+ * fast_pf_fix_direct_spte() that means other pages are
+ * missed if its slot is dirty-logged.
+ *
+ * Instead, we let the slow page fault path create a
+ * normal spte to fix the access.
+ *
+ * See the comments in kvm_arch_commit_memory_region().
+ */
+ if (sp->role.level > PT_PAGE_TABLE_LEVEL)
+ break;
+ }
+
+ /* Verify that the fault can be handled in the fast path */
+ if (new_spte == spte ||
+ !is_access_allowed(error_code, new_spte))
break;
/*
@@ -3166,8 +3175,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
*/
fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
iterator.sptep, spte,
- remove_write_prot,
- remove_acc_track);
+ new_spte);
if (fault_handled)
break;
--
2.8.0.rc3.226.g39d4020
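The core of the new check is is_access_allowed() from the diff above: a fetch fault requires an executable PTE, a write fault a writable one, and a read fault merely a present one. The sketch below reproduces that logic with simplified stand-ins for the shadow masks (the kernel derives these at setup time; the values here are illustrative only).

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative bit positions, not the kernel's actual masks. */
#define PT_PRESENT_MASK  (1ULL << 0)
#define PT_WRITABLE_MASK (1ULL << 1)
#define PFERR_WRITE_MASK (1u << 1)
#define PFERR_FETCH_MASK (1u << 4)

static uint64_t shadow_x_mask  = 1ULL << 2; /* EPT-style X bit (illustrative) */
static uint64_t shadow_nx_mask;             /* 0 when an X bit is used */

/* Executable iff the X/NX combination resolves to "executable"
 * (covers both X-bit and NX-bit paging modes). */
static bool is_executable_pte(uint64_t spte)
{
	return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
}

static bool is_writable_pte(uint64_t spte)
{
	return spte & PT_WRITABLE_MASK;
}

/* Does this PTE grant the permission the faulting access needs? */
static bool is_access_allowed(uint32_t fault_err_code, uint64_t spte)
{
	if (fault_err_code & PFERR_FETCH_MASK)
		return is_executable_pte(spte);

	if (fault_err_code & PFERR_WRITE_MASK)
		return is_writable_pte(spte);

	/* Fault was on a read access. */
	return spte & PT_PRESENT_MASK;
}
```

fast_page_fault() calls this twice: once on the current SPTE (if access is already allowed, another vcpu fixed the fault and we are done) and once on the candidate new SPTE (if even the restored PTE would not allow the access, bail to the slow path).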
* Re: [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
2016-12-22 4:29 ` [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault Junaid Shahid
@ 2017-01-25 11:49 ` Paolo Bonzini
2017-01-25 20:05 ` Junaid Shahid
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2017-01-25 11:49 UTC (permalink / raw)
To: Junaid Shahid, kvm; +Cc: andreslc, pfeiner, guangrong.xiao
> - remove_acc_track = is_access_track_spte(spte);
> -
> - /* Verify that the fault can be handled in the fast path */
> - if (!remove_acc_track && !remove_write_prot)
> - break;
> + new_spte = is_access_track_spte(spte)
> + ? restore_acc_track_spte(spte)
> + : spte;
Just a tiny stylistic change to match the "new_spte |= PT_WRITABLE_MASK"
in the block below:
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index aa3b0d14c019..2fd7586aad4d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3135,9 +3135,10 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
break;
}
- new_spte = is_access_track_spte(spte)
- ? restore_acc_track_spte(spte)
- : spte;
+ new_spte = spte;
+
+ if (is_access_track_spte(spte))
+ new_spte = restore_acc_track_spte(new_spte);
/*
* Currently, to simplify the code, write-protection can
Paolo
> /*
> - * Do not fix write-permission on the large spte since we only
> - * dirty the first page into the dirty-bitmap in
> - * fast_pf_fix_direct_spte() that means other pages are missed
> - * if its slot is dirty-logged.
> - *
> - * Instead, we let the slow page fault path create a normal spte
> - * to fix the access.
> - *
> - * See the comments in kvm_arch_commit_memory_region().
> + * Currently, to simplify the code, write-protection can
> + * be removed in the fast path only if the SPTE was
> + * write-protected for dirty-logging or access tracking.
> */
> - if (sp->role.level > PT_PAGE_TABLE_LEVEL && remove_write_prot)
> + if ((error_code & PFERR_WRITE_MASK) &&
> + spte_can_locklessly_be_made_writable(spte))
> + {
> + new_spte |= PT_WRITABLE_MASK;
> +
> + /*
> + * Do not fix write-permission on the large spte since
> + * we only dirty the first page into the dirty-bitmap in
> + * fast_pf_fix_direct_spte() that means other pages are
> + * missed if its slot is dirty-logged.
> + *
> + * Instead, we let the slow page fault path create a
> + * normal spte to fix the access.
> + *
> + * See the comments in kvm_arch_commit_memory_region().
> + */
> + if (sp->role.level > PT_PAGE_TABLE_LEVEL)
> + break;
> + }
> +
> + /* Verify that the fault can be handled in the fast path */
> + if (new_spte == spte ||
> + !is_access_allowed(error_code, new_spte))
> break;
>
> /*
> @@ -3166,8 +3175,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> */
> fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
> iterator.sptep, spte,
> - remove_write_prot,
> - remove_acc_track);
> + new_spte);
> if (fault_handled)
> break;
>
>
* Re: [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
2017-01-25 11:49 ` Paolo Bonzini
@ 2017-01-25 20:05 ` Junaid Shahid
0 siblings, 0 replies; 8+ messages in thread
From: Junaid Shahid @ 2017-01-25 20:05 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm, andreslc, pfeiner, guangrong.xiao
Looks good. Thanks.