* [PATCH RESEND v15 07/10] KVM: arm: page logging 2nd stage fault handling
From: Mario Smarduch @ 2015-01-09 1:42 UTC
To: linux-arm-kernel
This patch adds support for handling 2nd stage page faults during migration:
it disables faulting in huge pages and dissolves existing huge pages into normal
pages. If migration is canceled, huge pages are used again, provided memory
conditions permit it. It applies cleanly on top of the patch series posted Dec 15:
https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012826.html
Patch #11 of the series has been dropped.
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
Change Log since last RESEND:
- fixed a bug exposed by __get_user_pages_fast(): when the region is writable,
  prevent write protection of the pte so we can handle a future write fault
  and mark the page dirty
- Removed marking the entire huge page dirty on the initial dirty log read
- don't dissolve non-writable huge pages
- Made updates based on Christoffer's comments
- renamed the logging status function to memslot_is_logging()
- changed a few values to bool
- streamlined user_mem_abort() to eliminate extra conditional checks
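
As extra context for reviewers (not part of the patch): the dirty_bitmap test
in memslot_is_logging() is driven by userspace flagging a memslot with
KVM_MEM_LOG_DIRTY_PAGES. A minimal, illustrative sketch of that step is below;
vm_fd, slot_mem, GUEST_BASE and SLOT_SIZE are made-up placeholders for this
example, not anything defined by the series.

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define GUEST_BASE 0x80000000UL		/* placeholder guest IPA base */
#define SLOT_SIZE  (256UL << 20)	/* placeholder slot size: 256 MB */

/*
 * Re-register the slot with KVM_MEM_LOG_DIRTY_PAGES set. KVM then allocates
 * the slot's dirty_bitmap, so memslot_is_logging() returns true and the
 * stage-2 fault path below starts dissolving huge pages for this slot.
 */
static int enable_dirty_logging(int vm_fd, void *slot_mem)
{
	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.flags           = KVM_MEM_LOG_DIRTY_PAGES,
		.guest_phys_addr = GUEST_BASE,
		.memory_size     = SLOT_SIZE,
		.userspace_addr  = (unsigned long)slot_mem,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}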
---
arch/arm/kvm/mmu.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 87 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 73d506f..2bfe22d 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -47,6 +47,18 @@ static phys_addr_t hyp_idmap_vector;
#define kvm_pmd_huge(_x) (pmd_huge(_x) || pmd_trans_huge(_x))
#define kvm_pud_huge(_x) pud_huge(_x)
+#define KVM_S2PTE_FLAG_IS_IOMAP (1UL << 0)
+#define KVM_S2PTE_FLAG_LOGGING_ACTIVE (1UL << 1)
+
+static bool memslot_is_logging(struct kvm_memory_slot *memslot)
+{
+#ifdef CONFIG_ARM
+ return !!memslot->dirty_bitmap;
+#else
+ return false;
+#endif
+}
+
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
/*
@@ -59,6 +71,25 @@ static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
}
+/**
+ * stage2_dissolve_pmd() - clear and flush huge PMD entry
+ * @kvm: pointer to kvm structure.
+ * @addr: IPA
+ * @pmd: pmd pointer for IPA
+ *
+ * Function clears a PMD entry and flushes the addr's 1st and 2nd stage
+ * TLBs.
+ */
+static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
+{
+ if (!kvm_pmd_huge(*pmd))
+ return;
+
+ pmd_clear(pmd);
+ kvm_tlb_flush_vmid_ipa(kvm, addr);
+ put_page(virt_to_page(pmd));
+}
+
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
int min, int max)
{
@@ -703,10 +734,13 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
}
static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
- phys_addr_t addr, const pte_t *new_pte, bool iomap)
+ phys_addr_t addr, const pte_t *new_pte,
+ unsigned long flags)
{
pmd_t *pmd;
pte_t *pte, old_pte;
+ bool iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
+ bool logging_active = flags & KVM_S2PTE_FLAG_LOGGING_ACTIVE;
/* Create stage-2 page table mapping - Levels 0 and 1 */
pmd = stage2_get_pmd(kvm, cache, addr);
@@ -718,6 +752,13 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
return 0;
}
+ /*
+ * While dirty page logging is active, dissolve a huge PMD, then continue
+ * on to allocate the page.
+ */
+ if (logging_active)
+ stage2_dissolve_pmd(kvm, addr, pmd);
+
/* Create stage-2 page mappings - Level 2 */
if (pmd_none(*pmd)) {
if (!cache)
@@ -774,7 +815,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
if (ret)
goto out;
spin_lock(&kvm->mmu_lock);
- ret = stage2_set_pte(kvm, &cache, addr, &pte, true);
+ ret = stage2_set_pte(kvm, &cache, addr, &pte,
+ KVM_S2PTE_FLAG_IS_IOMAP);
spin_unlock(&kvm->mmu_lock);
if (ret)
goto out;
@@ -1002,6 +1044,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
pfn_t pfn;
pgprot_t mem_type = PAGE_S2;
bool fault_ipa_uncached;
+ bool can_set_pte_rw = true;
+ unsigned long set_pte_flags = 0;
write_fault = kvm_is_write_fault(vcpu);
if (fault_status == FSC_PERM && !write_fault) {
@@ -1009,6 +1053,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return -EFAULT;
}
+
/* Let's check if we will get back a huge page backed by hugetlbfs */
down_read(&current->mm->mmap_sem);
vma = find_vma_intersection(current->mm, hva, hva + 1);
@@ -1065,6 +1110,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
spin_lock(&kvm->mmu_lock);
if (mmu_notifier_retry(kvm, mmu_seq))
goto out_unlock;
+
+ /*
+ * When logging is enabled general page fault handling changes:
+ * - Writable huge pages are dissolved on a read or write fault.
+ * - pte's are not allowed write permission on a read fault to
+ * writable region so future writes can be marked dirty
+ * - access to non-writable region is unchanged
+ */
+ if (memslot_is_logging(memslot) && writable) {
+ set_pte_flags = KVM_S2PTE_FLAG_LOGGING_ACTIVE;
+ if (hugetlb) {
+ gfn += pte_index(fault_ipa);
+ pfn += pte_index(fault_ipa);
+ hugetlb = false;
+ }
+ force_pte = true;
+ if (!write_fault)
+ can_set_pte_rw = false;
+ }
+
if (!hugetlb && !force_pte)
hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
@@ -1082,16 +1147,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
} else {
pte_t new_pte = pfn_pte(pfn, mem_type);
- if (writable) {
+
+ if (pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE))
+ set_pte_flags |= KVM_S2PTE_FLAG_IS_IOMAP;
+
+ /*
+ * Don't set write permission for non-writable regions, or for read
+ * faults to writable regions while logging.
+ */
+ if (writable && can_set_pte_rw) {
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
fault_ipa_uncached);
ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
- pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
+ set_pte_flags);
}
+ if (write_fault)
+ mark_page_dirty(kvm, gfn);
out_unlock:
spin_unlock(&kvm->mmu_lock);
@@ -1242,7 +1317,14 @@ static void kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data)
{
pte_t *pte = (pte_t *)data;
- stage2_set_pte(kvm, NULL, gpa, pte, false);
+ /*
+ * We can always call stage2_set_pte with KVM_S2PTE_FLAG_LOGGING_ACTIVE
+ * flag clear because MMU notifiers will have unmapped a huge PMD before
+ * calling ->change_pte() (which in turn calls kvm_set_spte_hva()) and
+ * therefore stage2_set_pte() never needs to clear out a huge PMD
+ * through this calling path.
+ */
+ stage2_set_pte(kvm, NULL, gpa, pte, 0);
}
--
1.7.9.5
* [PATCH RESEND v15 10/10] KVM: arm/arm64: Enable Dirty Page logging for ARMv8
From: Mario Smarduch @ 2015-01-09 1:42 UTC
To: linux-arm-kernel
This patch enables ARMv8 dirty page logging support. It plugs ARMv8 into the
generic layer through a Kconfig symbol and drops the earlier ARM64 constraints
so logging is enabled at the architecture layer. It applies cleanly on top of
the patch series posted Dec 15:
https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012826.html
Patch #11 of the series has been dropped.
Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
---
Change Log:
- removed inline definition from kvm_flush_remote_tlbs()
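
As context (not part of the patch): with KVM_GENERIC_DIRTYLOG_READ_PROTECT
selected, the same userspace dirty-log retrieval flow now works on both arm
and arm64. A rough sketch of that side, with vm_fd and bitmap as made-up
placeholders for the example:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * bitmap must hold one bit per page in the slot. Each call returns the pages
 * dirtied since the previous call and write-protects them again, using the
 * stage-2 write-protect helpers this series adds.
 */
static int fetch_dirty_bitmap(int vm_fd, void *bitmap)
{
	struct kvm_dirty_log log = {
		.slot         = 0,
		.dirty_bitmap = bitmap,
	};

	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}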
---
arch/arm/include/asm/kvm_host.h | 12 ------------
arch/arm/kvm/arm.c | 4 ----
arch/arm/kvm/mmu.c | 19 +++++++++++--------
arch/arm64/kvm/Kconfig | 2 ++
4 files changed, 13 insertions(+), 24 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index b138431..088ea87 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -223,18 +223,6 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
}
-/**
- * kvm_flush_remote_tlbs() - flush all VM TLB entries
- * @kvm: pointer to kvm structure.
- *
- * Interface to HYP function to flush all VM TLB entries without address
- * parameter.
- */
-static inline void kvm_flush_remote_tlbs(struct kvm *kvm)
-{
- kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
-}
-
static inline int kvm_arch_dev_ioctl_check_extension(long ext)
{
return 0;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 6e4290c..1b6577c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -740,7 +740,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
*/
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
{
-#ifdef CONFIG_ARM
bool is_dirty = false;
int r;
@@ -753,9 +752,6 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
mutex_unlock(&kvm->slots_lock);
return r;
-#else /* arm64 */
- return -EINVAL;
-#endif
}
static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 2bfe22d..1384743 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -52,11 +52,18 @@ static phys_addr_t hyp_idmap_vector;
static bool memslot_is_logging(struct kvm_memory_slot *memslot)
{
-#ifdef CONFIG_ARM
return !!memslot->dirty_bitmap;
-#else
- return false;
-#endif
+}
+
+/**
+ * kvm_flush_remote_tlbs() - flush all VM TLB entries for v7/8
+ * @kvm: pointer to kvm structure.
+ *
+ * Interface to HYP function to flush all VM TLB entries
+ */
+void kvm_flush_remote_tlbs(struct kvm *kvm)
+{
+ kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
}
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
@@ -883,7 +890,6 @@ static bool kvm_is_device_pfn(unsigned long pfn)
return !pfn_valid(pfn);
}
-#ifdef CONFIG_ARM
/**
* stage2_wp_ptes - write protect PMD range
* @pmd: pointer to pmd entry
@@ -1028,7 +1034,6 @@ void kvm_arch_mmu_write_protect_pt_masked(struct kvm *kvm,
stage2_wp_range(kvm, start, end);
}
-#endif
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1457,7 +1462,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
const struct kvm_memory_slot *old,
enum kvm_mr_change change)
{
-#ifdef CONFIG_ARM
/*
* At this point memslot has been committed and there is an
* allocated dirty_bitmap[], dirty pages will be tracked while the
@@ -1465,7 +1469,6 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
*/
if (change != KVM_MR_DELETE && mem->flags & KVM_MEM_LOG_DIRTY_PAGES)
kvm_mmu_wp_memory_region(kvm, mem->slot);
-#endif
}
int kvm_arch_prepare_memory_region(struct kvm *kvm,
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8ba85e9..3ce389b 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -22,10 +22,12 @@ config KVM
select PREEMPT_NOTIFIERS
select ANON_INODES
select HAVE_KVM_CPU_RELAX_INTERCEPT
+ select HAVE_KVM_ARCH_TLB_FLUSH_ALL
select KVM_MMIO
select KVM_ARM_HOST
select KVM_ARM_VGIC
select KVM_ARM_TIMER
+ select KVM_GENERIC_DIRTYLOG_READ_PROTECT
---help---
Support hosting virtualized guest machines.
--
1.7.9.5
* [PATCH RESEND v15 10/10] KVM: arm/arm64: Enable Dirty Page logging for ARMv8
From: Christoffer Dall @ 2015-01-11 14:03 UTC
To: linux-arm-kernel
On Thu, Jan 08, 2015 at 05:42:07PM -0800, Mario Smarduch wrote:
> This patch enables ARMv8 dirty page logging support. It plugs ARMv8 into the
> generic layer through a Kconfig symbol and drops the earlier ARM64 constraints
> so logging is enabled at the architecture layer. It applies cleanly on top of
> the patch series posted Dec 15:
> https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012826.html
>
> Patch #11 of the series has been dropped.
Again, the stuff about where this applied and other information related
to the series should go below the '---' markers.
I suggest we iterate on patch 07 and make sure we agree, and then you
send out a complete new series (collect the acks and reviewed-by's in
there please), so I know I'll end up picking the right patches, as this
whole series has sort of sprawled.
>
> Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
Otherwise:
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Thanks,
-Christoffer
* [PATCH RESEND v15 10/10] KVM: arm/arm64: Enable Dirty Page logging for ARMv8
From: Mario Smarduch @ 2015-01-12 16:29 UTC
To: linux-arm-kernel
On 01/11/2015 06:03 AM, Christoffer Dall wrote:
> On Thu, Jan 08, 2015 at 05:42:07PM -0800, Mario Smarduch wrote:
>> This patch enables ARMv8 dirty page logging support. It plugs ARMv8 into the
>> generic layer through a Kconfig symbol and drops the earlier ARM64 constraints
>> so logging is enabled at the architecture layer. It applies cleanly on top of
>> the patch series posted Dec 15:
>> https://lists.cs.columbia.edu/pipermail/kvmarm/2014-December/012826.html
>>
>> Patch #11 of the series has been dropped.
>
> Again, the stuff about where this applied and other information related
> to the series should go below the '---' markers.
>
> I suggest we iterate on patch 07 and make sure we agree, and then you
> send out a complete new series (collect the acks and reviewed-by's in
> there please), so I know I'll end up picking the right patches, as this
> whole series has sort of sprawled.
Yes, the patchset went off track quite a bit. Will do.
- Mario
>
>>
>> Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
>
> Otherwise:
>
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
>
> Thanks,
> -Christoffer
>