From: marc.zyngier@arm.com (Marc Zyngier)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v14 7/7] KVM: arm: page logging 2nd stage fault handling
Date: Fri, 14 Nov 2014 16:45:20 +0000
Message-ID: <546631A0.4050301@arm.com>
In-Reply-To: <1415930268-7674-8-git-send-email-m.smarduch@samsung.com>
Hi Mario,
On 14/11/14 01:57, Mario Smarduch wrote:
> This patch adds support for handling 2nd stage page faults during migration;
> it disables faulting in huge pages and dissolves huge pages into page tables.
> If migration is canceled, huge pages are used again.
>
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
> Signed-off-by: Mario Smarduch <m.smarduch@samsung.com>
> ---
> arch/arm/kvm/mmu.c | 56 ++++++++++++++++++++++++++++++++++++++++++++--------
> 1 file changed, 48 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 8137455..ff88e5b 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -47,6 +47,20 @@ static phys_addr_t hyp_idmap_vector;
> #define kvm_pmd_huge(_x) (pmd_huge(_x) || pmd_trans_huge(_x))
> #define kvm_pud_huge(_x) pud_huge(_x)
>
> +#define IOMAP_ATTR 0x1
> +#define LOGGING_ACTIVE 0x2
> +#define SET_SPTE_FLAGS(l, i) ((l) << (LOGGING_ACTIVE - 1) | \
> + (i) << (IOMAP_ATTR - 1))
I suffered a minor brain haemorrhage on this one. How about something
that is more in line with what's done almost everywhere else:
#define KVM_S2PTE_FLAG_IS_IOMAP (1UL << 0)
#define KVM_S2PTE_FLAG_LOGGING_ACTIVE (1UL << 1)
> +static bool kvm_get_logging_state(struct kvm_memory_slot *memslot)
> +{
> +#ifdef CONFIG_ARM
> + return !!memslot->dirty_bitmap;
> +#else
> + return false;
> +#endif
> +}
> +
> static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
> {
> /*
> @@ -626,10 +640,13 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
> }
>
> static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> - phys_addr_t addr, const pte_t *new_pte, bool iomap)
> + phys_addr_t addr, const pte_t *new_pte,
> + unsigned long flags)
> {
> pmd_t *pmd;
> pte_t *pte, old_pte;
> + bool iomap = flags & IOMAP_ATTR;
> + bool logging_active = flags & LOGGING_ACTIVE;
If you convert these two variables to be 'unsigned long'...
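Concretely, and just as a sketch of what I mean (not tested), with the flag
names suggested above:

	unsigned long iomap = flags & KVM_S2PTE_FLAG_IS_IOMAP;
	unsigned long logging_active = flags & KVM_S2PTE_FLAG_LOGGING_ACTIVE;

which still works fine as a truth value in the tests below...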
>
> /* Create stage-2 page table mapping - Level 1 */
> pmd = stage2_get_pmd(kvm, cache, addr);
> @@ -641,6 +658,18 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
> return 0;
> }
>
> + /*
> + * While dirty memory logging, clear PMD entry for huge page and split
> + * into smaller pages, to track dirty memory at page granularity.
> + */
> + if (logging_active && kvm_pmd_huge(*pmd)) {
> + phys_addr_t ipa = pmd_pfn(*pmd) << PAGE_SHIFT;
> +
> + pmd_clear(pmd);
> + kvm_tlb_flush_vmid_ipa(kvm, ipa);
> + put_page(virt_to_page(pmd));
> + }
> +
> /* Create stage-2 page mappings - Level 2 */
> if (pmd_none(*pmd)) {
> if (!cache)
> @@ -693,7 +722,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> if (ret)
> goto out;
> spin_lock(&kvm->mmu_lock);
> - ret = stage2_set_pte(kvm, &cache, addr, &pte, true);
> + ret = stage2_set_pte(kvm, &cache, addr, &pte, IOMAP_ATTR);
> spin_unlock(&kvm->mmu_lock);
> if (ret)
> goto out;
> @@ -908,6 +937,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> struct vm_area_struct *vma;
> pfn_t pfn;
> pgprot_t mem_type = PAGE_S2;
> + bool logging_active = kvm_get_logging_state(memslot);
and this to be:
unsigned long logging_active = 0;
if (kvm_get_logging_state(memslot))
logging_active = KVM_S2PTE_FLAG_LOGGING_ACTIVE;
>
> write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
> if (fault_status == FSC_PERM && !write_fault) {
> @@ -918,7 +948,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> /* Let's check if we will get back a huge page backed by hugetlbfs */
> down_read(&current->mm->mmap_sem);
> vma = find_vma_intersection(current->mm, hva, hva + 1);
> - if (is_vm_hugetlb_page(vma)) {
> + if (is_vm_hugetlb_page(vma) && !logging_active) {
> hugetlb = true;
> gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
> } else {
> @@ -964,7 +994,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> spin_lock(&kvm->mmu_lock);
> if (mmu_notifier_retry(kvm, mmu_seq))
> goto out_unlock;
> - if (!hugetlb && !force_pte)
> + if (!hugetlb && !force_pte && !logging_active)
> hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
>
> if (hugetlb) {
> @@ -978,16 +1008,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
> } else {
> pte_t new_pte = pfn_pte(pfn, mem_type);
> + unsigned long flags = SET_SPTE_FLAGS(logging_active,
> + mem_type == PAGE_S2_DEVICE);
... you can convert this to:
unsigned long flags = logging_active;
if (mem_type == PAGE_S2_DEVICE)
flags |= KVM_S2PTE_FLAG_IS_IOMAP;
Yes, this is more verbose, but I can decipher it without any effort.
Call me lazy! ;-)
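(And for consistency, the kvm_phys_addr_ioremap() call further up would then
simply read

	ret = stage2_set_pte(kvm, &cache, addr, &pte, KVM_S2PTE_FLAG_IS_IOMAP);

which is purely the rename, nothing functional.)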
> if (writable) {
> kvm_set_s2pte_writable(&new_pte);
> kvm_set_pfn_dirty(pfn);
> }
> coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
> - ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
> - mem_type == PAGE_S2_DEVICE);
> + ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
> }
>
> -
> + if (write_fault)
> + mark_page_dirty(kvm, gfn);
> out_unlock:
> spin_unlock(&kvm->mmu_lock);
> kvm_release_pfn_clean(pfn);
> @@ -1137,7 +1169,15 @@ static void kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, void *data)
> {
> pte_t *pte = (pte_t *)data;
>
> - stage2_set_pte(kvm, NULL, gpa, pte, false);
> + /*
> + * We can always call stage2_set_pte with logging_active == false,
> + * because MMU notifiers will have unmapped a huge PMD before calling
> + * ->change_pte() (which in turn calls kvm_set_spte_hva()) and therefore
> + * stage2_set_pte() never needs to clear out a huge PMD through this
> + * calling path.
> + */
> +
> + stage2_set_pte(kvm, NULL, gpa, pte, 0);
> }
Other than this:
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Thanks,
M.
--
Jazz is not dead. It just smells funny...