From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ahmed Abd El Mawgood
To: kvm@vger.kernel.org, Kernel Hardening,
    virtualization@lists.linux-foundation.org, linux-doc@vger.kernel.org,
    x86@kernel.org
Cc: Paolo Bonzini, rkrcmar@redhat.com, Jonathan Corbet, Thomas Gleixner,
    Ingo Molnar, hpa@zytor.com, Kees Cook, Ard Biesheuvel,
    David Hildenbrand, Boris Lukashev, David Vrabel, nigel.edwards@hpe.com,
    Rik van Riel, Ahmed Abd El Mawgood
Subject: [PATCH 2/3] [RFC V3] KVM: X86: Adding arbitrary data pointer in kvm memslot iterator functions
Date: Thu, 19 Jul 2018 23:38:01 +0200
Message-Id: <20180719213802.17161-3-ahmedsoliman0x666@gmail.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20180719213802.17161-1-ahmedsoliman0x666@gmail.com>
References: <20180719213802.17161-1-ahmedsoliman0x666@gmail.com>
X-Mailing-List: linux-doc@vger.kernel.org

This makes it possible to share data with the slot_level_handler callback. In
my case I need to share a counter of the pages traversed so it can be used
with a bitmap, and being able to pass an arbitrary memory pointer into the
slot_level_handler callback makes that easy.

Signed-off-by: Ahmed Abd El Mawgood
---
 arch/x86/kvm/mmu.c | 65 +++++++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d594690d8b95..77661530b2c4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1418,7 +1418,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 
 static bool __rmap_write_protect(struct kvm *kvm,
                                  struct kvm_rmap_head *rmap_head,
-                                 bool pt_protect)
+                                 bool pt_protect, void *data)
 {
        u64 *sptep;
        struct rmap_iterator iter;
@@ -1457,7 +1457,8 @@ static bool wrprot_ad_disabled_spte(u64 *sptep)
  * - W bit on ad-disabled SPTEs.
  * Returns true iff any D or W bits were cleared.
  */
-static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+                               void *data)
 {
        u64 *sptep;
        struct rmap_iterator iter;
@@ -1483,7 +1484,8 @@ static bool spte_set_dirty(u64 *sptep)
        return mmu_spte_update(sptep, spte);
 }
 
-static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool __rmap_set_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+                             void *data)
 {
        u64 *sptep;
        struct rmap_iterator iter;
@@ -1515,7 +1517,7 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
        while (mask) {
                rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
                                          PT_PAGE_TABLE_LEVEL, slot);
-               __rmap_write_protect(kvm, rmap_head, false);
+               __rmap_write_protect(kvm, rmap_head, false, NULL);
 
                /* clear the first set bit */
                mask &= mask - 1;
@@ -1541,7 +1543,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
        while (mask) {
                rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
                                          PT_PAGE_TABLE_LEVEL, slot);
-               __rmap_clear_dirty(kvm, rmap_head);
+               __rmap_clear_dirty(kvm, rmap_head, NULL);
 
                /* clear the first set bit */
                mask &= mask - 1;
@@ -1594,7 +1596,8 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 
        for (i = PT_PAGE_TABLE_LEVEL; i <= PT_MAX_HUGEPAGE_LEVEL; ++i) {
                rmap_head = __gfn_to_rmap(gfn, i, slot);
-               write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+               write_protected |= __rmap_write_protect(kvm, rmap_head, true,
+                                                       NULL);
        }
 
        return write_protected;
@@ -1608,7 +1611,8 @@ static bool rmap_write_protect(struct kvm_vcpu *vcpu, u64 gfn)
        return kvm_mmu_slot_gfn_write_protect(vcpu->kvm, slot, gfn);
 }
 
-static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
+static bool kvm_zap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
+                          void *data)
 {
        u64 *sptep;
        struct rmap_iterator iter;
@@ -1628,7 +1632,7 @@ static int kvm_unmap_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
                           struct kvm_memory_slot *slot, gfn_t gfn, int level,
                           unsigned long data)
 {
-       return kvm_zap_rmapp(kvm, rmap_head);
+       return kvm_zap_rmapp(kvm, rmap_head, NULL);
 }
 
 static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
@@ -5086,13 +5090,15 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
 }
 
 /* The return value indicates if tlb flush on all vcpus is needed. */
-typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+typedef bool (*slot_level_handler) (struct kvm *kvm,
+                               struct kvm_rmap_head *rmap_head, void *data);
 
 /* The caller should hold mmu-lock before calling this function. */
 static __always_inline bool
 slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
                        slot_level_handler fn, int start_level, int end_level,
-                       gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
+                       gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb,
+                       void *data)
 {
        struct slot_rmap_walk_iterator iterator;
        bool flush = false;
@@ -5100,7 +5106,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
        for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
                        end_gfn, &iterator) {
                if (iterator.rmap)
-                       flush |= fn(kvm, iterator.rmap);
+                       flush |= fn(kvm, iterator.rmap, data);
 
                if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                        if (flush && lock_flush_tlb) {
@@ -5122,36 +5128,36 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 static __always_inline bool
 slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
                  slot_level_handler fn, int start_level, int end_level,
-                 bool lock_flush_tlb)
+                 bool lock_flush_tlb, void *data)
 {
        return slot_handle_level_range(kvm, memslot, fn, start_level,
                        end_level, memslot->base_gfn,
                        memslot->base_gfn + memslot->npages - 1,
-                       lock_flush_tlb);
+                       lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-                     slot_level_handler fn, bool lock_flush_tlb)
+                     slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
        return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-                                PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+                                PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
-                       slot_level_handler fn, bool lock_flush_tlb)
+                       slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
        return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL + 1,
-                                PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+                                PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb, data);
 }
 
 static __always_inline bool
 slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
-                slot_level_handler fn, bool lock_flush_tlb)
+                slot_level_handler fn, bool lock_flush_tlb, void *data)
 {
        return slot_handle_level(kvm, memslot, fn, PT_PAGE_TABLE_LEVEL,
-                                PT_PAGE_TABLE_LEVEL, lock_flush_tlb);
+                                PT_PAGE_TABLE_LEVEL, lock_flush_tlb, data);
 }
 
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
@@ -5173,7 +5179,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 
                        slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
                                                PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
-                                               start, end - 1, true);
+                                               start, end - 1, true, NULL);
                }
        }
 
@@ -5181,9 +5187,10 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 }
 
 static bool slot_rmap_write_protect(struct kvm *kvm,
-                                   struct kvm_rmap_head *rmap_head)
+                                   struct kvm_rmap_head *rmap_head,
+                                   void *data)
 {
-       return __rmap_write_protect(kvm, rmap_head, false);
+       return __rmap_write_protect(kvm, rmap_head, false, data);
 }
 
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
@@ -5193,7 +5200,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 
        spin_lock(&kvm->mmu_lock);
        flush = slot_handle_all_level(kvm, memslot, slot_rmap_write_protect,
-                                     false);
+                                     false, NULL);
        spin_unlock(&kvm->mmu_lock);
 
        /*
@@ -5219,7 +5226,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
-                                        struct kvm_rmap_head *rmap_head)
+                                        struct kvm_rmap_head *rmap_head,
+                                        void *data)
 {
        u64 *sptep;
        struct rmap_iterator iter;
@@ -5257,7 +5265,7 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
        /* FIXME: const-ify all uses of struct kvm_memory_slot. */
        spin_lock(&kvm->mmu_lock);
        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
-                        kvm_mmu_zap_collapsible_spte, true);
+                        kvm_mmu_zap_collapsible_spte, true, NULL);
        spin_unlock(&kvm->mmu_lock);
 }
 
@@ -5267,7 +5275,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
        bool flush;
 
        spin_lock(&kvm->mmu_lock);
-       flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
+       flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false, NULL);
        spin_unlock(&kvm->mmu_lock);
 
        lockdep_assert_held(&kvm->slots_lock);
@@ -5290,7 +5298,7 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
 
        spin_lock(&kvm->mmu_lock);
        flush = slot_handle_large_level(kvm, memslot, slot_rmap_write_protect,
-                                       false);
+                                       false, NULL);
        spin_unlock(&kvm->mmu_lock);
 
        /* see kvm_mmu_slot_remove_write_access */
@@ -5307,7 +5315,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
        bool flush;
 
        spin_lock(&kvm->mmu_lock);
-       flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false);
+       flush = slot_handle_all_level(kvm, memslot, __rmap_set_dirty, false,
+                                     NULL);
        spin_unlock(&kvm->mmu_lock);
 
        lockdep_assert_held(&kvm->slots_lock);
-- 
2.16.4
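
To make the intended use of the new void *data argument concrete, here is a
minimal, stand-alone sketch of the same callback-with-context idiom in plain
user-space C. It is not part of the patch: walk_entries, walk_stats and
count_entries are invented names, not KVM symbols, and the in-tree user of the
pointer (the page counter feeding a bitmap mentioned above) is not shown here.

/*
 * Stand-alone illustration of the idiom introduced above: a walker forwards
 * an opaque "void *data" pointer to its per-entry callback so a caller can
 * thread private state (here a simple counter) through the walk without
 * globals.  All names are invented for the example.
 */
#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>

/* Mirrors the reworked slot_level_handler shape: (ctx, entry, data). */
typedef bool (*entry_handler)(void *ctx, unsigned long entry, void *data);

/* Private state a caller wants updated by the callback. */
struct walk_stats {
        unsigned long entries_visited;
};

/* Example callback: counts entries via the data pointer it is handed. */
static bool count_entries(void *ctx, unsigned long entry, void *data)
{
        struct walk_stats *stats = data;

        (void)ctx;
        (void)entry;
        stats->entries_visited++;
        return false;   /* nothing to flush in this toy example */
}

/* Walker in the style of slot_handle_level_range(): passes "data" through. */
static bool walk_entries(void *ctx, const unsigned long *entries, size_t n,
                         entry_handler fn, void *data)
{
        bool flush = false;
        size_t i;

        for (i = 0; i < n; i++)
                flush |= fn(ctx, entries[i], data);
        return flush;
}

int main(void)
{
        unsigned long fake_entries[] = { 0x1000, 0x2000, 0x3000 };
        struct walk_stats stats = { 0 };

        walk_entries(NULL, fake_entries, 3, count_entries, &stats);
        printf("visited %lu entries\n", stats.entries_visited);
        return 0;
}

The point of the pattern is that the walker forwards the pointer untouched, so
existing callers that need nothing extra simply pass NULL (as this patch does
throughout), while later callers can hand their own state to the handler.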