From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 23 Mar 2022 18:21:55 +0000
From: Mingwei Zhang
To: Ben Gardon
Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, David Matlack, Jing Zhang, Peter Xu,
	Ben Gardon
Subject: Re: [PATCH 4/4] selftests: KVM: use dirty logging to check if page stats work correctly
References: <20220321002638.379672-1-mizhang@google.com>
 <20220321002638.379672-5-mizhang@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Mar 21, 2022, Ben Gardon wrote:
> On Sun, Mar 20, 2022 at 5:26 PM Mingwei Zhang wrote:
> >
> > When dirty logging is enabled, KVM splits the all hugepage mapping in
> > NPT/EPT into the smallest 4K size. This property could be used to check if
>
> Note this is only true if eager page splitting is enabled. It would be
> more accurate to say:
> "While dirty logging is enabled, KVM will re-map any accessed page in
> NPT/EPT at 4K."
>
> > the page stats metrics work properly in KVM mmu. At the same time, this
> > logic might be used the other way around: using page stats to verify if
> > dirty logging really splits all huge pages. Moreover, when dirty logging is
>
> It might be worth having a follow up commit which checks if eager
> splitting is enabled and changes the assertions accordingly.
>
> > disabled, KVM zaps corresponding SPTEs and we could check whether the large
> > pages come back when guest touches the pages again.
> >
> > So add page stats checking in dirty logging performance selftest. In
> > particular, add checks in three locations:
> > - just after vm is created;
> > - after populating memory into vm but before enabling dirty logging;
> > - just after turning on dirty logging.
>
> Note a key stage here is after dirty logging is enabled, and then the
> VM touches all the memory in the data region.
> I believe that's the point at which you're making the assertion that
> all mappings are 4k currently, which is the right place if eager
> splitting is not enabled.
>
> > - after one final iteration after turning off dirty logging.
> >
> > Tested using commands:
> > - ./dirty_log_perf_test -s anonymous_hugetlb_1gb
> > - ./dirty_log_perf_test -s anonymous_thp
> >
> > Cc: Sean Christopherson
> > Cc: David Matlack
> > Cc: Jing Zhang
> > Cc: Peter Xu
> >
> > Suggested-by: Ben Gardon
> > Signed-off-by: Mingwei Zhang
> > ---
> >  .../selftests/kvm/dirty_log_perf_test.c | 52 +++++++++++++++++++
> >  1 file changed, 52 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> > index 1954b964d1cf..ab0457d91658 100644
> > --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> > +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> > @@ -19,6 +19,10 @@
> >  #include "perf_test_util.h"
> >  #include "guest_modes.h"
> >
> > +#ifdef __x86_64__
> > +#include "processor.h"
> > +#endif
> > +
> >  /* How many host loops to run by default (one KVM_GET_DIRTY_LOG for each loop)*/
> >  #define TEST_HOST_LOOP_N 2UL
> >
> > @@ -185,6 +189,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> >  					 p->slots, p->backing_src,
> >  					 p->partition_vcpu_memory_access);
> >
> > +#ifdef __x86_64__
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_4k") == 0,
> > +		    "4K page is non zero");
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_2m") == 0,
> > +		    "2M page is non zero");
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_1g") == 0,
> > +		    "1G page is non zero");
> > +#endif
> >  	perf_test_set_wr_fract(vm, p->wr_fract);
> >
> >  	guest_num_pages = (nr_vcpus * guest_percpu_mem_size) >> vm_get_page_shift(vm);
> >
> > @@ -222,6 +234,16 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> >  	pr_info("Populate memory time: %ld.%.9lds\n",
> >  		ts_diff.tv_sec, ts_diff.tv_nsec);
> >
> > +#ifdef __x86_64__
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_4k") != 0,
> > +		    "4K page is zero");
> > +	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_THP)
>
> This should also handle 2M hugetlb memory.
> I think there might be a library function to translate backing src
> type to page size too, which could make this check cleaner.

Just went through the selftest code again: this logic seems quite
x86-specific, and there are no similar checks elsewhere. So I think I'll
just add another condition here for now.

> > +		TEST_ASSERT(vm_get_single_stat(vm, "pages_2m") != 0,
> > +			    "2M page is zero");
> > +	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB)
> > +		TEST_ASSERT(vm_get_single_stat(vm, "pages_1g") != 0,
> > +			    "1G page is zero");
> > +#endif
> >  	/* Enable dirty logging */
> >  	clock_gettime(CLOCK_MONOTONIC, &start);
> >  	enable_dirty_logging(vm, p->slots);
> >
> > @@ -267,6 +289,14 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> >  				iteration, ts_diff.tv_sec, ts_diff.tv_nsec);
> >  		}
> >  	}
> > +#ifdef __x86_64__
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_4k") != 0,
> > +		    "4K page is zero after dirty logging");
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_2m") == 0,
> > +		    "2M page is non-zero after dirty logging");
> > +	TEST_ASSERT(vm_get_single_stat(vm, "pages_1g") == 0,
> > +		    "1G page is non-zero after dirty logging");
> > +#endif
>
> Note this is after dirty logging has been enabled, AND all pages in
> the data region have been written by the guest.
>
> >
> >  	/* Disable dirty logging */
> >  	clock_gettime(CLOCK_MONOTONIC, &start);
> >
> > @@ -275,6 +305,28 @@ static void run_test(enum vm_guest_mode mode, void *arg)
> >  	pr_info("Disabling dirty logging time: %ld.%.9lds\n",
> >  		ts_diff.tv_sec, ts_diff.tv_nsec);
> >
> > +#ifdef __x86_64__
> > +	/*
> > +	 * Increment iteration to run the vcpus again to verify if huge pages
> > +	 * come back.
> > +	 */
> > +	iteration++;
> > +	pr_info("Starting the final iteration to verify page stats\n");
> > +
> > +	for (vcpu_id = 0; vcpu_id < nr_vcpus; vcpu_id++) {
> > +		while (READ_ONCE(vcpu_last_completed_iteration[vcpu_id])
> > +		       != iteration)
> > +			;
> > +	}
>
> We might as well do this on all archs.
> Even without the stats, it at
> least validates that disabling dirty logging doesn't break the VM.
>
> > +
> > +	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_THP)
> > +		TEST_ASSERT(vm_get_single_stat(vm, "pages_2m") != 0,
> > +			    "2M page is zero");
> > +	if (p->backing_src == VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB)
> > +		TEST_ASSERT(vm_get_single_stat(vm, "pages_1g") != 0,
> > +			    "1G page is zero");
> > +#endif
> > +
> >  	/* Tell the vcpu thread to quit */
> >  	host_quit = true;
> >  	perf_test_join_vcpu_threads(nr_vcpus);
> > --
> > 2.35.1.894.gb6a874cedc-goog
> >