From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 5 Aug 2024 16:22:54 -0700
From: Sean Christopherson
To: maobibo
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Huacai Chen,
 Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Claudio Imbrenda, kvm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, David Stevens
Subject: Re: [PATCH v12 64/84] KVM: LoongArch: Mark "struct page" pfns dirty only in "slow" page fault path
In-Reply-To: <345d89c1-4f31-6b49-2cd4-a0696210fa7c@loongson.cn>
References: <20240726235234.228822-1-seanjc@google.com>
 <20240726235234.228822-65-seanjc@google.com>
 <345d89c1-4f31-6b49-2cd4-a0696210fa7c@loongson.cn>

On Sat, Aug 03, 2024, maobibo wrote:
> On 2024/8/3 3:32 AM, Sean Christopherson wrote:
> > On Fri, Aug 02, 2024, maobibo wrote:
> > > On 2024/7/27 7:52 AM, Sean Christopherson wrote:
> > > > Mark pages/folios dirty only in the slow page fault path, i.e. only when
> > > > mmu_lock is held and the operation is mmu_notifier-protected, as marking a
> > > > page/folio dirty after it has been written back can make some filesystems
> > > > unhappy (backing KVM guests with such filesystem files is uncommon, and
> > > > the race is minuscule, hence the lack of complaints).
> > > >
> > > > See the link below for details.
> > > >
> > > > Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
> > > > Signed-off-by: Sean Christopherson
> > > > ---
> > > >  arch/loongarch/kvm/mmu.c | 18 ++++++++++--------
> > > >  1 file changed, 10 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> > > > index 2634a9e8d82c..364dd35e0557 100644
> > > > --- a/arch/loongarch/kvm/mmu.c
> > > > +++ b/arch/loongarch/kvm/mmu.c
> > > > @@ -608,13 +608,13 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
> > > >  		if (kvm_pte_young(changed))
> > > >  			kvm_set_pfn_accessed(pfn);
> > > > -		if (kvm_pte_dirty(changed)) {
> > > > -			mark_page_dirty(kvm, gfn);
> > > > -			kvm_set_pfn_dirty(pfn);
> > > > -		}
> > > >  		if (page)
> > > >  			put_page(page);
> > > >  	}
> > > > +
> > > > +	if (kvm_pte_dirty(changed))
> > > > +		mark_page_dirty(kvm, gfn);
> > > > +
> > > >  	return ret;
> > > >  out:
> > > >  	spin_unlock(&kvm->mmu_lock);
> > > > @@ -915,12 +915,14 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
> > > >  	else
> > > >  		++kvm->stat.pages;
> > > >  	kvm_set_pte(ptep, new_pte);
> > > > -	spin_unlock(&kvm->mmu_lock);
> > > > -	if (prot_bits & _PAGE_DIRTY) {
> > > > -		mark_page_dirty_in_slot(kvm, memslot, gfn);
> > > > +	if (writeable)
> > >
> > > Is it better to use write or (prot_bits & _PAGE_DIRTY) here? writable is
> > > the pte permission from function hva_to_pfn_slow(), write is the fault action.
> > 
> > Marking folios dirty in the slow/full path basically necessitates marking the
> > folio dirty if KVM creates a writable SPTE, as KVM won't mark the folio dirty
> > if/when _PAGE_DIRTY is set.
> > 
> > Practically speaking, I'm 99.9% certain it doesn't matter.  The folio is marked
> > dirty by core MM when the folio is made writable, and cleaning the folio triggers
> > an mmu_notifier invalidation.  I.e. if the page is mapped writable in KVM's
> 
> yes, it is. Thanks for the explanation. kvm_set_pfn_dirty() can be put only
> in the slow page fault path. My only concern is the fault type: a read fault
> can set the pte entry writable without setting _PAGE_DIRTY in the stage-2 mmu
> table.
> 
> > stage-2 PTEs, then its folio has already been marked dirty.
> 
> Consider one condition, although I do not know whether it actually exists:
> the user mode VMM writes the folio through its hva address first, then a VCPU
> thread *reads* the folio. In the primary mmu table, the pte entry is writable
> and _PAGE_DIRTY is set; in the secondary mmu table (stage-2 PTE table), it is
> pte_none since the folio is accessed for the first time, so the slow page
> fault path is taken to fill the stage-2 mmu page table.
> 
> Since it is a read fault, the stage-2 PTE will be created with _PAGE_WRITE
> (coming from function hva_to_pfn_slow()), however _PAGE_DIRTY is not set. Do
> we need to call kvm_set_pfn_dirty() in this situation?

If KVM doesn't mark the folio dirty when the stage-2 _PAGE_DIRTY flag is set,
i.e. as proposed in this series, then yes, KVM needs to call kvm_set_pfn_dirty()
even though the VM hasn't (yet) written to the memory.  In practice, KVM calling
kvm_set_pfn_dirty() is redundant the majority of the time, as the stage-1 PTE
will have _PAGE_DIRTY set, and that will get propagated to the folio when the
primary MMU does anything relevant with the PTE.  And for file systems that care
about writeback, odds are very good that the folio was marked dirty even earlier,
when MM invoked vm_operations_struct.page_mkwrite().
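
To make the intent concrete, here is a rough sketch of how the tail of
kvm_map_page() ends up looking.  The quoted hunk above is trimmed at
"+	if (writeable)", so this is an approximation rather than the literal patch;
writeable, prot_bits, pfn, gfn and memslot are the locals already visible in
the quoted context:

	kvm_set_pte(ptep, new_pte);

	/*
	 * Mark the pfn/folio dirty while mmu_lock is still held, keyed off
	 * the mapping being created writable rather than off the fault being
	 * a write, so that a read fault which installs a writable stage-2 PTE
	 * still marks the folio dirty.
	 */
	if (writeable)
		kvm_set_pfn_dirty(pfn);

	spin_unlock(&kvm->mmu_lock);

	/* The dirty bitmap is still updated only when the PTE is dirty. */
	if (prot_bits & _PAGE_DIRTY)
		mark_page_dirty_in_slot(kvm, memslot, gfn);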
The reason I am pushing to have all architectures mark pages/folios dirty in
the slow page fault path is that a false positive (marking a folio dirty without
the folio ever being written in _any_ context since the last pte_mkclean()) is
rare, and at worst results in an unnecessary writeback.  On the other hand,
marking folios dirty in fast page fault handlers (or anywhere else that isn't
protected by mmu_notifiers) is technically unsafe.

In other words, the intent is to sacrifice accuracy to improve
stability/robustness, because the vast majority of the time the loss in accuracy
has no effect, and the worst case scenario is that the kernel does I/O that
wasn't necessary.