From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 5 Aug 2024 23:25:55 +0000
From: Oliver Upton
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, David Matlack, David Stevens
Subject: Re: [PATCH v12 54/84] KVM: arm64: Mark "struct page" pfns accessed/dirty before dropping mmu_lock
References: <20240726235234.228822-1-seanjc@google.com>
	<20240726235234.228822-55-seanjc@google.com>
In-Reply-To: <20240726235234.228822-55-seanjc@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

[+cc Fuad]

Fuad, you mentioned in commit 9c30fc615daa ("KVM: arm64: Move setting
the page as dirty out of the critical section") that restructuring
around the MMU lock was helpful for reuse (presumably for pKVM), but I
lack the context there.

On Fri, Jul 26, 2024 at 04:52:03PM -0700, Sean Christopherson wrote:
> Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking a
> page/folio dirty after it has been written back can make some filesystems
> unhappy (backing KVM guests will such filesystem files is uncommon, and

typo: s/will/with/

> the race is minuscule, hence the lack of complaints). See the link below
> for details.
>
> This will also allow converting arm64 to kvm_release_faultin_page(), which
> requires that mmu_lock be held (for the aforementioned reason).
>
> Link: https://lore.kernel.org/all/cover.1683044162.git.lstoakes@gmail.com
> Signed-off-by: Sean Christopherson
> ---
>  arch/arm64/kvm/mmu.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 22ee37360c4e..ce13c3d884d5 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1685,15 +1685,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>
>  out_unlock:
> +	if (writable && !ret)
> +		kvm_set_pfn_dirty(pfn);

I'm guessing you meant kvm_release_pfn_dirty() here, because this leaks
a reference (rough sketch of what I have in mind at the end of this
mail).

> +	else
> +		kvm_release_pfn_clean(pfn);
> +
>  	read_unlock(&kvm->mmu_lock);
>
>  	/* Mark the page dirty only if the fault is handled successfully */
> -	if (writable && !ret) {
> -		kvm_set_pfn_dirty(pfn);
> +	if (writable && !ret)
>  		mark_page_dirty_in_slot(kvm, memslot, gfn);
> -	}
>
> -	kvm_release_pfn_clean(pfn);
>  	return ret != -EAGAIN ? ret : 0;
>  }
>
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>

--
Thanks,
Oliver
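
P.S. For clarity, this is roughly the tail of user_mem_abort() I'd
expect after the suggested change. Untested sketch, assuming
kvm_release_pfn_dirty() is still the helper that both marks the page
dirty and drops the reference at this point in the series:

out_unlock:
	/*
	 * Sketch: kvm_release_pfn_dirty() marks the page dirty *and*
	 * drops the reference, so the writable path still dirties the
	 * page under mmu_lock without leaking a refcount.
	 */
	if (writable && !ret)
		kvm_release_pfn_dirty(pfn);
	else
		kvm_release_pfn_clean(pfn);

	read_unlock(&kvm->mmu_lock);

	/* Mark the page dirty only if the fault is handled successfully */
	if (writable && !ret)
		mark_page_dirty_in_slot(kvm, memslot, gfn);

	return ret != -EAGAIN ? ret : 0;
}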