Date: Thu, 27 Jun 2024 07:44:55 +0200
From: Christoph Hellwig
To: Alistair Popple
Cc: dan.j.williams@intel.com, vishal.l.verma@intel.com, dave.jiang@intel.com,
	logang@deltatee.com, bhelgaas@google.com, jack@suse.cz, jgg@ziepe.ca,
	catalin.marinas@arm.com, will@kernel.org, mpe@ellerman.id.au,
	npiggin@gmail.com, dave.hansen@linux.intel.com, ira.weiny@intel.com,
	willy@infradead.org, djwong@kernel.org, tytso@mit.edu,
	linmiaohe@huawei.com, david@redhat.com, peterx@redhat.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
	jhubbard@nvidia.com, hch@lst.de, david@fromorbit.com
Subject: Re: [PATCH 10/13] fs/dax: Properly refcount fs dax pages
Message-ID: <20240627054455.GF14837@lst.de>

> diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> index eb61598..b7a31ae 100644
> --- a/drivers/dax/device.c
> +++ b/drivers/dax/device.c
> @@ -126,11 +126,11 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
>  		return VM_FAULT_SIGBUS;
>  	}
>
> -	pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
> +	pfn = phys_to_pfn_t(phys, 0);
>
>  	dax_set_mapping(vmf, pfn, fault_size);
>
> -	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
> +	return dax_insert_pfn(vmf->vma, vmf->address, pfn, vmf->flags & FAULT_FLAG_WRITE);

Plenty of overly long lines here and later.

Should dax_insert_pfn take a vm_fault structure instead of the vma?  Or
are there potential use cases that aren't from the fault path?
Similarly, instead of the bool write, passing the fault flags might
actually make things more readable (rough sketch at the end of this
mail).

Also at least currently it seems like there are no modular users despite
the export, or am I missing something?

> +	blk_queue_flag_set(QUEUE_FLAG_DAX, q);

Just as a heads up, setting of these flags has changed a lot in
linux-next.

> {
> +	/*
> +	 * Make sure we flush any cached data to the page now that it's free.
> +	 */
> +	if (PageDirty(page))
> +		dax_flush(NULL, page_address(page), page_size(page));
> +

Adding the magic dax_dev == NULL case to dax_flush and going through it
vs just calling arch_wb_cache_pmem directly here seems odd.  But I also
don't quite understand how it is related to the rest of the patch anyway.

> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -373,6 +373,8 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
>  	unsigned long start = addr;
>
>  	ptl = pmd_trans_huge_lock(pmd, vma);
> +	if (vma_is_dax(vma))
> +		ptl = NULL;
>  	if (ptl) {

This feels sufficiently magic to warrant a comment.

>  		if (!pmd_present(*pmd))
>  			goto out;

> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index b7e1599..f11ee0d 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1016,7 +1016,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>  	 */
>  	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
>  	    pgmap->type == MEMORY_DEVICE_COHERENT ||
> -	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA)
> +	    pgmap->type == MEMORY_DEVICE_PCI_P2PDMA ||
> +	    pgmap->type == MEMORY_DEVICE_FS_DAX)
>  		set_page_count(page, 0);
> }

So we'll skip this for MEMORY_DEVICE_GENERIC only.  Does anyone remember
if that's actively harmful or just not needed?  If the latter, it might
be simpler to just set the page count unconditionally here.
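I.e. something like the below, purely a sketch and the comment wording
is my guess, not from the patch:

	/*
	 * Assuming MEMORY_DEVICE_GENERIC does not actually rely on the
	 * initial refcount of one, every pgmap type can simply start
	 * out with a zero refcount here and take its first reference
	 * when the page is handed out.
	 */
	set_page_count(page, 0);
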
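Coming back to the dax_insert_pfn signature question above, what I had
in mind is roughly this (hypothetical sketch only, not what the patch
implements):

	vm_fault_t dax_insert_pfn(struct vm_fault *vmf, pfn_t pfn, bool write);

with the __dev_dax_pte_fault() caller becoming:

	return dax_insert_pfn(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);

That gets the vma and address from the vm_fault, and if the fault flags
read better than the bool, the last argument could be dropped entirely
in favour of looking at vmf->flags inside dax_insert_pfn().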