From: Matthew Wilcox
To: Piotr Balcer, Dan Williams, linux-fsdevel@vger.kernel.org, Jan Kara, linux-nvdimm@lists.01.org
Cc: Matthew Wilcox
Subject: [PATCH] dax: Flush partial PMDs correctly
Date: Thu, 28 Feb 2019 20:24:48 -0800
Message-Id: <20190301042448.6868-1-willy@infradead.org>

The radix tree would rewind the index in an iterator to the lowest
index of a multi-slot entry.  The XArray iterators instead leave the
index unchanged, but I overlooked that when converting DAX from the
radix tree to the XArray.  Adjust the index that we use for flushing
to the start of the PMD range.
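For illustration, the rounding this patch performs can be sketched in a
few lines of userspace C (the order-9 PMD entry and the sample index
below are assumptions chosen for the example, not values taken from the
patch):

	#include <stdio.h>

	int main(void)
	{
		/* A PMD-sized DAX entry covers 2^9 = 512 4KiB pages. */
		unsigned long order = 9;		/* dax_entry_order(entry) for a PMD */
		unsigned long count = 1UL << order;	/* 512 pages */

		/*
		 * The XArray iterator may hand us an index anywhere
		 * inside the entry; round it down to the entry start.
		 */
		unsigned long xa_index = 0x12345;	/* hypothetical index inside the PMD */
		unsigned long index = xa_index & ~(count - 1);

		/* Prints: flush 512 pages starting at index 0x12200 */
		printf("flush %lu pages starting at index %#lx\n", count, index);
		return 0;
	}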
Fixes: c1901cd33cf4 ("page cache: Convert find_get_entries_tag to XArray")
Reported-by: Piotr Balcer
Tested-by: Dan Williams
Signed-off-by: Matthew Wilcox
---
 fs/dax.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 6959837cc465..f7a7af766efe 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -843,9 +843,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 		struct address_space *mapping, void *entry)
 {
-	unsigned long pfn;
+	unsigned long pfn, index, count;
 	long ret = 0;
-	size_t size;
 
 	/*
 	 * A page got tagged dirty in DAX mapping? Something is seriously
@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	xas_unlock_irq(xas);
 
 	/*
-	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
-	 * in the middle of a PMD, the 'index' we are given will be aligned to
-	 * the start index of the PMD, as will the pfn we pull from 'entry'.
+	 * If dax_writeback_mapping_range() was given a wbc->range_start
+	 * in the middle of a PMD, the 'index' we are given needs to be
+	 * aligned to the start index of the PMD.
 	 * This allows us to flush for PMD_SIZE and not have to worry about
 	 * partial PMD writebacks.
 	 */
 	pfn = dax_to_pfn(entry);
-	size = PAGE_SIZE << dax_entry_order(entry);
+	count = 1UL << dax_entry_order(entry);
+	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, xas->xa_index, pfn);
-	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
+	dax_entry_mkclean(mapping, index, pfn);
+	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
 	 * cannot be new dirty data in the pfn after the flush has completed as
@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
 	dax_wake_entry(xas, entry, false);
 
-	trace_dax_writeback_one(mapping->host, xas->xa_index,
-			size >> PAGE_SHIFT);
+	trace_dax_writeback_one(mapping->host, index, count);
 	return ret;
 
 put_unlocked:
-- 
2.20.1