From: Matthew Wilcox
To: Piotr Balcer, Dan Williams, linux-fsdevel@vger.kernel.org, Jan Kara, linux-nvdimm@lists.01.org
Cc: Matthew Wilcox
Subject: [PATCH] dax: Flush partial PMDs correctly
Date: Fri, 1 Mar 2019 11:12:41 -0800
Message-Id: <20190301191241.27842-1-willy@infradead.org>

The radix tree would rewind the index in an iterator to the lowest
index of a multi-slot entry.  The XArray iterators instead leave the
index unchanged, but I overlooked that when converting DAX from the
radix tree to the XArray.  Adjust the index that we use for flushing
to the start of the PMD range.
Fixes: c1901cd33cf4 ("page cache: Convert find_get_entries_tag to XArray")
Reported-by: Piotr Balcer
Tested-by: Dan Williams
Signed-off-by: Matthew Wilcox
---
 fs/dax.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 6959837cc465..4116cd9f55dd 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -843,9 +843,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 		struct address_space *mapping, void *entry)
 {
-	unsigned long pfn;
+	unsigned long pfn, index, count;
 	long ret = 0;
-	size_t size;
 
 	/*
 	 * A page got tagged dirty in DAX mapping? Something is seriously
@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	xas_unlock_irq(xas);
 
 	/*
-	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
-	 * in the middle of a PMD, the 'index' we are given will be aligned to
-	 * the start index of the PMD, as will the pfn we pull from 'entry'.
+	 * If dax_writeback_mapping_range() was given a wbc->range_start
+	 * in the middle of a PMD, the 'index' we use needs to be
+	 * aligned to the start of the PMD.
 	 * This allows us to flush for PMD_SIZE and not have to worry about
 	 * partial PMD writebacks.
 	 */
 	pfn = dax_to_pfn(entry);
-	size = PAGE_SIZE << dax_entry_order(entry);
+	count = 1UL << dax_entry_order(entry);
+	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, xas->xa_index, pfn);
-	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
+	dax_entry_mkclean(mapping, index, pfn);
+	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
 	 * cannot be new dirty data in the pfn after the flush has completed as
@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
 	dax_wake_entry(xas, entry, false);
 
-	trace_dax_writeback_one(mapping->host, xas->xa_index,
-			size >> PAGE_SHIFT);
+	trace_dax_writeback_one(mapping->host, xas->xa_index, count);
 
 	return ret;
 
 put_unlocked:
-- 
2.20.1
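
For reference, the fix hinges on rounding the XArray index down to the
first index covered by the multi-order entry.  Below is a minimal
standalone sketch of that arithmetic (not part of the patch; the order
and index values are hypothetical, chosen to match a PMD-sized entry of
512 4KiB pages on x86-64):

	#include <stdio.h>

	int main(void)
	{
		unsigned int order = 9;			/* dax_entry_order() of a PMD entry */
		unsigned long xa_index = 0x210;		/* index left mid-PMD by XArray iteration */
		unsigned long count = 1UL << order;	/* pages covered by the entry: 512 */
		unsigned long index = xa_index & ~(count - 1);	/* round down to PMD start */

		printf("count = %lu, index = 0x%lx\n", count, index);
		/* prints: count = 512, index = 0x200 */
		return 0;
	}

Because 'count' is always a power of two, masking with ~(count - 1)
clears the low bits and yields the first index of the entry, so the
subsequent flush covers the whole PMD rather than a partial range.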