From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 10/14] iomap: Handle THPs when writing to pages
Date: Wed, 14 Oct 2020 04:03:53 +0100
Message-Id: <20201014030357.21898-11-willy@infradead.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20201014030357.21898-1-willy@infradead.org>
References: <20201014030357.21898-1-willy@infradead.org>

If we come across a THP that is not uptodate when writing to the page
cache, this must be due to a readahead error, so behave the same way
as readpage and split it.  Make sure to flush the right page after
completing the write.  We still only copy up to a page boundary, so
there's no need to flush multiple pages at this time.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 397795db3ce5..0a1fe7d1a27c 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -682,12 +682,19 @@ static ssize_t iomap_write_begin(struct inode *inode, loff_t pos, loff_t len,
 		return status;
 	}
 
+retry:
 	page = grab_cache_page_write_begin(inode->i_mapping, pos >> PAGE_SHIFT,
 			AOP_FLAG_NOFS);
 	if (!page) {
 		status = -ENOMEM;
 		goto out_no_page;
 	}
+	if (PageTransCompound(page) && !PageUptodate(page)) {
+		if (iomap_split_page(inode, page) == AOP_TRUNCATED_PAGE) {
+			put_page(page);
+			goto retry;
+		}
+	}
 	page = thp_head(page);
 	offset = offset_in_thp(page, pos);
 	if (len > thp_size(page) - offset)
@@ -724,6 +731,7 @@ iomap_set_page_dirty(struct page *page)
 	struct address_space *mapping = page_mapping(page);
 	int newly_dirty;
 
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
 	if (unlikely(!mapping))
 		return !TestSetPageDirty(page);
 
@@ -746,7 +754,9 @@ EXPORT_SYMBOL_GPL(iomap_set_page_dirty);
 static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 		size_t copied, struct page *page)
 {
-	flush_dcache_page(page);
+	size_t offset = offset_in_thp(page, pos);
+
+	flush_dcache_page(page + offset / PAGE_SIZE);
 
 	/*
 	 * The blocks that were entirely written will now be uptodate, so we
@@ -761,7 +771,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, offset_in_page(pos), len);
+	iomap_set_range_uptodate(page, offset, len);
 	iomap_set_page_dirty(page);
 	return copied;
 }
@@ -837,6 +847,10 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		unsigned long bytes;	/* Bytes to write to page */
 		size_t copied;		/* Bytes copied from user */
 
+		/*
+		 * XXX: We don't know what size page we'll find in the
+		 * page cache, so only copy up to a regular page boundary.
+		 */
 		offset = offset_in_page(pos);
 		bytes = min_t(unsigned long, PAGE_SIZE - offset,
 						iov_iter_count(i));
@@ -867,7 +881,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		offset = offset_in_thp(page, pos);
 
 		if (mapping_writably_mapped(inode->i_mapping))
-			flush_dcache_page(page);
+			flush_dcache_page(page + offset / PAGE_SIZE);
 
 		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
 
-- 
2.28.0
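
As a footnote on the "flush the right page" wording: the struct pages of a
compound page are laid out contiguously, so the PAGE_SIZE subpage containing
a given file position can be reached by indexing from the head page. Below is
a minimal sketch of that arithmetic, assuming the thp_head() and
offset_in_thp() helpers this series builds on are available; the wrapper
function itself is purely illustrative and not part of the patch, which does
the same calculation inline in __iomap_write_end() and iomap_write_actor().

#include <linux/mm.h>		/* thp_head(), offset_in_thp(), PAGE_SIZE */
#include <linux/highmem.h>	/* flush_dcache_page() */

/*
 * Sketch only: flush the single PAGE_SIZE subpage of a (possibly huge)
 * page cache page that a write at @pos just dirtied.
 */
static void flush_written_subpage(struct page *page, loff_t pos)
{
	/* Normalise to the head page, then find the byte offset within the THP. */
	struct page *head = thp_head(page);
	size_t offset = offset_in_thp(head, pos);

	/*
	 * A compound page's struct pages are contiguous, so head + n is
	 * its n'th PAGE_SIZE subpage; flush only the one that was written.
	 */
	flush_dcache_page(head + offset / PAGE_SIZE);
}

Since the copy loop still stops at a PAGE_SIZE boundary, a single write never
straddles two subpages, which is why flushing exactly one page is enough here.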