Date: Mon, 12 Feb 2024 21:46:39 -0800
From: Christoph Hellwig <hch@infradead.org>
To: Zhang Yi
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, tytso@mit.edu,
	adilger.kernel@dilger.ca, jack@suse.cz, ritesh.list@gmail.com,
	hch@infradead.org, djwong@kernel.org, willy@infradead.org,
	zokeefe@google.com, yi.zhang@huawei.com, chengzhihao1@huawei.com,
	yukuai3@huawei.com, wangkefeng.wang@huawei.com
Subject: Re: [RFC PATCH v3 07/26] iomap: don't increase i_size if it's not a write operation
References: <20240127015825.1608160-1-yi.zhang@huaweicloud.com> <20240127015825.1608160-8-yi.zhang@huaweicloud.com>
In-Reply-To: <20240127015825.1608160-8-yi.zhang@huaweicloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Wouldn't it make more sense to just move the size manipulation to the
write-only code?  An untested version of that is below.  With this the
naming of the status variable becomes even more confusing than it
already is; maybe we need to do a cleanup of the *_write_end calling
conventions, as it always returns the passed-in copied value or 0.

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 3dab060aed6d7b..8401a9ca702fc0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -876,34 +876,13 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len,
 		size_t copied, struct folio *folio)
 {
 	const struct iomap *srcmap = iomap_iter_srcmap(iter);
-	loff_t old_size = iter->inode->i_size;
-	size_t ret;
-
-	if (srcmap->type == IOMAP_INLINE) {
-		ret = iomap_write_end_inline(iter, folio, pos, copied);
-	} else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) {
-		ret = block_write_end(NULL, iter->inode->i_mapping, pos, len,
-				copied, &folio->page, NULL);
-	} else {
-		ret = __iomap_write_end(iter->inode, pos, len, copied, folio);
-	}
-
-	/*
-	 * Update the in-memory inode size after copying the data into the page
-	 * cache.  It's up to the file system to write the updated size to disk,
-	 * preferably after I/O completion so that no stale data is exposed.
-	 */
-	if (pos + ret > old_size) {
-		i_size_write(iter->inode, pos + ret);
-		iter->iomap.flags |= IOMAP_F_SIZE_CHANGED;
-	}
-	__iomap_put_folio(iter, pos, ret, folio);
-
-	if (old_size < pos)
-		pagecache_isize_extended(iter->inode, old_size, pos);
-	if (ret < len)
-		iomap_write_failed(iter->inode, pos + ret, len - ret);
-	return ret;
+	if (srcmap->type == IOMAP_INLINE)
+		return iomap_write_end_inline(iter, folio, pos, copied);
+	if (srcmap->flags & IOMAP_F_BUFFER_HEAD)
+		return block_write_end(NULL, iter->inode->i_mapping, pos, len,
+				copied, &folio->page, NULL);
+	return __iomap_write_end(iter->inode, pos, len, copied, folio);
 }
 
 static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
@@ -918,6 +897,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 
 	do {
 		struct folio *folio;
+		loff_t old_size;
 		size_t offset;		/* Offset into folio */
 		size_t bytes;		/* Bytes to write to folio */
 		size_t copied;		/* Bytes copied from user */
@@ -964,7 +944,24 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 		copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
 		status = iomap_write_end(iter, pos, bytes, copied, folio);
 
+		/*
+		 * Update the in-memory inode size after copying the data into
+		 * the page cache.  It's up to the file system to write the
+		 * updated size to disk, preferably after I/O completion so that
+		 * no stale data is exposed.
+		 */
+		old_size = iter->inode->i_size;
+		if (pos + status > old_size) {
+			i_size_write(iter->inode, pos + status);
+			iter->iomap.flags |= IOMAP_F_SIZE_CHANGED;
+		}
+		__iomap_put_folio(iter, pos, status, folio);
+
+		if (old_size < pos)
+			pagecache_isize_extended(iter->inode, old_size, pos);
+		if (status < bytes)
+			iomap_write_failed(iter->inode, pos + status,
+					bytes - status);
 		if (unlikely(copied != status))
 			iov_iter_revert(i, copied - status);
@@ -1334,6 +1331,7 @@ static loff_t iomap_unshare_iter(struct iomap_iter *iter)
 			bytes = folio_size(folio) - offset;
 
 		bytes = iomap_write_end(iter, pos, bytes, bytes, folio);
+		__iomap_put_folio(iter, pos, bytes, folio);
 		if (WARN_ON_ONCE(bytes == 0))
 			return -EIO;
@@ -1398,6 +1396,7 @@ static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
 		folio_mark_accessed(folio);
 
 		bytes = iomap_write_end(iter, pos, bytes, bytes, folio);
+		__iomap_put_folio(iter, pos, bytes, folio);
 		if (WARN_ON_ONCE(bytes == 0))
 			return -EIO;