From: "Darrick J. Wong" <djwong@kernel.org>
To: Zhang Yi <yi.zhang@huaweicloud.com>
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, hch@infradead.org,
	brauner@kernel.org, david@fromorbit.com, jack@suse.cz,
	willy@infradead.org, yi.zhang@huawei.com,
	chengzhihao1@huawei.com, yukuai3@huawei.com
Subject: Re: [PATCH v2 6/6] iomap: reduce unnecessary state_lock when setting ifs uptodate and dirty bits
Date: Mon, 12 Aug 2024 09:54:44 -0700
Message-ID: <20240812165444.GG6043@frogsfrogsfrogs>
In-Reply-To: <20240812121159.3775074-7-yi.zhang@huaweicloud.com>

On Mon, Aug 12, 2024 at 08:11:59PM +0800, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@huawei.com>
> 
> When doing a buffered write, we set the uptodate and dirty bits of the
> written range separately, so the ifs->state_lock is taken twice when
> blocksize < folio size, which is redundant. Now that large folios are
> supported, the spinlock can have a larger impact on performance;
> merging the two updates into a single locked section reduces the
> unnecessary locking overhead and gives some performance gain.
> 
> Suggested-by: Dave Chinner <david@fromorbit.com>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>

Seems reasonable to me.
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 38 +++++++++++++++++++++++++++++++++++---
>  1 file changed, 35 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 96600405dbb5..67d7c1c22c98 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -182,6 +182,37 @@ static void iomap_set_range_dirty(struct folio *folio, size_t off, size_t len)
>  		ifs_set_range_dirty(folio, ifs, off, len);
>  }
>  
> +static void ifs_set_range_dirty_uptodate(struct folio *folio,
> +		struct iomap_folio_state *ifs, size_t off, size_t len)
> +{
> +	struct inode *inode = folio->mapping->host;
> +	unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
> +	unsigned int first_blk = (off >> inode->i_blkbits);
> +	unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
> +	unsigned int nr_blks = last_blk - first_blk + 1;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&ifs->state_lock, flags);
> +	bitmap_set(ifs->state, first_blk, nr_blks);
> +	if (ifs_is_fully_uptodate(folio, ifs))
> +		folio_mark_uptodate(folio);
> +	bitmap_set(ifs->state, first_blk + blks_per_folio, nr_blks);
> +	spin_unlock_irqrestore(&ifs->state_lock, flags);
> +}
> +
> +static void iomap_set_range_dirty_uptodate(struct folio *folio,
> +		size_t off, size_t len)
> +{
> +	struct iomap_folio_state *ifs = folio->private;
> +
> +	if (ifs)
> +		ifs_set_range_dirty_uptodate(folio, ifs, off, len);
> +	else
> +		folio_mark_uptodate(folio);
> +
> +	filemap_dirty_folio(folio->mapping, folio);
> +}
> +
>  static struct iomap_folio_state *ifs_alloc(struct inode *inode,
>  		struct folio *folio, unsigned int flags)
>  {
> @@ -851,6 +882,8 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
>  static bool __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
>  		size_t copied, struct folio *folio)
>  {
> +	size_t from = offset_in_folio(folio, pos);
> +
>  	flush_dcache_folio(folio);
>  
>  	/*
> @@ -866,9 +899,8 @@ static bool __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
>  	 */
>  	if (unlikely(copied < len && !folio_test_uptodate(folio)))
>  		return false;
> -	iomap_set_range_uptodate(folio, offset_in_folio(folio, pos), len);
> -	iomap_set_range_dirty(folio, offset_in_folio(folio, pos), copied);
> -	filemap_dirty_folio(inode->i_mapping, folio);
> +
> +	iomap_set_range_dirty_uptodate(folio, from, copied);
>  	return true;
>  }
>  
> -- 
> 2.39.2
> 
> 
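As a minimal userspace sketch of the locking pattern the patch adopts
(assumed names: mark_uptodate_then_dirty, mark_dirty_uptodate,
BLKS_PER_FOLIO; this only illustrates the merged critical section, it is
not the kernel code itself):

#include <pthread.h>
#include <stdbool.h>

#define BLKS_PER_FOLIO	16	/* e.g. a 64k folio with 4k blocks */

struct folio_state {
	pthread_mutex_t lock;			/* stands in for ifs->state_lock */
	bool state[2 * BLKS_PER_FOLIO];		/* [uptodate bits][dirty bits] */
};

static void set_bits(bool *map, unsigned int start, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++)
		map[start + i] = true;
}

/* Old pattern: one critical section per bitmap update. */
static void mark_uptodate_then_dirty(struct folio_state *fs,
		unsigned int first_blk, unsigned int nr_blks)
{
	pthread_mutex_lock(&fs->lock);
	set_bits(fs->state, first_blk, nr_blks);		/* uptodate */
	pthread_mutex_unlock(&fs->lock);

	pthread_mutex_lock(&fs->lock);
	set_bits(fs->state, BLKS_PER_FOLIO + first_blk, nr_blks); /* dirty */
	pthread_mutex_unlock(&fs->lock);
}

/* New pattern: both bit ranges updated under a single lock acquisition. */
static void mark_dirty_uptodate(struct folio_state *fs,
		unsigned int first_blk, unsigned int nr_blks)
{
	pthread_mutex_lock(&fs->lock);
	set_bits(fs->state, first_blk, nr_blks);		/* uptodate */
	set_bits(fs->state, BLKS_PER_FOLIO + first_blk, nr_blks); /* dirty */
	pthread_mutex_unlock(&fs->lock);
}

int main(void)
{
	struct folio_state fs = { .lock = PTHREAD_MUTEX_INITIALIZER };

	mark_uptodate_then_dirty(&fs, 0, 4);	/* two lock round trips */
	mark_dirty_uptodate(&fs, 4, 4);		/* one lock round trip */
	return 0;
}

The single critical section is possible because the dirty bits live in
the second half of the same per-folio state bitmap, offset by
blks_per_folio from the uptodate bits, as the patch's second bitmap_set()
call shows.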

Thread overview: 42+ messages
2024-08-12 12:11 [PATCH v2 0/6] iomap: some minor non-critical fixes and improvements when block size < folio size Zhang Yi
2024-08-12 12:11 ` [PATCH v2 1/6] iomap: correct the range of a partial dirty clear Zhang Yi
2024-08-12 16:33   ` Darrick J. Wong
2024-08-13  2:14     ` Zhang Yi
2024-08-14  1:53     ` Dave Chinner
2024-08-12 12:11 ` [PATCH v2 2/6] iomap: support invalidating partial folios Zhang Yi
2024-08-12 16:55   ` Darrick J. Wong
2024-08-12 12:11 ` [PATCH v2 3/6] iomap: advance the ifs allocation if we have more than one blocks per folio Zhang Yi
2024-08-12 12:47   ` yangerkun
2024-08-13  2:21     ` Zhang Yi
2024-08-14  5:32   ` Christoph Hellwig
2024-08-14  7:08     ` Zhang Yi
2024-08-15  6:00       ` Christoph Hellwig
2024-08-16  1:44         ` Zhang Yi
2024-08-17  4:27     ` Zhang Yi
2024-08-17  4:42       ` Matthew Wilcox
2024-08-17  6:16         ` Zhang Yi
2024-08-12 12:11 ` [PATCH v2 4/6] iomap: correct the dirty length in page mkwrite Zhang Yi
2024-08-12 16:45   ` Darrick J. Wong
2024-08-13  2:49     ` Zhang Yi
2024-08-14  5:36   ` Christoph Hellwig
2024-08-14  7:49     ` Zhang Yi
2024-08-15  5:59       ` Christoph Hellwig
2024-08-16  2:19         ` Zhang Yi
2024-08-17  4:45   ` Matthew Wilcox
2024-08-17  6:43     ` Zhang Yi
2024-08-12 12:11 ` [PATCH v2 5/6] iomap: don't mark blocks uptodate after partial zeroing Zhang Yi
2024-08-12 16:49   ` Darrick J. Wong
2024-08-13  3:01     ` Zhang Yi
2024-08-14  5:39   ` Christoph Hellwig
2024-08-17  4:48   ` Matthew Wilcox
2024-08-17  7:16     ` Zhang Yi
2024-08-12 12:11 ` [PATCH v2 6/6] iomap: reduce unnecessary state_lock when setting ifs uptodate and dirty bits Zhang Yi
2024-08-12 16:54   ` Darrick J. Wong [this message]
2024-08-12 17:00   ` Matthew Wilcox
2024-08-13  8:15     ` Zhang Yi
2024-08-14  1:49 ` [PATCH v2 0/6] iomap: some minor non-critical fixes and improvements when block size < folio size Dave Chinner
2024-08-14  2:14   ` Zhang Yi
2024-08-14  2:47     ` Dave Chinner
2024-08-14  3:57       ` Zhang Yi
2024-08-14  5:16         ` Dave Chinner
2024-08-14  6:32           ` Zhang Yi
