public inbox for linux-xfs@vger.kernel.org
From: "Darrick J. Wong" <djwong@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Carlos Maiolino <cem@kernel.org>,
	Dave Chinner <dchinner@redhat.com>,
	linux-xfs@vger.kernel.org
Subject: Re: [PATCH 05/12] xfs: refactor backing memory allocations for buffers
Date: Wed, 26 Feb 2025 09:08:47 -0800	[thread overview]
Message-ID: <20250226170847.GP6242@frogsfrogsfrogs> (raw)
In-Reply-To: <20250226155245.513494-6-hch@lst.de>

On Wed, Feb 26, 2025 at 07:51:33AM -0800, Christoph Hellwig wrote:
> Lift handling of shmem and slab backed buffers into xfs_buf_alloc_pages
> and rename the result to xfs_buf_alloc_backing_mem.  This shares more
> code and ensures uncached buffers can also use slab, which slightly
> reduces the memory usage of growfs on 512 byte sector size file systems,
> but more importantly means the allocation invariants are the same for
> cached and uncached buffers.  Document these new invariants with a big
> fat comment mostly stolen from a patch by Dave Chinner.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

That's a nice refactoring.  I'd wondered about get_buf_uncached not
going for slab memory when I was writing the xmbuf code, but didn't want
to overcomplicate that patchset and then forgot about it :/

Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>

--D

> ---
>  fs/xfs/xfs_buf.c | 55 +++++++++++++++++++++++++++++++-----------------
>  1 file changed, 36 insertions(+), 19 deletions(-)
> 
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index af1389ebdd69..e8783ee23623 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -329,19 +329,49 @@ xfs_buf_alloc_kmem(
>  	return 0;
>  }
>  
> +/*
> + * Allocate backing memory for a buffer.
> + *
> + * For tmpfs-backed buffers used by in-memory btrees this directly maps the
> + * tmpfs page cache folios.
> + *
> + * For real file system buffers there are two different kinds of backing memory:
> + *
> + * The first type backs the buffer by a kmalloc allocation.  This is done for
> + * less than PAGE_SIZE allocations to avoid wasting memory.
> + *
> + * The second type of buffer is the multi-page buffer. These are always made
> + * up of single pages so that they can be fed to vm_map_ram() to return a
> + * contiguous memory region through which we can access the data, or marked
> + * as XBF_UNMAPPED so that the data is accessed directly through individual
> + * page_address() calls.
> + */
>  static int
> -xfs_buf_alloc_pages(
> +xfs_buf_alloc_backing_mem(
>  	struct xfs_buf	*bp,
>  	xfs_buf_flags_t	flags)
>  {
> +	size_t		size = BBTOB(bp->b_length);
>  	gfp_t		gfp_mask = GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOWARN;
>  	long		filled = 0;
>  
> +	if (xfs_buftarg_is_mem(bp->b_target))
> +		return xmbuf_map_page(bp);
> +
> +	/*
> +	 * For buffers that fit entirely within a single page, first attempt to
> +	 * allocate the memory from the heap to minimise memory usage.  If we
> +	 * can't get heap memory for these small buffers, we fall back to using
> +	 * the page allocator.
> +	 */
> +	if (size < PAGE_SIZE && xfs_buf_alloc_kmem(bp, flags) == 0)
> +		return 0;
> +
>  	if (flags & XBF_READ_AHEAD)
>  		gfp_mask |= __GFP_NORETRY;
>  
>  	/* Make sure that we have a page list */
> -	bp->b_page_count = DIV_ROUND_UP(BBTOB(bp->b_length), PAGE_SIZE);
> +	bp->b_page_count = DIV_ROUND_UP(size, PAGE_SIZE);
>  	if (bp->b_page_count <= XB_PAGES) {
>  		bp->b_pages = bp->b_page_array;
>  	} else {
> @@ -622,18 +652,7 @@ xfs_buf_find_insert(
>  	if (error)
>  		goto out_drop_pag;
>  
> -	if (xfs_buftarg_is_mem(new_bp->b_target)) {
> -		error = xmbuf_map_page(new_bp);
> -	} else if (BBTOB(new_bp->b_length) >= PAGE_SIZE ||
> -		   xfs_buf_alloc_kmem(new_bp, flags) < 0) {
> -		/*
> -		 * For buffers that fit entirely within a single page, first
> -		 * attempt to allocate the memory from the heap to minimise
> -		 * memory usage. If we can't get heap memory for these small
> -		 * buffers, we fall back to using the page allocator.
> -		 */
> -		error = xfs_buf_alloc_pages(new_bp, flags);
> -	}
> +	error = xfs_buf_alloc_backing_mem(new_bp, flags);
>  	if (error)
>  		goto out_free_buf;
>  
> @@ -995,14 +1014,12 @@ xfs_buf_get_uncached(
>  	if (error)
>  		return error;
>  
> -	if (xfs_buftarg_is_mem(bp->b_target))
> -		error = xmbuf_map_page(bp);
> -	else
> -		error = xfs_buf_alloc_pages(bp, flags);
> +	error = xfs_buf_alloc_backing_mem(bp, flags);
>  	if (error)
>  		goto fail_free_buf;
>  
> -	error = _xfs_buf_map_pages(bp, 0);
> +	if (!bp->b_addr)
> +		error = _xfs_buf_map_pages(bp, 0);
>  	if (unlikely(error)) {
>  		xfs_warn(target->bt_mount,
>  			"%s: failed to map pages", __func__);
> -- 
> 2.45.2
> 
> 

Thread overview: 28+ messages
2025-02-26 15:51 use folios and vmalloc for buffer cache backing memory Christoph Hellwig
2025-02-26 15:51 ` [PATCH 01/12] xfs: unmapped buffer item size straddling mismatch Christoph Hellwig
2025-02-26 15:51 ` [PATCH 02/12] xfs: add a fast path to xfs_buf_zero when b_addr is set Christoph Hellwig
2025-02-26 17:00   ` Darrick J. Wong
2025-02-26 15:51 ` [PATCH 03/12] xfs: remove xfs_buf.b_offset Christoph Hellwig
2025-02-26 17:00   ` Darrick J. Wong
2025-02-26 15:51 ` [PATCH 04/12] xfs: remove xfs_buf_is_vmapped Christoph Hellwig
2025-02-26 17:02   ` Darrick J. Wong
2025-02-26 15:51 ` [PATCH 05/12] xfs: refactor backing memory allocations for buffers Christoph Hellwig
2025-02-26 17:08   ` Darrick J. Wong [this message]
2025-02-26 15:51 ` [PATCH 06/12] xfs: remove the kmalloc to page allocator fallback Christoph Hellwig
2025-02-26 17:22   ` Darrick J. Wong
2025-03-04 14:05     ` Christoph Hellwig
2025-02-26 15:51 ` [PATCH 07/12] xfs: convert buffer cache to use high order folios Christoph Hellwig
2025-02-26 17:33   ` Darrick J. Wong
2025-03-04 14:06     ` Christoph Hellwig
2025-02-26 15:51 ` [PATCH 08/12] xfs: kill XBF_UNMAPPED Christoph Hellwig
2025-02-26 15:51 ` [PATCH 09/12] xfs: buffer items don't straddle pages anymore Christoph Hellwig
2025-02-26 15:51 ` [PATCH 10/12] xfs: use vmalloc instead of vm_map_area for buffer backing memory Christoph Hellwig
2025-02-26 18:02   ` Darrick J. Wong
2025-03-04 14:10     ` Christoph Hellwig
2025-02-26 15:51 ` [PATCH 11/12] xfs: cleanup mapping tmpfs folios into the buffer cache Christoph Hellwig
2025-02-26 17:39   ` Darrick J. Wong
2025-03-04 14:11     ` Christoph Hellwig
2025-02-26 15:51 ` [PATCH 12/12] xfs: trace what memory backs a buffer Christoph Hellwig
2025-02-26 16:45   ` Darrick J. Wong
  -- strict thread matches above, loose matches on Subject: below --
2025-03-05 14:05 use folios and vmalloc for buffer cache backing memory v2 Christoph Hellwig
2025-03-05 14:05 ` [PATCH 05/12] xfs: refactor backing memory allocations for buffers Christoph Hellwig
2025-03-10 13:19 use folios and vmalloc for buffer cache backing memory v3 Christoph Hellwig
2025-03-10 13:19 ` [PATCH 05/12] xfs: refactor backing memory allocations for buffers Christoph Hellwig
