public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping
@ 2025-04-30  8:41 Hans Holmberg
  2025-04-30  8:41 ` [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure Hans Holmberg
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Hans Holmberg @ 2025-04-30  8:41 UTC (permalink / raw)
  To: linux-xfs@vger.kernel.org
  Cc: Carlos Maiolino, Dave Chinner, Darrick J . Wong, hch,
	linux-kernel@vger.kernel.org, Hans Holmberg

These patches clean up the xfs mru code a bit and add a cache for
keeping track of which zone an inode last allocated data to. Placing
file data in the same zone helps reduce write amplification.

Sending out as an RFC to get comments, specifically about the potential
mru lock contention when doing the lookup during allocation.

Can we do something better there?
I'll look into benchmarking the overhead, but any suggestions on how
best to do this would be helpful.

Christoph Hellwig (1):
  xfs: free the item in xfs_mru_cache_insert on failure

Hans Holmberg (1):
  xfs: add inode to zone caching for data placement

 fs/xfs/xfs_filestream.c |  15 ++----
 fs/xfs/xfs_mount.h      |   1 +
 fs/xfs/xfs_mru_cache.c  |  15 ++++--
 fs/xfs/xfs_zone_alloc.c | 109 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 126 insertions(+), 14 deletions(-)

-- 
2.34.1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure
  2025-04-30  8:41 [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping Hans Holmberg
@ 2025-04-30  8:41 ` Hans Holmberg
  2025-05-02 20:06   ` Darrick J. Wong
  2025-04-30  8:41 ` [RFC PATCH 2/2] xfs: add inode to zone caching for data placement Hans Holmberg
  2025-05-05  5:57 ` [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping hch
  2 siblings, 1 reply; 9+ messages in thread
From: Hans Holmberg @ 2025-04-30  8:41 UTC (permalink / raw)
  To: linux-xfs@vger.kernel.org
  Cc: Carlos Maiolino, Dave Chinner, Darrick J . Wong, hch,
	linux-kernel@vger.kernel.org, Hans Holmberg

From: Christoph Hellwig <hch@lst.de>

Call the provided free_func when xfs_mru_cache_insert fails, as that's
what the callers need to do anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
---
 fs/xfs/xfs_filestream.c | 15 ++++-----------
 fs/xfs/xfs_mru_cache.c  | 15 ++++++++++++---
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c
index a961aa420c48..044918fbae06 100644
--- a/fs/xfs/xfs_filestream.c
+++ b/fs/xfs/xfs_filestream.c
@@ -304,11 +304,9 @@ xfs_filestream_create_association(
 	 * for us, so all we need to do here is take another active reference to
 	 * the perag for the cached association.
 	 *
-	 * If we fail to store the association, we need to drop the fstrms
-	 * counter as well as drop the perag reference we take here for the
-	 * item. We do not need to return an error for this failure - as long as
-	 * we return a referenced AG, the allocation can still go ahead just
-	 * fine.
+	 * If we fail to store the association, we do not need to return an
+	 * error for this failure - as long as we return a referenced AG, the
+	 * allocation can still go ahead just fine.
 	 */
 	item = kmalloc(sizeof(*item), GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!item)
@@ -316,14 +314,9 @@ xfs_filestream_create_association(
 
 	atomic_inc(&pag_group(args->pag)->xg_active_ref);
 	item->pag = args->pag;
-	error = xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
-	if (error)
-		goto out_free_item;
+	xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
 	return 0;
 
-out_free_item:
-	xfs_perag_rele(item->pag);
-	kfree(item);
 out_put_fstrms:
 	atomic_dec(&args->pag->pagf_fstrms);
 	return 0;
diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
index d0f5b403bdbe..08443ceec329 100644
--- a/fs/xfs/xfs_mru_cache.c
+++ b/fs/xfs/xfs_mru_cache.c
@@ -414,6 +414,8 @@ xfs_mru_cache_destroy(
  * To insert an element, call xfs_mru_cache_insert() with the data store, the
  * element's key and the client data pointer.  This function returns 0 on
  * success or ENOMEM if memory for the data element couldn't be allocated.
+ *
+ * The passed in elem is freed through the per-cache free_func on failure.
  */
 int
 xfs_mru_cache_insert(
@@ -421,14 +423,15 @@ xfs_mru_cache_insert(
 	unsigned long		key,
 	struct xfs_mru_cache_elem *elem)
 {
-	int			error;
+	int			error = -EINVAL;
 
 	ASSERT(mru && mru->lists);
 	if (!mru || !mru->lists)
-		return -EINVAL;
+		goto out_free;
 
+	error = -ENOMEM;
 	if (radix_tree_preload(GFP_KERNEL))
-		return -ENOMEM;
+		goto out_free;
 
 	INIT_LIST_HEAD(&elem->list_node);
 	elem->key = key;
@@ -440,6 +443,12 @@ xfs_mru_cache_insert(
 		_xfs_mru_cache_list_insert(mru, elem);
 	spin_unlock(&mru->lock);
 
+	if (error)
+		goto out_free;
+	return 0;
+
+out_free:
+	mru->free_func(mru->data, elem);
 	return error;
 }
 
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [RFC PATCH 2/2] xfs: add inode to zone caching for data placement
  2025-04-30  8:41 [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping Hans Holmberg
  2025-04-30  8:41 ` [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure Hans Holmberg
@ 2025-04-30  8:41 ` Hans Holmberg
  2025-05-02 20:04   ` Darrick J. Wong
  2025-05-05  5:57 ` [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping hch
  2 siblings, 1 reply; 9+ messages in thread
From: Hans Holmberg @ 2025-04-30  8:41 UTC (permalink / raw)
  To: linux-xfs@vger.kernel.org
  Cc: Carlos Maiolino, Dave Chinner, Darrick J . Wong, hch,
	linux-kernel@vger.kernel.org, Hans Holmberg

Placing data from the same file in the same zone is a great heuristic
for reducing write amplification and we do this already - but only
for sequential writes.

To support placing data in the same way for random writes, reuse the
xfs mru cache to map inodes to open zones on first write. If a mapping
is present, use the open zone for data placement for this file until
the zone is full.

Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
---
 fs/xfs/xfs_mount.h      |   1 +
 fs/xfs/xfs_zone_alloc.c | 109 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)

diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index e5192c12e7ac..f90c0a16766f 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -230,6 +230,7 @@ typedef struct xfs_mount {
 	bool			m_update_sb;	/* sb needs update in mount */
 	unsigned int		m_max_open_zones;
 	unsigned int		m_zonegc_low_space;
+	struct xfs_mru_cache	*m_zone_cache;  /* Inode to open zone cache */
 
 	/*
 	 * Bitsets of per-fs metadata that have been checked and/or are sick.
diff --git a/fs/xfs/xfs_zone_alloc.c b/fs/xfs/xfs_zone_alloc.c
index d509e49b2aaa..80add26c0111 100644
--- a/fs/xfs/xfs_zone_alloc.c
+++ b/fs/xfs/xfs_zone_alloc.c
@@ -24,6 +24,7 @@
 #include "xfs_zone_priv.h"
 #include "xfs_zones.h"
 #include "xfs_trace.h"
+#include "xfs_mru_cache.h"
 
 void
 xfs_open_zone_put(
@@ -796,6 +797,100 @@ xfs_submit_zoned_bio(
 	submit_bio(&ioend->io_bio);
 }
 
+/*
+ * Cache the last zone written to for an inode so that it is considered first
+ * for subsequent writes.
+ */
+struct xfs_zone_cache_item {
+	struct xfs_mru_cache_elem	mru;
+	struct xfs_open_zone		*oz;
+};
+
+static inline struct xfs_zone_cache_item *
+xfs_zone_cache_item(struct xfs_mru_cache_elem *mru)
+{
+	return container_of(mru, struct xfs_zone_cache_item, mru);
+}
+
+static void
+xfs_zone_cache_free_func(
+	void				*data,
+	struct xfs_mru_cache_elem	*mru)
+{
+	struct xfs_zone_cache_item	*item = xfs_zone_cache_item(mru);
+
+	xfs_open_zone_put(item->oz);
+	kfree(item);
+}
+
+/*
+ * Check if we have a cached last open zone available for the inode and
+ * if yes return a reference to it.
+ */
+static struct xfs_open_zone *
+xfs_cached_zone(
+	struct xfs_mount		*mp,
+	struct xfs_inode		*ip)
+{
+	struct xfs_mru_cache_elem	*mru;
+	struct xfs_open_zone		*oz;
+
+	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
+	if (!mru)
+		return NULL;
+	oz = xfs_zone_cache_item(mru)->oz;
+	if (oz) {
+		/*
+		 * GC only steals open zones at mount time, so no GC zones
+		 * should end up in the cache.
+		 */
+		ASSERT(!oz->oz_is_gc);
+		ASSERT(atomic_read(&oz->oz_ref) > 0);
+		atomic_inc(&oz->oz_ref);
+	}
+	xfs_mru_cache_done(mp->m_zone_cache);
+	return oz;
+}
+
+/*
+ * Update the last used zone cache for a given inode.
+ *
+ * The caller must have a reference on the open zone.
+ */
+static void
+xfs_zone_cache_create_association(
+	struct xfs_inode		*ip,
+	struct xfs_open_zone		*oz)
+{
+	struct xfs_mount		*mp = ip->i_mount;
+	struct xfs_zone_cache_item	*item = NULL;
+	struct xfs_mru_cache_elem	*mru;
+
+	ASSERT(atomic_read(&oz->oz_ref) > 0);
+	atomic_inc(&oz->oz_ref);
+
+	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
+	if (mru) {
+		/*
+		 * If we have an association already, update it to point to the
+		 * new zone.
+		 */
+		item = xfs_zone_cache_item(mru);
+		xfs_open_zone_put(item->oz);
+		item->oz = oz;
+		xfs_mru_cache_done(mp->m_zone_cache);
+		return;
+	}
+
+	item = kmalloc(sizeof(*item), GFP_KERNEL);
+	if (!item) {
+		xfs_open_zone_put(oz);
+		return;
+	}
+	item->oz = oz;
+	xfs_mru_cache_insert(mp->m_zone_cache, ip->i_ino, &item->mru);
+}
+
 void
 xfs_zone_alloc_and_submit(
 	struct iomap_ioend	*ioend,
@@ -819,11 +914,16 @@ xfs_zone_alloc_and_submit(
 	 */
 	if (!*oz && ioend->io_offset)
 		*oz = xfs_last_used_zone(ioend);
+	if (!*oz)
+		*oz = xfs_cached_zone(mp, ip);
+
 	if (!*oz) {
 select_zone:
 		*oz = xfs_select_zone(mp, write_hint, pack_tight);
 		if (!*oz)
 			goto out_error;
+
+		xfs_zone_cache_create_association(ip, *oz);
 	}
 
 	alloc_len = xfs_zone_alloc_blocks(*oz, XFS_B_TO_FSB(mp, ioend->io_size),
@@ -1211,6 +1311,14 @@ xfs_mount_zones(
 	error = xfs_zone_gc_mount(mp);
 	if (error)
 		goto out_free_zone_info;
+
+	/*
+	 * Set up an mru cache to track inode to open zone mappings for data
+	 * placement purposes. The magic values for group count and lifetime
+	 * are the same as the defaults for file streams, which seems sane enough.
+	 */
+	xfs_mru_cache_create(&mp->m_zone_cache, mp,
+			5000, 10, xfs_zone_cache_free_func);
 	return 0;
 
 out_free_zone_info:
@@ -1224,4 +1332,5 @@ xfs_unmount_zones(
 {
 	xfs_zone_gc_unmount(mp);
 	xfs_free_zone_info(mp->m_zone_info);
+	xfs_mru_cache_destroy(mp->m_zone_cache);
 }
-- 
2.34.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 2/2] xfs: add inode to zone caching for data placement
  2025-04-30  8:41 ` [RFC PATCH 2/2] xfs: add inode to zone caching for data placement Hans Holmberg
@ 2025-05-02 20:04   ` Darrick J. Wong
  2025-05-05  5:55     ` hch
  0 siblings, 1 reply; 9+ messages in thread
From: Darrick J. Wong @ 2025-05-02 20:04 UTC (permalink / raw)
  To: Hans Holmberg
  Cc: linux-xfs@vger.kernel.org, Carlos Maiolino, Dave Chinner, hch,
	linux-kernel@vger.kernel.org

On Wed, Apr 30, 2025 at 08:41:21AM +0000, Hans Holmberg wrote:
> Placing data from the same file in the same zone is a great heuristic
> for reducing write amplification and we do this already - but only
> for sequential writes.
> 
> To support placing data in the same way for random writes, reuse the
> xfs mru cache to map inodes to open zones on first write. If a mapping
> is present, use the open zone for data placement for this file until
> the zone is full.
> 
> Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>

It seems like a decent idea to try to land random writes to the same
file in the same zone.  This helps us reduce seeking out of the zone on
subsequent reads, right?

If so, then I've understood the purpose, and:
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>

--D

> ---
>  fs/xfs/xfs_mount.h      |   1 +
>  fs/xfs/xfs_zone_alloc.c | 109 ++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 110 insertions(+)
> 
> diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
> index e5192c12e7ac..f90c0a16766f 100644
> --- a/fs/xfs/xfs_mount.h
> +++ b/fs/xfs/xfs_mount.h
> @@ -230,6 +230,7 @@ typedef struct xfs_mount {
>  	bool			m_update_sb;	/* sb needs update in mount */
>  	unsigned int		m_max_open_zones;
>  	unsigned int		m_zonegc_low_space;
> +	struct xfs_mru_cache	*m_zone_cache;  /* Inode to open zone cache */
>  
>  	/*
>  	 * Bitsets of per-fs metadata that have been checked and/or are sick.
> diff --git a/fs/xfs/xfs_zone_alloc.c b/fs/xfs/xfs_zone_alloc.c
> index d509e49b2aaa..80add26c0111 100644
> --- a/fs/xfs/xfs_zone_alloc.c
> +++ b/fs/xfs/xfs_zone_alloc.c
> @@ -24,6 +24,7 @@
>  #include "xfs_zone_priv.h"
>  #include "xfs_zones.h"
>  #include "xfs_trace.h"
> +#include "xfs_mru_cache.h"
>  
>  void
>  xfs_open_zone_put(
> @@ -796,6 +797,100 @@ xfs_submit_zoned_bio(
>  	submit_bio(&ioend->io_bio);
>  }
>  
> +/*
> + * Cache the last zone written to for an inode so that it is considered first
> + * for subsequent writes.
> + */
> +struct xfs_zone_cache_item {
> +	struct xfs_mru_cache_elem	mru;
> +	struct xfs_open_zone		*oz;
> +};
> +
> +static inline struct xfs_zone_cache_item *
> +xfs_zone_cache_item(struct xfs_mru_cache_elem *mru)
> +{
> +	return container_of(mru, struct xfs_zone_cache_item, mru);
> +}
> +
> +static void
> +xfs_zone_cache_free_func(
> +	void				*data,
> +	struct xfs_mru_cache_elem	*mru)
> +{
> +	struct xfs_zone_cache_item	*item = xfs_zone_cache_item(mru);
> +
> +	xfs_open_zone_put(item->oz);
> +	kfree(item);
> +}
> +
> +/*
> + * Check if we have a cached last open zone available for the inode and
> + * if yes return a reference to it.
> + */
> +static struct xfs_open_zone *
> +xfs_cached_zone(
> +	struct xfs_mount		*mp,
> +	struct xfs_inode		*ip)
> +{
> +	struct xfs_mru_cache_elem	*mru;
> +	struct xfs_open_zone		*oz;
> +
> +	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
> +	if (!mru)
> +		return NULL;
> +	oz = xfs_zone_cache_item(mru)->oz;
> +	if (oz) {
> +		/*
> +		 * GC only steals open zones at mount time, so no GC zones
> +		 * should end up in the cache.
> +		 */
> +		ASSERT(!oz->oz_is_gc);
> +		ASSERT(atomic_read(&oz->oz_ref) > 0);
> +		atomic_inc(&oz->oz_ref);
> +	}
> +	xfs_mru_cache_done(mp->m_zone_cache);
> +	return oz;
> +}
> +
> +/*
> + * Update the last used zone cache for a given inode.
> + *
> + * The caller must have a reference on the open zone.
> + */
> +static void
> +xfs_zone_cache_create_association(
> +	struct xfs_inode		*ip,
> +	struct xfs_open_zone		*oz)
> +{
> +	struct xfs_mount		*mp = ip->i_mount;
> +	struct xfs_zone_cache_item	*item = NULL;
> +	struct xfs_mru_cache_elem	*mru;
> +
> +	ASSERT(atomic_read(&oz->oz_ref) > 0);
> +	atomic_inc(&oz->oz_ref);
> +
> +	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
> +	if (mru) {
> +		/*
> +		 * If we have an association already, update it to point to the
> +		 * new zone.
> +		 */
> +		item = xfs_zone_cache_item(mru);
> +		xfs_open_zone_put(item->oz);
> +		item->oz = oz;
> +		xfs_mru_cache_done(mp->m_zone_cache);
> +		return;
> +	}
> +
> +	item = kmalloc(sizeof(*item), GFP_KERNEL);
> +	if (!item) {
> +		xfs_open_zone_put(oz);
> +		return;
> +	}
> +	item->oz = oz;
> +	xfs_mru_cache_insert(mp->m_zone_cache, ip->i_ino, &item->mru);
> +}
> +
>  void
>  xfs_zone_alloc_and_submit(
>  	struct iomap_ioend	*ioend,
> @@ -819,11 +914,16 @@ xfs_zone_alloc_and_submit(
>  	 */
>  	if (!*oz && ioend->io_offset)
>  		*oz = xfs_last_used_zone(ioend);
> +	if (!*oz)
> +		*oz = xfs_cached_zone(mp, ip);
> +
>  	if (!*oz) {
>  select_zone:
>  		*oz = xfs_select_zone(mp, write_hint, pack_tight);
>  		if (!*oz)
>  			goto out_error;
> +
> +		xfs_zone_cache_create_association(ip, *oz);
>  	}
>  
>  	alloc_len = xfs_zone_alloc_blocks(*oz, XFS_B_TO_FSB(mp, ioend->io_size),
> @@ -1211,6 +1311,14 @@ xfs_mount_zones(
>  	error = xfs_zone_gc_mount(mp);
>  	if (error)
>  		goto out_free_zone_info;
> +
> +	/*
> +	 * Set up an mru cache to track inode to open zone mappings for data
> +	 * placement purposes. The magic values for group count and lifetime
> +	 * are the same as the defaults for file streams, which seems sane enough.
> +	 */
> +	xfs_mru_cache_create(&mp->m_zone_cache, mp,
> +			5000, 10, xfs_zone_cache_free_func);
>  	return 0;
>  
>  out_free_zone_info:
> @@ -1224,4 +1332,5 @@ xfs_unmount_zones(
>  {
>  	xfs_zone_gc_unmount(mp);
>  	xfs_free_zone_info(mp->m_zone_info);
> +	xfs_mru_cache_destroy(mp->m_zone_cache);
>  }
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure
  2025-04-30  8:41 ` [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure Hans Holmberg
@ 2025-05-02 20:06   ` Darrick J. Wong
  2025-05-05  5:45     ` hch
  0 siblings, 1 reply; 9+ messages in thread
From: Darrick J. Wong @ 2025-05-02 20:06 UTC (permalink / raw)
  To: Hans Holmberg
  Cc: linux-xfs@vger.kernel.org, Carlos Maiolino, Dave Chinner, hch,
	linux-kernel@vger.kernel.org

On Wed, Apr 30, 2025 at 08:41:21AM +0000, Hans Holmberg wrote:
> From: Christoph Hellwig <hch@lst.de>
> 
> Call the provided free_func when xfs_mru_cache_insert fails, as that's
> what the callers need to do anyway.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
> ---
>  fs/xfs/xfs_filestream.c | 15 ++++-----------
>  fs/xfs/xfs_mru_cache.c  | 15 ++++++++++++---
>  2 files changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c
> index a961aa420c48..044918fbae06 100644
> --- a/fs/xfs/xfs_filestream.c
> +++ b/fs/xfs/xfs_filestream.c
> @@ -304,11 +304,9 @@ xfs_filestream_create_association(
>  	 * for us, so all we need to do here is take another active reference to
>  	 * the perag for the cached association.
>  	 *
> -	 * If we fail to store the association, we need to drop the fstrms
> -	 * counter as well as drop the perag reference we take here for the
> -	 * item. We do not need to return an error for this failure - as long as
> -	 * we return a referenced AG, the allocation can still go ahead just
> -	 * fine.
> +	 * If we fail to store the association, we do not need to return an
> +	 * error for this failure - as long as we return a referenced AG, the
> +	 * allocation can still go ahead just fine.
>  	 */
>  	item = kmalloc(sizeof(*item), GFP_KERNEL | __GFP_RETRY_MAYFAIL);
>  	if (!item)
> @@ -316,14 +314,9 @@ xfs_filestream_create_association(
>  
>  	atomic_inc(&pag_group(args->pag)->xg_active_ref);
>  	item->pag = args->pag;
> -	error = xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
> -	if (error)
> -		goto out_free_item;
> +	xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);

Hmm, don't you still need to check for -ENOMEM returns?  Or if truly
none of the callers care anymore, then can we get rid of the return
value for xfs_mru_cache_insert?

--D

>  	return 0;
>  
> -out_free_item:
> -	xfs_perag_rele(item->pag);
> -	kfree(item);
>  out_put_fstrms:
>  	atomic_dec(&args->pag->pagf_fstrms);
>  	return 0;
> diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c
> index d0f5b403bdbe..08443ceec329 100644
> --- a/fs/xfs/xfs_mru_cache.c
> +++ b/fs/xfs/xfs_mru_cache.c
> @@ -414,6 +414,8 @@ xfs_mru_cache_destroy(
>   * To insert an element, call xfs_mru_cache_insert() with the data store, the
>   * element's key and the client data pointer.  This function returns 0 on
>   * success or ENOMEM if memory for the data element couldn't be allocated.
> + *
> + * The passed in elem is freed through the per-cache free_func on failure.
>   */
>  int
>  xfs_mru_cache_insert(
> @@ -421,14 +423,15 @@ xfs_mru_cache_insert(
>  	unsigned long		key,
>  	struct xfs_mru_cache_elem *elem)
>  {
> -	int			error;
> +	int			error = -EINVAL;
>  
>  	ASSERT(mru && mru->lists);
>  	if (!mru || !mru->lists)
> -		return -EINVAL;
> +		goto out_free;
>  
> +	error = -ENOMEM;
>  	if (radix_tree_preload(GFP_KERNEL))
> -		return -ENOMEM;
> +		goto out_free;
>  
>  	INIT_LIST_HEAD(&elem->list_node);
>  	elem->key = key;
> @@ -440,6 +443,12 @@ xfs_mru_cache_insert(
>  		_xfs_mru_cache_list_insert(mru, elem);
>  	spin_unlock(&mru->lock);
>  
> +	if (error)
> +		goto out_free;
> +	return 0;
> +
> +out_free:
> +	mru->free_func(mru->data, elem);
>  	return error;
>  }
>  
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure
  2025-05-02 20:06   ` Darrick J. Wong
@ 2025-05-05  5:45     ` hch
  2025-05-05 15:07       ` Darrick J. Wong
  0 siblings, 1 reply; 9+ messages in thread
From: hch @ 2025-05-05  5:45 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Hans Holmberg, linux-xfs@vger.kernel.org, Carlos Maiolino,
	Dave Chinner, hch, linux-kernel@vger.kernel.org

On Fri, May 02, 2025 at 01:06:46PM -0700, Darrick J. Wong wrote:
> >  	atomic_inc(&pag_group(args->pag)->xg_active_ref);
> >  	item->pag = args->pag;
> > -	error = xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
> > -	if (error)
> > -		goto out_free_item;
> > +	xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
> 
> Hmm, don't you still need to check for -ENOMEM returns?  Or if truly
> none of the callers care anymore, then can we get rid of the return
> value for xfs_mru_cache_insert?

For both file streams and the zone association in the next patch, the
mru cache is just a hint, so we ignore all errors (see the return 0
in the error handling boilerplate in the existing code).  But hardcoding
that assumption into the core mru cache helpers seems a bit weird.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 2/2] xfs: add inode to zone caching for data placement
  2025-05-02 20:04   ` Darrick J. Wong
@ 2025-05-05  5:55     ` hch
  0 siblings, 0 replies; 9+ messages in thread
From: hch @ 2025-05-05  5:55 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: Hans Holmberg, linux-xfs@vger.kernel.org, Carlos Maiolino,
	Dave Chinner, hch, linux-kernel@vger.kernel.org

On Fri, May 02, 2025 at 01:04:15PM -0700, Darrick J. Wong wrote:
> It seems like a decent idea to try to land random writes to the same
> file in the same zone.  This helps us reduce seeking out of the zone on
> subsequent reads, right?

Yes.  Having as few zones as possible per file also means that GC works
better, as it can often consolidate extents.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping
  2025-04-30  8:41 [RFC PATCH 0/2] Add mru cache for inode to zone allocation mapping Hans Holmberg
  2025-04-30  8:41 ` [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure Hans Holmberg
  2025-04-30  8:41 ` [RFC PATCH 2/2] xfs: add inode to zone caching for data placement Hans Holmberg
@ 2025-05-05  5:57 ` hch
  2 siblings, 0 replies; 9+ messages in thread
From: hch @ 2025-05-05  5:57 UTC (permalink / raw)
  To: Hans Holmberg
  Cc: linux-xfs@vger.kernel.org, Carlos Maiolino, Dave Chinner,
	Darrick J . Wong, hch, linux-kernel@vger.kernel.org

On Wed, Apr 30, 2025 at 08:41:20AM +0000, Hans Holmberg wrote:
> Sending out as an RFC to get comments, specifically about the potential
> mru lock contention when doing the lookup during allocation.

I am a little worried about that.  The MRU cache as implemented right
now rotates the mru list on every lookup under an mru-cache-wide lock,
which is a bit of a performance nightmare.

I wonder if we can do some form of batched move between the mru buckets
that can reduce the locking and cacheline write impact.

But maybe it's worth giving this a try and working from performance
reports as needed.


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [RFC PATCH 1/2] xfs: free the item in xfs_mru_cache_insert on failure
  2025-05-05  5:45     ` hch
@ 2025-05-05 15:07       ` Darrick J. Wong
  0 siblings, 0 replies; 9+ messages in thread
From: Darrick J. Wong @ 2025-05-05 15:07 UTC (permalink / raw)
  To: hch
  Cc: Hans Holmberg, linux-xfs@vger.kernel.org, Carlos Maiolino,
	Dave Chinner, linux-kernel@vger.kernel.org

On Mon, May 05, 2025 at 07:45:49AM +0200, hch wrote:
> On Fri, May 02, 2025 at 01:06:46PM -0700, Darrick J. Wong wrote:
> > >  	atomic_inc(&pag_group(args->pag)->xg_active_ref);
> > >  	item->pag = args->pag;
> > > -	error = xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
> > > -	if (error)
> > > -		goto out_free_item;
> > > +	xfs_mru_cache_insert(mp->m_filestream, pino, &item->mru);
> > 
> > Hmm, don't you still need to check for -ENOMEM returns?  Or if truly
> > none of the callers care anymore, then can we get rid of the return
> > value for xfs_mru_cache_insert?
> 
> Both for file streams and the zone association in the next patch the
> mru cache is just a hint, so we ignore all errors (see the return 0
> in the error handling boilerplate in the existing code).  But hardcoding
> that assumption into the core mru cache helpers seems a bit weird.

Ok then.  The comment change in this patch is a reasonable explanation
for why the return value is/has always been ignored, so

Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>

--D

^ permalink raw reply	[flat|nested] 9+ messages in thread
