* [RFC PATCH 01/11] XArray: add cmpxchg order test
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 20:11 ` Matthew Wilcox
2023-10-28 21:15 ` [RFC PATCH 02/11] test_xarray: add tests for advanced multi-index use Daniel Gomez
` (10 subsequent siblings)
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
XArray multi-index entries do not keep track of the stored order once
the entry is marked as used (replaced with NULL). Add a test
to check that the order is actually lost.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
---
lib/test_xarray.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index e77d4856442c..6c22588963bc 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -423,6 +423,26 @@ static noinline void check_cmpxchg(struct xarray *xa)
XA_BUG_ON(xa, !xa_empty(xa));
}
+static noinline void check_cmpxchg_order(struct xarray *xa)
+{
+ void *FIVE = xa_mk_value(5);
+ unsigned int order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 15 : 1;
+ void *old;
+
+ XA_BUG_ON(xa, !xa_empty(xa));
+ XA_BUG_ON(xa, xa_store_index(xa, 5, GFP_KERNEL) != NULL);
+ XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY);
+ XA_BUG_ON(xa, xa_store_order(xa, 5, order, FIVE, GFP_KERNEL));
+ XA_BUG_ON(xa, xa_get_order(xa, 5) != order);
+ XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != order);
+ old = xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL);
+ XA_BUG_ON(xa, old != FIVE);
+ XA_BUG_ON(xa, xa_get_order(xa, 5) != 0);
+ XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != 0);
+ XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(old)) != 0);
+ XA_BUG_ON(xa, !xa_empty(xa));
+}
+
static noinline void check_reserve(struct xarray *xa)
{
void *entry;
@@ -1801,6 +1821,7 @@ static int xarray_checks(void)
check_xas_erase(&array);
check_insert(&array);
check_cmpxchg(&array);
+ check_cmpxchg_order(&array);
check_reserve(&array);
check_reserve(&xa0);
check_multi_store(&array);
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 01/11] XArray: add cmpxchg order test
2023-10-28 21:15 ` [RFC PATCH 01/11] XArray: add cmpxchg order test Daniel Gomez
@ 2023-10-29 20:11 ` Matthew Wilcox
2023-11-03 23:12 ` Daniel Gomez
0 siblings, 1 reply; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 20:11 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:35PM +0000, Daniel Gomez wrote:
> +static noinline void check_cmpxchg_order(struct xarray *xa)
> +{
> + void *FIVE = xa_mk_value(5);
> + unsigned int order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 15 : 1;
... have you tried this with CONFIG_XARRAY_MULTI deselected?
I suspect it will BUG() because orders greater than 0 are not allowed.
> + XA_BUG_ON(xa, !xa_empty(xa));
> + XA_BUG_ON(xa, xa_store_index(xa, 5, GFP_KERNEL) != NULL);
> + XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY);
> + XA_BUG_ON(xa, xa_store_order(xa, 5, order, FIVE, GFP_KERNEL));
> + XA_BUG_ON(xa, xa_get_order(xa, 5) != order);
> + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != order);
> + old = xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL);
> + XA_BUG_ON(xa, old != FIVE);
> + XA_BUG_ON(xa, xa_get_order(xa, 5) != 0);
> + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != 0);
> + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(old)) != 0);
> + XA_BUG_ON(xa, !xa_empty(xa));
I'm not sure this is a great test. It definitely does do what you claim
it will, but for example, it's possible that we might keep that
information for other orders. So maybe we should have another entry at
(1 << order) that keeps the node around and could theoretically keep
the order information around for the now-NULL entry?
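One possible shape for such a test, sketched here untested and assuming the existing lib/test_xarray.c helpers (xa_store_order(), xa_store_index(), XA_BUG_ON()): it skips when CONFIG_XARRAY_MULTI is off, which also sidesteps the BUG() concern above, and keeps a neighbouring entry at 1 << order alive across the cmpxchg so the node cannot simply be freed.

```c
static noinline void check_cmpxchg_order_node_kept(struct xarray *xa)
{
	void *FIVE = xa_mk_value(5);
	unsigned int order = 15;
	void *old;

	if (!IS_ENABLED(CONFIG_XARRAY_MULTI))
		return;		/* orders > 0 are not allowed here */

	XA_BUG_ON(xa, !xa_empty(xa));
	XA_BUG_ON(xa, xa_store_order(xa, 5, order, FIVE, GFP_KERNEL));
	/* A neighbouring entry keeps the node (and any stale order
	 * information) around after the multi-index entry is erased. */
	XA_BUG_ON(xa, xa_store_index(xa, 1UL << order, GFP_KERNEL) != NULL);

	old = xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL);
	XA_BUG_ON(xa, old != FIVE);

	/* Probe whether the now-NULL range still reports a non-zero order
	 * while the backing node is still present. */
	XA_BUG_ON(xa, xa_get_order(xa, 5) != 0);

	xa_erase(xa, 1UL << order);
	XA_BUG_ON(xa, !xa_empty(xa));
}
```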
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 01/11] XArray: add cmpxchg order test
2023-10-29 20:11 ` Matthew Wilcox
@ 2023-11-03 23:12 ` Daniel Gomez
0 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-11-03 23:12 UTC (permalink / raw)
To: Matthew Wilcox
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sun, Oct 29, 2023 at 08:11:32PM +0000, Matthew Wilcox wrote:
> On Sat, Oct 28, 2023 at 09:15:35PM +0000, Daniel Gomez wrote:
> > +static noinline void check_cmpxchg_order(struct xarray *xa)
> > +{
> > + void *FIVE = xa_mk_value(5);
> > + unsigned int order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 15 : 1;
>
> ... have you tried this with CONFIG_XARRAY_MULTI deselected?
> I suspect it will BUG() because orders greater than 0 are not allowed.
>
> > + XA_BUG_ON(xa, !xa_empty(xa));
> > + XA_BUG_ON(xa, xa_store_index(xa, 5, GFP_KERNEL) != NULL);
> > + XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY);
> > + XA_BUG_ON(xa, xa_store_order(xa, 5, order, FIVE, GFP_KERNEL));
> > + XA_BUG_ON(xa, xa_get_order(xa, 5) != order);
> > + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != order);
> > + old = xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL);
> > + XA_BUG_ON(xa, old != FIVE);
> > + XA_BUG_ON(xa, xa_get_order(xa, 5) != 0);
> > + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(FIVE)) != 0);
> > + XA_BUG_ON(xa, xa_get_order(xa, xa_to_value(old)) != 0);
> > + XA_BUG_ON(xa, !xa_empty(xa));
>
> I'm not sure this is a great test. It definitely does do what you claim
> it will, but for example, it's possible that we might keep that
> information for other orders. So maybe we should have another entry at
> (1 << order) that keeps the node around and could theoretically keep
> the order information around for the now-NULL entry?
Thanks Matthew for the review. I'm sending a separate patch with the
fixes and improvements on the XArray cmpxchg test.
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 02/11] test_xarray: add tests for advanced multi-index use
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 01/11] XArray: add cmpxchg order test Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 03/11] shmem: drop BLOCKS_PER_PAGE macro Daniel Gomez
` (9 subsequent siblings)
11 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
From: Luis Chamberlain <mcgrof@kernel.org>
The multi-index selftests are great, but they don't replicate exactly
how we deal with the page cache, which makes them a bit hard to follow
since the page cache uses the advanced API.
Add tests which use the advanced API, mimicking what we do in the
page cache, and while at it extend the example to do what is needed for
min order support.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: Daniel Gomez <da.gomez@samsung.com>
---
lib/test_xarray.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 134 insertions(+)
diff --git a/lib/test_xarray.c b/lib/test_xarray.c
index 6c22588963bc..22a687e33dc5 100644
--- a/lib/test_xarray.c
+++ b/lib/test_xarray.c
@@ -694,6 +694,139 @@ static noinline void check_multi_store(struct xarray *xa)
#endif
}
+#ifdef CONFIG_XARRAY_MULTI
+static noinline void check_xa_multi_store_adv_add(struct xarray *xa,
+ unsigned long index,
+ unsigned int order,
+ void *p)
+{
+ XA_STATE(xas, xa, index);
+
+ xas_set_order(&xas, index, order);
+
+ do {
+ xas_lock_irq(&xas);
+
+ xas_store(&xas, p);
+ XA_BUG_ON(xa, xas_error(&xas));
+ XA_BUG_ON(xa, xa_load(xa, index) != p);
+
+ xas_unlock_irq(&xas);
+ } while (xas_nomem(&xas, GFP_KERNEL));
+
+ XA_BUG_ON(xa, xas_error(&xas));
+}
+
+static noinline void check_xa_multi_store_adv_delete(struct xarray *xa,
+ unsigned long index,
+ unsigned int order)
+{
+ unsigned int nrpages = 1UL << order;
+ unsigned long base = round_down(index, nrpages);
+ XA_STATE(xas, xa, base);
+
+ xas_set_order(&xas, base, order);
+ xas_store(&xas, NULL);
+ xas_init_marks(&xas);
+}
+
+static unsigned long some_val = 0xdeadbeef;
+static unsigned long some_val_2 = 0xdeaddead;
+
+/* mimics the page cache */
+static noinline void check_xa_multi_store_adv(struct xarray *xa,
+ unsigned long pos,
+ unsigned int order)
+{
+ unsigned int nrpages = 1UL << order;
+ unsigned long index, base, next_index, next_next_index;
+ unsigned int i;
+
+ index = pos >> PAGE_SHIFT;
+ base = round_down(index, nrpages);
+ next_index = round_down(base + nrpages, nrpages);
+ next_next_index = round_down(next_index + nrpages, nrpages);
+
+ check_xa_multi_store_adv_add(xa, base, order, &some_val);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, base + i) != &some_val);
+
+ XA_BUG_ON(xa, xa_load(xa, next_index) != NULL);
+
+ /* Use order 0 for the next item */
+ check_xa_multi_store_adv_add(xa, next_index, 0, &some_val_2);
+ XA_BUG_ON(xa, xa_load(xa, next_index) != &some_val_2);
+
+ /* Remove the next item */
+ check_xa_multi_store_adv_delete(xa, next_index, 0);
+
+ /* Now use order for a new pointer */
+ check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, next_index + i) != &some_val_2);
+
+ check_xa_multi_store_adv_delete(xa, next_index, order);
+ check_xa_multi_store_adv_delete(xa, base, order);
+ XA_BUG_ON(xa, !xa_empty(xa));
+
+ /* starting fresh again */
+
+ /* let's test some holes now */
+
+ /* hole at base and next_next */
+ check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, base + i) != NULL);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, next_index + i) != &some_val_2);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, next_next_index + i) != NULL);
+
+ check_xa_multi_store_adv_delete(xa, next_index, order);
+ XA_BUG_ON(xa, !xa_empty(xa));
+
+ /* hole at base and next */
+
+ check_xa_multi_store_adv_add(xa, next_next_index, order, &some_val_2);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, base + i) != NULL);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, next_index + i) != NULL);
+
+ for (i = 0; i < nrpages; i++)
+ XA_BUG_ON(xa, xa_load(xa, next_next_index + i) != &some_val_2);
+
+ check_xa_multi_store_adv_delete(xa, next_next_index, order);
+ XA_BUG_ON(xa, !xa_empty(xa));
+}
+#endif
+
+static noinline void check_multi_store_advanced(struct xarray *xa)
+{
+#ifdef CONFIG_XARRAY_MULTI
+ unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1;
+ unsigned long end = ULONG_MAX/2;
+ unsigned long pos, i;
+
+ /*
+ * About 117 million tests below.
+ */
+ for (pos = 7; pos < end; pos = (pos * pos) + 564) {
+ for (i = 0; i < max_order; i++) {
+ check_xa_multi_store_adv(xa, pos, i);
+ check_xa_multi_store_adv(xa, pos + 157, i);
+ }
+ }
+#endif
+}
+
static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base)
{
int i;
@@ -1825,6 +1958,7 @@ static int xarray_checks(void)
check_reserve(&array);
check_reserve(&xa0);
check_multi_store(&array);
+ check_multi_store_advanced(&array);
check_get_order(&array);
check_xa_alloc();
check_find(&array);
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [RFC PATCH 03/11] shmem: drop BLOCKS_PER_PAGE macro
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 01/11] XArray: add cmpxchg order test Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 02/11] test_xarray: add tests for advanced multi-index use Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 04/11] shmem: return number of pages being freed in shmem_free_swap Daniel Gomez
` (8 subsequent siblings)
11 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Commit [1] replaced all uses of BLOCKS_PER_PAGE in favor of the
generic PAGE_SECTORS but its definition was not removed. Drop the
now unused macro.
[1] e09764cff44b5 ("shmem: quota support").
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/shmem.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0d1ce70bce38..a2ac425b97ea 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -84,7 +84,6 @@ static struct vfsmount *shm_mnt __ro_after_init;
#include "internal.h"
-#define BLOCKS_PER_PAGE (PAGE_SIZE/512)
#define VM_ACCT(size) (PAGE_ALIGN(size) >> PAGE_SHIFT)
/* Pretend that each entry is of this size in directory's i_size */
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [RFC PATCH 04/11] shmem: return number of pages being freed in shmem_free_swap
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (2 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 03/11] shmem: drop BLOCKS_PER_PAGE macro Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 05/11] shmem: account for large order folios Daniel Gomez
` (7 subsequent siblings)
11 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Both shmem_free_swap() callers expect the number of pages being freed.
With large folios, the return value needs to convey more than the
current 0 (one page freed) and -ENOENT (no pages freed). In preparation
for large folios adoption, make the shmem_free_swap() routine return the
number of pages being freed, so that returning 0 in this context means
no pages were freed.
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/shmem.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index a2ac425b97ea..9f4c9b9286e5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -827,18 +827,22 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
}
/*
- * Remove swap entry from page cache, free the swap and its page cache.
+ * Remove swap entry from page cache, free the swap and its page cache. Returns
+ * the number of pages being freed. 0 means entry not found in XArray (0 pages
+ * being freed).
*/
-static int shmem_free_swap(struct address_space *mapping,
+static long shmem_free_swap(struct address_space *mapping,
pgoff_t index, void *radswap)
{
void *old;
+ long swaps_freed = 1UL << xa_get_order(&mapping->i_pages, index);
old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
if (old != radswap)
- return -ENOENT;
+ return 0;
free_swap_and_cache(radix_to_swp_entry(radswap));
- return 0;
+
+ return swaps_freed;
}
/*
@@ -990,7 +994,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (xa_is_value(folio)) {
if (unfalloc)
continue;
- nr_swaps_freed += !shmem_free_swap(mapping,
+ nr_swaps_freed += shmem_free_swap(mapping,
indices[i], folio);
continue;
}
@@ -1057,14 +1061,17 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
folio = fbatch.folios[i];
if (xa_is_value(folio)) {
+ long swaps_freed;
+
if (unfalloc)
continue;
- if (shmem_free_swap(mapping, indices[i], folio)) {
+ swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+ if (!swaps_freed) {
/* Swap was replaced by page: retry */
index = indices[i];
break;
}
- nr_swaps_freed++;
+ nr_swaps_freed += swaps_freed;
continue;
}
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [RFC PATCH 05/11] shmem: account for large order folios
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (3 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 04/11] shmem: return number of pages being freed in shmem_free_swap Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 20:40 ` Matthew Wilcox
2023-10-28 21:15 ` [RFC PATCH 06/11] shmem: trace shmem_add_to_page_cache folio order Daniel Gomez
` (6 subsequent siblings)
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
From: Luis Chamberlain <mcgrof@kernel.org>
shmem uses the shmem_inode_info fields alloced and swapped to account
for allocated pages and swapped pages. In preparation for large
order folios, adjust the accounting to use folio_nr_pages().
This should produce no functional changes yet, as larger order
folios are not yet used or supported in shmem.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 9f4c9b9286e5..ab31d2880e5d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -856,16 +856,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
pgoff_t start, pgoff_t end)
{
XA_STATE(xas, &mapping->i_pages, start);
- struct page *page;
+ struct folio *folio;
unsigned long swapped = 0;
unsigned long max = end - 1;
rcu_read_lock();
- xas_for_each(&xas, page, max) {
- if (xas_retry(&xas, page))
+ xas_for_each(&xas, folio, max) {
+ if (xas_retry(&xas, folio))
continue;
- if (xa_is_value(page))
- swapped++;
+ if (xa_is_value(folio))
+ swapped += folio_nr_pages(folio);
if (xas.xa_index == max)
break;
if (need_resched()) {
@@ -1514,7 +1514,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
if (add_to_swap_cache(folio, swap,
__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
NULL) == 0) {
- shmem_recalc_inode(inode, 0, 1);
+ shmem_recalc_inode(inode, 0, folio_nr_pages(folio));
swap_shmem_alloc(swap);
shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
@@ -1828,6 +1828,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
struct address_space *mapping = inode->i_mapping;
swp_entry_t swapin_error;
void *old;
+ long num_swap_pages;
swapin_error = make_poisoned_swp_entry();
old = xa_cmpxchg_irq(&mapping->i_pages, index,
@@ -1837,13 +1838,14 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
return;
folio_wait_writeback(folio);
+ num_swap_pages = folio_nr_pages(folio);
delete_from_swap_cache(folio);
/*
* Don't treat swapin error folio as alloced. Otherwise inode->i_blocks
* won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
* in shmem_evict_inode().
*/
- shmem_recalc_inode(inode, -1, -1);
+ shmem_recalc_inode(inode, -num_swap_pages, -num_swap_pages);
swap_free(swap);
}
@@ -1928,7 +1930,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
if (error)
goto failed;
- shmem_recalc_inode(inode, 0, -1);
+ shmem_recalc_inode(inode, 0, -folio_nr_pages(folio));
if (sgp == SGP_WRITE)
folio_mark_accessed(folio);
@@ -2684,7 +2686,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
if (ret)
goto out_delete_from_cache;
- shmem_recalc_inode(inode, 1, 0);
+ shmem_recalc_inode(inode, folio_nr_pages(folio), 0);
folio_unlock(folio);
return 0;
out_delete_from_cache:
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 05/11] shmem: account for large order folios
2023-10-28 21:15 ` [RFC PATCH 05/11] shmem: account for large order folios Daniel Gomez
@ 2023-10-29 20:40 ` Matthew Wilcox
0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 20:40 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:42PM +0000, Daniel Gomez wrote:
> @@ -856,16 +856,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
> pgoff_t start, pgoff_t end)
> {
> XA_STATE(xas, &mapping->i_pages, start);
> - struct page *page;
> + struct folio *folio;
> unsigned long swapped = 0;
> unsigned long max = end - 1;
>
> rcu_read_lock();
> - xas_for_each(&xas, page, max) {
> - if (xas_retry(&xas, page))
> + xas_for_each(&xas, folio, max) {
> + if (xas_retry(&xas, folio))
> continue;
> - if (xa_is_value(page))
> - swapped++;
> + if (xa_is_value(folio))
> + swapped += folio_nr_pages(folio);
... you can't call folio_nr_pages() if xa_is_value().
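A possible fix-up, sketched here and untested, would mirror what shmem_free_swap() does elsewhere in this series and read the order back from the XArray instead of dereferencing the value (swap) entry as a folio:

```c
	rcu_read_lock();
	xas_for_each(&xas, folio, max) {
		if (xas_retry(&xas, folio))
			continue;
		if (xa_is_value(folio))
			/* Swap entry: the order lives in the tree, there is
			 * no folio to ask. */
			swapped += 1UL << xa_get_order(xas.xa, xas.xa_index);
		if (xas.xa_index == max)
			break;
		if (need_resched()) {
			xas_pause(&xas);
			cond_resched_rcu();
		}
	}
	rcu_read_unlock();
```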
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 06/11] shmem: trace shmem_add_to_page_cache folio order
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (4 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 05/11] shmem: account for large order folios Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 23:14 ` Matthew Wilcox
2023-10-28 21:15 ` [RFC PATCH 07/11] shmem: remove huge arg from shmem_alloc_and_add_folio() Daniel Gomez
` (5 subsequent siblings)
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Add a tracepoint to be able to trace and account for the order of the
folio. Based on include/trace/events/filemap.h.
Update the MAINTAINERS file list for SHMEM.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
MAINTAINERS | 1 +
include/trace/events/shmem.h | 52 ++++++++++++++++++++++++++++++++++++
mm/shmem.c | 4 +++
3 files changed, 57 insertions(+)
create mode 100644 include/trace/events/shmem.h
diff --git a/MAINTAINERS b/MAINTAINERS
index bdc4638b2df5..befa63e7cb28 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -21923,6 +21923,7 @@ M: Hugh Dickins <hughd@google.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/shmem_fs.h
+F: include/trace/events/shmem.h
F: mm/shmem.c
TOMOYO SECURITY MODULE
diff --git a/include/trace/events/shmem.h b/include/trace/events/shmem.h
new file mode 100644
index 000000000000..223f78f11457
--- /dev/null
+++ b/include/trace/events/shmem.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM shmem
+
+#if !defined(_TRACE_SHMEM_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_SHMEM_H
+
+#include <linux/types.h>
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(mm_shmem_op_page_cache,
+
+ TP_PROTO(struct folio *folio),
+
+ TP_ARGS(folio),
+
+ TP_STRUCT__entry(
+ __field(unsigned long, pfn)
+ __field(unsigned long, i_ino)
+ __field(unsigned long, index)
+ __field(dev_t, s_dev)
+ __field(unsigned char, order)
+ ),
+
+ TP_fast_assign(
+ __entry->pfn = folio_pfn(folio);
+ __entry->i_ino = folio->mapping->host->i_ino;
+ __entry->index = folio->index;
+ if (folio->mapping->host->i_sb)
+ __entry->s_dev = folio->mapping->host->i_sb->s_dev;
+ else
+ __entry->s_dev = folio->mapping->host->i_rdev;
+ __entry->order = folio_order(folio);
+ ),
+
+ TP_printk("dev %d:%d ino %lx pfn=0x%lx ofs=%lu order=%u",
+ MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
+ __entry->i_ino,
+ __entry->pfn,
+ __entry->index << PAGE_SHIFT,
+ __entry->order)
+);
+
+DEFINE_EVENT(mm_shmem_op_page_cache, mm_shmem_add_to_page_cache,
+ TP_PROTO(struct folio *folio),
+ TP_ARGS(folio)
+ );
+
+#endif /* _TRACE_SHMEM_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
diff --git a/mm/shmem.c b/mm/shmem.c
index ab31d2880e5d..e2893cf2287f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -84,6 +84,9 @@ static struct vfsmount *shm_mnt __ro_after_init;
#include "internal.h"
+#define CREATE_TRACE_POINTS
+#include <trace/events/shmem.h>
+
#define VM_ACCT(size) (PAGE_ALIGN(size) >> PAGE_SHIFT)
/* Pretend that each entry is of this size in directory's i_size */
@@ -1726,6 +1729,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
}
}
+ trace_mm_shmem_add_to_page_cache(folio);
shmem_recalc_inode(inode, pages, 0);
folio_add_lru(folio);
return folio;
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 06/11] shmem: trace shmem_add_to_page_cache folio order
2023-10-28 21:15 ` [RFC PATCH 06/11] shmem: trace shmem_add_to_page_cache folio order Daniel Gomez
@ 2023-10-29 23:14 ` Matthew Wilcox
0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 23:14 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:44PM +0000, Daniel Gomez wrote:
> To be able to trace and account for order of the folio.
>
> Based on include/trace/events/filemap.h.
Why is this better than using trace_mm_filemap_add_to_page_cache()?
It's basically the same thing.
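For reference, the alternative being pointed at would look roughly like the sketch below (not from the posted series; whether the existing filemap event already records everything shmem wants here, such as the folio order, is exactly the open question):

```c
/*
 * Sketch for mm/shmem.c: reuse the existing page-cache tracepoint
 * instead of defining a shmem-specific event class. Only the call site
 * changes; no new trace header, no MAINTAINERS update.
 */
#include <trace/events/filemap.h>

static inline void shmem_trace_add_to_page_cache(struct folio *folio)
{
	trace_mm_filemap_add_to_page_cache(folio);
}
```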
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 07/11] shmem: remove huge arg from shmem_alloc_and_add_folio()
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (5 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 06/11] shmem: trace shmem_add_to_page_cache folio order Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 23:17 ` Matthew Wilcox
2023-10-28 21:15 ` [RFC PATCH 08/11] shmem: add file length arg in shmem_get_folio() path Daniel Gomez
` (4 subsequent siblings)
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
The huge flag is already part of the memory allocation flags (gfp_t).
Make use of the VM_HUGEPAGE bit set by vma_thp_gfp_mask() to know
whether the allocation must be a huge page.
Drop the CONFIG_TRANSPARENT_HUGEPAGE check in shmem_alloc_and_add_folio()
as VM_HUGEPAGE won't be set unless the THP config is enabled.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index e2893cf2287f..9d68211373c4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1644,7 +1644,7 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct inode *inode, pgoff_t index,
- struct mm_struct *fault_mm, bool huge)
+ struct mm_struct *fault_mm)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1652,10 +1652,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
long pages;
int error;
- if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
- huge = false;
-
- if (huge) {
+ if (gfp & VM_HUGEPAGE) {
pages = HPAGE_PMD_NR;
index = round_down(index, HPAGE_PMD_NR);
@@ -1690,7 +1687,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
if (xa_find(&mapping->i_pages, &index,
index + pages - 1, XA_PRESENT)) {
error = -EEXIST;
- } else if (huge) {
+ } else if (gfp & VM_HUGEPAGE) {
count_vm_event(THP_FILE_FALLBACK);
count_vm_event(THP_FILE_FALLBACK_CHARGE);
}
@@ -2054,7 +2051,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
huge_gfp = vma_thp_gfp_mask(vma);
huge_gfp = limit_gfp_mask(huge_gfp, gfp);
folio = shmem_alloc_and_add_folio(huge_gfp,
- inode, index, fault_mm, true);
+ inode, index, fault_mm);
if (!IS_ERR(folio)) {
count_vm_event(THP_FILE_ALLOC);
goto alloced;
@@ -2063,7 +2060,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
goto repeat;
}
- folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
+ folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm);
if (IS_ERR(folio)) {
error = PTR_ERR(folio);
if (error == -EEXIST)
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 07/11] shmem: remove huge arg from shmem_alloc_and_add_folio()
2023-10-28 21:15 ` [RFC PATCH 07/11] shmem: remove huge arg from shmem_alloc_and_add_folio() Daniel Gomez
@ 2023-10-29 23:17 ` Matthew Wilcox
0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 23:17 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:45PM +0000, Daniel Gomez wrote:
> The huge flag is already part of the memory allocation flag (gfp_t).
> Make use of the VM_HUGEPAGE bit set by vma_thp_gfp_mask() to know if
> the allocation must be a huge page.
... what?
> + if (gfp & VM_HUGEPAGE) {
Does sparse not complain about this? VM_HUGEPAGE is never part of
the GFP flags and there's supposed to be annotations that make the
various checkers warn.
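To make the concern concrete, a rough illustration (not from the patch, and the helper name is made up): gfp_t is a sparse __bitwise type whose bits are defined through (__force gfp_t) casts, while VM_HUGEPAGE is a plain vm_flags constant, so masking one with the other drops the annotation and relies on the two flag spaces never colliding.

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustration only: hypothetical helper, not part of the series. */
static bool alloc_wants_huge(gfp_t gfp)
{
	/* sparse warns along the lines of "restricted gfp_t degrades to
	 * integer" here: the __bitwise annotation is lost as soon as gfp
	 * is masked with a plain vm_flags constant, and nothing stops a
	 * real GFP bit from one day aliasing VM_HUGEPAGE's value. */
	return gfp & VM_HUGEPAGE;
}
```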
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 08/11] shmem: add file length arg in shmem_get_folio() path
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (6 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 07/11] shmem: remove huge arg from shmem_alloc_and_add_folio() Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-28 21:15 ` [RFC PATCH 09/11] shmem: add order arg to shmem_alloc_folio() Daniel Gomez
` (3 subsequent siblings)
11 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
In preparation for large folios in the write path, add a file length
argument to the shmem_get_folio() path so that the folio order can be
calculated based on the file size. Use order-0 (PAGE_SIZE) for non-write
paths such as read, page cache read, and vm fault.
This enables high order folios in the write and fallocate paths once the
folio order is calculated based on the length.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
include/linux/shmem_fs.h | 2 +-
mm/khugepaged.c | 3 ++-
mm/shmem.c | 33 ++++++++++++++++++---------------
mm/userfaultfd.c | 2 +-
4 files changed, 22 insertions(+), 18 deletions(-)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 2caa6b86106a..7138ea980884 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -137,7 +137,7 @@ enum sgp_type {
};
int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
- enum sgp_type sgp);
+ enum sgp_type sgp, size_t len);
struct folio *shmem_read_folio_gfp(struct address_space *mapping,
pgoff_t index, gfp_t gfp);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 064654717843..fcde8223b507 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1855,7 +1855,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
xas_unlock_irq(&xas);
/* swap in or instantiate fallocated page */
if (shmem_get_folio(mapping->host, index,
- &folio, SGP_NOALLOC)) {
+ &folio, SGP_NOALLOC,
+ PAGE_SIZE)) {
result = SCAN_FAIL;
goto xa_unlocked;
}
diff --git a/mm/shmem.c b/mm/shmem.c
index 9d68211373c4..d8dc2ceaba18 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -958,7 +958,7 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
* (although in some cases this is just a waste of time).
*/
folio = NULL;
- shmem_get_folio(inode, index, &folio, SGP_READ);
+ shmem_get_folio(inode, index, &folio, SGP_READ, PAGE_SIZE);
return folio;
}
@@ -1644,7 +1644,7 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct inode *inode, pgoff_t index,
- struct mm_struct *fault_mm)
+ struct mm_struct *fault_mm, size_t len)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1969,7 +1969,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
*/
static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
- struct vm_fault *vmf, vm_fault_t *fault_type)
+ struct vm_fault *vmf, vm_fault_t *fault_type, size_t len)
{
struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
struct mm_struct *fault_mm;
@@ -2051,7 +2051,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
huge_gfp = vma_thp_gfp_mask(vma);
huge_gfp = limit_gfp_mask(huge_gfp, gfp);
folio = shmem_alloc_and_add_folio(huge_gfp,
- inode, index, fault_mm);
+ inode, index, fault_mm, len);
if (!IS_ERR(folio)) {
count_vm_event(THP_FILE_ALLOC);
goto alloced;
@@ -2060,7 +2060,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
goto repeat;
}
- folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm);
+ folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, len);
if (IS_ERR(folio)) {
error = PTR_ERR(folio);
if (error == -EEXIST)
@@ -2140,10 +2140,10 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
}
int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
- enum sgp_type sgp)
+ enum sgp_type sgp, size_t len)
{
return shmem_get_folio_gfp(inode, index, foliop, sgp,
- mapping_gfp_mask(inode->i_mapping), NULL, NULL);
+ mapping_gfp_mask(inode->i_mapping), NULL, NULL, len);
}
/*
@@ -2237,7 +2237,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
WARN_ON_ONCE(vmf->page != NULL);
err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
- gfp, vmf, &ret);
+ gfp, vmf, &ret, PAGE_SIZE);
if (err)
return vmf_error(err);
if (folio) {
@@ -2716,6 +2716,9 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
struct folio *folio;
int ret = 0;
+ if (!mapping_large_folio_support(mapping))
+ len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
+
/* i_rwsem is held by caller */
if (unlikely(info->seals & (F_SEAL_GROW |
F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))) {
@@ -2725,7 +2728,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
return -EPERM;
}
- ret = shmem_get_folio(inode, index, &folio, SGP_WRITE);
+ ret = shmem_get_folio(inode, index, &folio, SGP_WRITE, len);
if (ret)
return ret;
@@ -2796,7 +2799,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
break;
}
- error = shmem_get_folio(inode, index, &folio, SGP_READ);
+ error = shmem_get_folio(inode, index, &folio, SGP_READ, PAGE_SIZE);
if (error) {
if (error == -EINVAL)
error = 0;
@@ -2973,7 +2976,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
break;
error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio,
- SGP_READ);
+ SGP_READ, PAGE_SIZE);
if (error) {
if (error == -EINVAL)
error = 0;
@@ -3160,7 +3163,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
error = -ENOMEM;
else
error = shmem_get_folio(inode, index, &folio,
- SGP_FALLOC);
+ SGP_FALLOC, (end - index) << PAGE_SHIFT);
if (error) {
info->fallocend = undo_fallocend;
/* Remove the !uptodate folios we added */
@@ -3511,7 +3514,7 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
inode->i_op = &shmem_short_symlink_operations;
} else {
inode_nohighmem(inode);
- error = shmem_get_folio(inode, 0, &folio, SGP_WRITE);
+ error = shmem_get_folio(inode, 0, &folio, SGP_WRITE, PAGE_SIZE);
if (error)
goto out_remove_offset;
inode->i_mapping->a_ops = &shmem_aops;
@@ -3558,7 +3561,7 @@ static const char *shmem_get_link(struct dentry *dentry, struct inode *inode,
return ERR_PTR(-ECHILD);
}
} else {
- error = shmem_get_folio(inode, 0, &folio, SGP_READ);
+ error = shmem_get_folio(inode, 0, &folio, SGP_READ, PAGE_SIZE);
if (error)
return ERR_PTR(error);
if (!folio)
@@ -4923,7 +4926,7 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
BUG_ON(!shmem_mapping(mapping));
error = shmem_get_folio_gfp(inode, index, &folio, SGP_CACHE,
- gfp, NULL, NULL);
+ gfp, NULL, NULL, PAGE_SIZE);
if (error)
return ERR_PTR(error);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 96d9eae5c7cc..aab8679b322a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -256,7 +256,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
struct page *page;
int ret;
- ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
+ ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC, PAGE_SIZE);
/* Our caller expects us to return -EFAULT if we failed to find folio */
if (ret == -ENOENT)
ret = -EFAULT;
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [RFC PATCH 09/11] shmem: add order arg to shmem_alloc_folio()
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (7 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 08/11] shmem: add file length arg in shmem_get_folio() path Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-31 7:04 ` Hannes Reinecke
2023-10-28 21:15 ` [RFC PATCH 10/11] shmem: add large folio support to the write path Daniel Gomez
` (2 subsequent siblings)
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Add a folio order argument to shmem_alloc_folio() and merge
shmem_alloc_hugefolio() into it. The return path now uses the new
page_rmappable_folio(), which supports both order-0 and high order
folios.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 33 ++++++++++-----------------------
1 file changed, 10 insertions(+), 23 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index d8dc2ceaba18..fc7605da4316 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1614,40 +1614,27 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
return result;
}
-static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
- struct shmem_inode_info *info, pgoff_t index)
+static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order)
{
struct mempolicy *mpol;
pgoff_t ilx;
struct page *page;
- mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
- page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
+ mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
+ page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
mpol_cond_put(mpol);
return page_rmappable_folio(page);
}
-static struct folio *shmem_alloc_folio(gfp_t gfp,
- struct shmem_inode_info *info, pgoff_t index)
-{
- struct mempolicy *mpol;
- pgoff_t ilx;
- struct page *page;
-
- mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
- page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
- mpol_cond_put(mpol);
-
- return (struct folio *)page;
-}
-
static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct inode *inode, pgoff_t index,
struct mm_struct *fault_mm, size_t len)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
+ unsigned int order = 0;
struct folio *folio;
long pages;
int error;
@@ -1668,12 +1655,12 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
index + HPAGE_PMD_NR - 1, XA_PRESENT))
return ERR_PTR(-E2BIG);
- folio = shmem_alloc_hugefolio(gfp, info, index);
+ folio = shmem_alloc_folio(gfp, info, index, HPAGE_PMD_ORDER);
if (!folio)
count_vm_event(THP_FILE_FALLBACK);
} else {
- pages = 1;
- folio = shmem_alloc_folio(gfp, info, index);
+ pages = 1UL << order;
+ folio = shmem_alloc_folio(gfp, info, index, order);
}
if (!folio)
return ERR_PTR(-ENOMEM);
@@ -1774,7 +1761,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
*/
gfp &= ~GFP_CONSTRAINT_MASK;
VM_BUG_ON_FOLIO(folio_test_large(old), old);
- new = shmem_alloc_folio(gfp, info, index);
+ new = shmem_alloc_folio(gfp, info, index, 0);
if (!new)
return -ENOMEM;
@@ -2618,7 +2605,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
if (!*foliop) {
ret = -ENOMEM;
- folio = shmem_alloc_folio(gfp, info, pgoff);
+ folio = shmem_alloc_folio(gfp, info, pgoff, 0);
if (!folio)
goto out_unacct_blocks;
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 09/11] shmem: add order arg to shmem_alloc_folio()
2023-10-28 21:15 ` [RFC PATCH 09/11] shmem: add order arg to shmem_alloc_folio() Daniel Gomez
@ 2023-10-31 7:04 ` Hannes Reinecke
0 siblings, 0 replies; 30+ messages in thread
From: Hannes Reinecke @ 2023-10-31 7:04 UTC (permalink / raw)
To: Daniel Gomez, minchan@kernel.org, senozhatsky@chromium.org,
axboe@kernel.dk, djwong@kernel.org, willy@infradead.org,
hughd@google.com, akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav
On 10/28/23 23:15, Daniel Gomez wrote:
> Add folio order argument to the shmem_alloc_folio() and merge it with
> the shmem_alloc_folio_huge(). Return will make use of the new
> page_rmappable_folio() where order-0 and high order folios are
> both supported.
>
> Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
> ---
> mm/shmem.c | 33 ++++++++++-----------------------
> 1 file changed, 10 insertions(+), 23 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d8dc2ceaba18..fc7605da4316 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1614,40 +1614,27 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
> return result;
> }
>
> -static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
> - struct shmem_inode_info *info, pgoff_t index)
> +static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
> + pgoff_t index, unsigned int order)
> {
> struct mempolicy *mpol;
> pgoff_t ilx;
> struct page *page;
>
> - mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
> - page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
> + mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
> + page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
> mpol_cond_put(mpol);
>
> return page_rmappable_folio(page);
> }
>
> -static struct folio *shmem_alloc_folio(gfp_t gfp,
> - struct shmem_inode_info *info, pgoff_t index)
> -{
> - struct mempolicy *mpol;
> - pgoff_t ilx;
> - struct page *page;
> -
> - mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> - page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
> - mpol_cond_put(mpol);
> -
> - return (struct folio *)page;
> -}
> -
> static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
> struct inode *inode, pgoff_t index,
> struct mm_struct *fault_mm, size_t len)
> {
> struct address_space *mapping = inode->i_mapping;
> struct shmem_inode_info *info = SHMEM_I(inode);
> + unsigned int order = 0;
> struct folio *folio;
> long pages;
> int error;
> @@ -1668,12 +1655,12 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
> index + HPAGE_PMD_NR - 1, XA_PRESENT))
> return ERR_PTR(-E2BIG);
>
> - folio = shmem_alloc_hugefolio(gfp, info, index);
> + folio = shmem_alloc_folio(gfp, info, index, HPAGE_PMD_ORDER);
> if (!folio)
> count_vm_event(THP_FILE_FALLBACK);
> } else {
> - pages = 1;
> - folio = shmem_alloc_folio(gfp, info, index);
> + pages = 1UL << order;
> + folio = shmem_alloc_folio(gfp, info, index, order);
> }
> if (!folio)
> return ERR_PTR(-ENOMEM);
> @@ -1774,7 +1761,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
> */
> gfp &= ~GFP_CONSTRAINT_MASK;
> VM_BUG_ON_FOLIO(folio_test_large(old), old);
> - new = shmem_alloc_folio(gfp, info, index);
> + new = shmem_alloc_folio(gfp, info, index, 0);
Shouldn't you use folio_order(old) here?
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
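A sketch of the change being suggested above (untested): pass the order of the folio being replaced instead of hard-coding 0. Note that with the VM_BUG_ON_FOLIO(folio_test_large(old), old) just above, both spell order 0 today; folio_order(old) simply keeps working if shmem_replace_folio() ever sees a large folio.

```c
	gfp &= ~GFP_CONSTRAINT_MASK;
	VM_BUG_ON_FOLIO(folio_test_large(old), old);
	new = shmem_alloc_folio(gfp, info, index, folio_order(old));
	if (!new)
		return -ENOMEM;
```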
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 10/11] shmem: add large folio support to the write path
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (8 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 09/11] shmem: add order arg to shmem_alloc_folio() Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 23:32 ` Matthew Wilcox
2023-10-28 21:15 ` [RFC PATCH 11/11] shmem: add per-block uptodate tracking Daniel Gomez
2023-10-29 20:43 ` [RFC PATCH 00/11] shmem: high order folios support in write path Matthew Wilcox
11 siblings, 1 reply; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Current work in progress: large folios in the fallocate path make
fstests generic/285 and generic/436 regress.
Add large folio support to the shmem write path, matching the same high
order preference mechanism used by the iomap buffered IO path in
__filemap_get_folio().
Add shmem_mapping_size_order() to get a hint for the order of the folio
based on the file size, taking the mapping requirements into account.
Swap does not support high order folios for now, so use order 0 when
swap is enabled.
Add the __GFP_COMP flag for high order folios except when huge is
enabled. This fixes a memory leak when allocating high order folios.
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 48 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index fc7605da4316..eb314927be78 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1621,6 +1621,9 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
pgoff_t ilx;
struct page *page;
+ if ((order != 0) && !(gfp & VM_HUGEPAGE))
+ gfp |= __GFP_COMP;
+
mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
mpol_cond_put(mpol);
@@ -1628,17 +1631,56 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
return page_rmappable_folio(page);
}
+/**
+ * shmem_mapping_size_order - Get maximum folio order for the given file size.
+ * @mapping: Target address_space.
+ * @index: The page index.
+ * @size: The suggested size of the folio to create.
+ *
+ * This returns a high order for folios (when supported) based on the file size
+ * which the mapping currently allows at the given index. The index is relevant
+ * due to alignment considerations the mapping might have. The returned order
+ * may be less than the size passed.
+ *
+ * Like __filemap_get_folio order calculation.
+ *
+ * Return: The order.
+ */
+static inline unsigned int
+shmem_mapping_size_order(struct address_space *mapping, pgoff_t index,
+ size_t size, struct shmem_sb_info *sbinfo)
+{
+ unsigned int order = ilog2(size);
+
+ if ((order <= PAGE_SHIFT) ||
+ (!mapping_large_folio_support(mapping) || !sbinfo->noswap))
+ return 0;
+
+ order -= PAGE_SHIFT;
+
+ /* If we're not aligned, allocate a smaller folio */
+ if (index & ((1UL << order) - 1))
+ order = __ffs(index);
+
+ order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
+
+ /* Order-1 not supported due to THP dependency */
+ return (order == 1) ? 0 : order;
+}
+
static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct inode *inode, pgoff_t index,
struct mm_struct *fault_mm, size_t len)
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- unsigned int order = 0;
+ unsigned int order = shmem_mapping_size_order(mapping, index, len,
+ SHMEM_SB(inode->i_sb));
struct folio *folio;
long pages;
int error;
+neworder:
if (gfp & VM_HUGEPAGE) {
pages = HPAGE_PMD_NR;
index = round_down(index, HPAGE_PMD_NR);
@@ -1721,6 +1763,11 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
unlock:
folio_unlock(folio);
folio_put(folio);
+ if (order != 0) {
+ if (--order == 1)
+ order = 0;
+ goto neworder;
+ }
return ERR_PTR(error);
}
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 10/11] shmem: add large folio support to the write path
2023-10-28 21:15 ` [RFC PATCH 10/11] shmem: add large folio support to the write path Daniel Gomez
@ 2023-10-29 23:32 ` Matthew Wilcox
0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 23:32 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:50PM +0000, Daniel Gomez wrote:
> +++ b/mm/shmem.c
> @@ -1621,6 +1621,9 @@ static struct folio *shmem_alloc_folio(gfp_t gfp, struct shmem_inode_info *info,
> pgoff_t ilx;
> struct page *page;
>
> + if ((order != 0) && !(gfp & VM_HUGEPAGE))
> + gfp |= __GFP_COMP;
This is silly. Just set it unconditionally.
> +static inline unsigned int
> +shmem_mapping_size_order(struct address_space *mapping, pgoff_t index,
> + size_t size, struct shmem_sb_info *sbinfo)
> +{
> + unsigned int order = ilog2(size);
> +
> + if ((order <= PAGE_SHIFT) ||
> + (!mapping_large_folio_support(mapping) || !sbinfo->noswap))
> + return 0;
> +
> + order -= PAGE_SHIFT;
You know we have get_order(), right?
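For comparison, a version of the helper built around get_order() might look like the sketch below (untested; note that get_order() rounds the size up to the next order where the ilog2()-based version rounds down, and the noswap check from the posted patch is left out for brevity):

```c
static inline unsigned int
shmem_mapping_size_order(struct address_space *mapping, pgoff_t index,
			 size_t size)
{
	unsigned int order;

	if (size <= PAGE_SIZE || !mapping_large_folio_support(mapping))
		return 0;

	order = get_order(size);	/* rounds up, unlike ilog2() */

	/* If we're not aligned, allocate a smaller folio */
	if (index & ((1UL << order) - 1))
		order = __ffs(index);

	order = min_t(unsigned int, order, MAX_PAGECACHE_ORDER);

	/* Order-1 not supported due to THP dependency */
	return (order == 1) ? 0 : order;
}
```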
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC PATCH 11/11] shmem: add per-block uptodate tracking
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (9 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 10/11] shmem: add large folio support to the write path Daniel Gomez
@ 2023-10-28 21:15 ` Daniel Gomez
2023-10-29 20:43 ` [RFC PATCH 00/11] shmem: high order folios support in write path Matthew Wilcox
11 siblings, 0 replies; 30+ messages in thread
From: Daniel Gomez @ 2023-10-28 21:15 UTC (permalink / raw)
To: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, willy@infradead.org, hughd@google.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: gost.dev@samsung.com, Pankaj Raghav, Daniel Gomez
Current work in progress due to an fsx regression (see below).
Based on the iomap per-block dirty and uptodate state tracking, add a
shmem_folio_state struct to track per-block uptodate state when a folio
is larger than a block. In shmem this is the case when large folios are
used, as one block equals one page in this context.
Add support for the invalidate_folio, release_folio and
is_partially_uptodate address space operations. The first two are needed
to be able to free the new shmem_folio_state struct. The last callback
is required for large folios when enabling per-block tracking.
This was spotted when running fstests for tmpfs: the generic/285 and
generic/436 tests [1] regress with large folios support in the fallocate
path but without per-block uptodate tracking.
[1] tests:
generic/285: src/seek_sanity_test/test09()
generic/436: src/seek_sanity_test/test13()
How to reproduce:
```sh
mkdir -p /mnt/test-tmpfs
./src/seek_sanity_test -s 9 -e 9 /mnt/test-tmpfs/file
./src/seek_sanity_test -s 13 -e 13 /mnt/test-tmpfs/file
umount /mnt/test-tmpfs
```
After per-block uptodate support is added, an fsx regression is found
when running the following:
```sh
mkdir -p /mnt/test-tmpfs
mount -t tmpfs -o size=1G -o noswap tmpfs /mnt/test-tmpfs
/root/xfstests-dev/ltp/fsx /mnt/test-tmpfs/file -d -N 1200 -X
umount /mnt/test-tmpfs
```
Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
---
mm/shmem.c | 169 +++++++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 159 insertions(+), 10 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index eb314927be78..fa67594495d5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -132,6 +132,94 @@ struct shmem_options {
#define SHMEM_SEEN_QUOTA 32
};
+/*
+ * Structure allocated for each folio to track per-block uptodate state.
+ *
+ * Like buffered-io shmem_folio_state struct but only for uptodate.
+ */
+struct shmem_folio_state {
+ spinlock_t state_lock;
+ unsigned long state[];
+};
+
+static inline bool sfs_is_fully_uptodate(struct folio *folio,
+ struct shmem_folio_state *sfs)
+{
+ struct inode *inode = folio->mapping->host;
+
+ return bitmap_full(sfs->state, i_blocks_per_folio(inode, folio));
+}
+
+static inline bool sfs_block_is_uptodate(struct shmem_folio_state *sfs,
+ unsigned int block)
+{
+ return test_bit(block, sfs->state);
+}
+
+static void sfs_set_range_uptodate(struct folio *folio,
+ struct shmem_folio_state *sfs, size_t off,
+ size_t len)
+{
+ struct inode *inode = folio->mapping->host;
+ unsigned int first_blk = off >> inode->i_blkbits;
+ unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
+ unsigned int nr_blks = last_blk - first_blk + 1;
+ unsigned long flags;
+
+ spin_lock_irqsave(&sfs->state_lock, flags);
+ bitmap_set(sfs->state, first_blk, nr_blks);
+ if (sfs_is_fully_uptodate(folio, sfs))
+ folio_mark_uptodate(folio);
+ spin_unlock_irqrestore(&sfs->state_lock, flags);
+}
+
+static void shmem_set_range_uptodate(struct folio *folio, size_t off,
+ size_t len)
+{
+ struct shmem_folio_state *sfs = folio->private;
+
+ if (sfs)
+ sfs_set_range_uptodate(folio, sfs, off, len);
+ else
+ folio_mark_uptodate(folio);
+}
+
+static struct shmem_folio_state *sfs_alloc(struct inode *inode,
+ struct folio *folio, gfp_t gfp)
+{
+ struct shmem_folio_state *sfs = folio->private;
+ unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
+
+ if (sfs || nr_blocks <= 1)
+ return sfs;
+
+ /*
+ * sfs->state tracks uptodate flag when the block size is smaller
+ * than the folio size.
+ */
+ sfs = kzalloc(struct_size(sfs, state, BITS_TO_LONGS(nr_blocks)), gfp);
+ if (!sfs)
+ return sfs;
+
+ spin_lock_init(&sfs->state_lock);
+ if (folio_test_uptodate(folio))
+ bitmap_set(sfs->state, 0, nr_blocks);
+ folio_attach_private(folio, sfs);
+
+ return sfs;
+}
+
+static void sfs_free(struct folio *folio)
+{
+ struct shmem_folio_state *sfs = folio_detach_private(folio);
+
+ if (!sfs)
+ return;
+ WARN_ON_ONCE(sfs_is_fully_uptodate(folio, sfs) !=
+ folio_test_uptodate(folio));
+ kfree(sfs);
+}
+
#ifdef CONFIG_TMPFS
static unsigned long shmem_default_max_blocks(void)
{
@@ -1495,7 +1583,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
}
folio_zero_range(folio, 0, folio_size(folio));
flush_dcache_folio(folio);
- folio_mark_uptodate(folio);
+ shmem_set_range_uptodate(folio, 0, folio_size(folio));
}
swap = folio_alloc_swap(folio);
@@ -1676,6 +1764,7 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
struct shmem_inode_info *info = SHMEM_I(inode);
unsigned int order = shmem_mapping_size_order(mapping, index, len,
SHMEM_SB(inode->i_sb));
+ struct shmem_folio_state *sfs;
struct folio *folio;
long pages;
int error;
@@ -1755,6 +1844,10 @@ static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
}
}
+ sfs = sfs_alloc(inode, folio, gfp);
+ if (!sfs && i_blocks_per_folio(inode, folio) > 1)
+ goto unlock;
+
trace_mm_shmem_add_to_page_cache(folio);
shmem_recalc_inode(inode, pages, 0);
folio_add_lru(folio);
@@ -1818,7 +1911,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
__folio_set_locked(new);
__folio_set_swapbacked(new);
- folio_mark_uptodate(new);
+ shmem_set_range_uptodate(new, 0, folio_size(new));
new->swap = entry;
folio_set_swapcache(new);
@@ -2146,7 +2239,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
for (i = 0; i < n; i++)
clear_highpage(folio_page(folio, i));
flush_dcache_folio(folio);
- folio_mark_uptodate(folio);
+ shmem_set_range_uptodate(folio, 0, folio_size(folio));
}
/* Perhaps the file has been truncated since we checked */
@@ -2788,13 +2881,18 @@ shmem_write_end(struct file *file, struct address_space *mapping,
if (pos + copied > inode->i_size)
i_size_write(inode, pos + copied);
+ if (unlikely(copied < len && !folio_test_uptodate(folio)))
+ return 0;
+
if (!folio_test_uptodate(folio)) {
- if (copied < folio_size(folio)) {
- size_t from = offset_in_folio(folio, pos);
- folio_zero_segments(folio, 0, from,
- from + copied, folio_size(folio));
- }
- folio_mark_uptodate(folio);
+ size_t from = offset_in_folio(folio, pos);
+ if (!folio_test_large(folio) && copied < folio_size(folio))
+ folio_zero_segments(folio, 0, from, from + copied,
+ folio_size(folio));
+ if (folio_test_large(folio) && copied < PAGE_SIZE)
+ folio_zero_segments(folio, from, from, from + copied,
+ folio_size(folio));
+ shmem_set_range_uptodate(folio, from, len);
}
folio_mark_dirty(folio);
folio_unlock(folio);
@@ -2803,6 +2901,54 @@ shmem_write_end(struct file *file, struct address_space *mapping,
return copied;
}
+void shmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
+{
+ /*
+ * If we're invalidating the entire folio, clear the dirty state
+ * from it and release it to avoid unnecessary buildup of the LRU.
+ */
+ if (offset == 0 && len == folio_size(folio)) {
+ WARN_ON_ONCE(folio_test_writeback(folio));
+ folio_cancel_dirty(folio);
+ sfs_free(folio);
+ }
+}
+
+bool shmem_release_folio(struct folio *folio, gfp_t gfp_flags)
+{
+ sfs_free(folio);
+ return true;
+}
+
+/*
+ * shmem_is_partially_uptodate checks whether blocks within a folio are
+ * uptodate or not.
+ *
+ * Returns true if all blocks which correspond to the specified part
+ * of the folio are uptodate.
+ */
+bool shmem_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
+{
+ struct shmem_folio_state *sfs = folio->private;
+ struct inode *inode = folio->mapping->host;
+ unsigned first, last, i;
+
+ if (!sfs)
+ return false;
+
+ /* Caller's range may extend past the end of this folio */
+ count = min(folio_size(folio) - from, count);
+
+ /* First and last blocks in range within folio */
+ first = from >> inode->i_blkbits;
+ last = (from + count - 1) >> inode->i_blkbits;
+
+ for (i = first; i <= last; i++)
+ if (!sfs_block_is_uptodate(sfs, i))
+ return false;
+ return true;
+}
+
static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct file *file = iocb->ki_filp;
@@ -3554,7 +3700,7 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
inode->i_mapping->a_ops = &shmem_aops;
inode->i_op = &shmem_symlink_inode_operations;
memcpy(folio_address(folio), symname, len);
- folio_mark_uptodate(folio);
+ shmem_set_range_uptodate(folio, 0, folio_size(folio));
folio_mark_dirty(folio);
folio_unlock(folio);
folio_put(folio);
@@ -4524,6 +4670,9 @@ const struct address_space_operations shmem_aops = {
#ifdef CONFIG_MIGRATION
.migrate_folio = migrate_folio,
#endif
+ .invalidate_folio = shmem_invalidate_folio,
+ .release_folio = shmem_release_folio,
+ .is_partially_uptodate = shmem_is_partially_uptodate,
.error_remove_page = shmem_error_remove_page,
};
EXPORT_SYMBOL(shmem_aops);
--
2.39.2
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [RFC PATCH 00/11] shmem: high order folios support in write path
2023-10-28 21:15 ` [RFC PATCH 00/11] shmem: high order folios support in " Daniel Gomez
` (10 preceding siblings ...)
2023-10-28 21:15 ` [RFC PATCH 11/11] shmem: add per-block uptodate tracking Daniel Gomez
@ 2023-10-29 20:43 ` Matthew Wilcox
11 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2023-10-29 20:43 UTC (permalink / raw)
To: Daniel Gomez
Cc: minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk,
djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org,
mcgrof@kernel.org, linux-kernel@vger.kernel.org,
linux-block@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com, Pankaj Raghav
On Sat, Oct 28, 2023 at 09:15:34PM +0000, Daniel Gomez wrote:
> This series try to add support for high order folios in shmem write and
> fallocate paths when swap is disabled (noswap option). This is part of the
> Large Block Size (LBS) effort [1][2] and a continuation of the shmem work from
> Luis here [3] following Matthew Wilcox's suggestion [4] regarding the path to
> take for the folio allocation order calculation.
I don't see how this is part of the LBS effort. shmem doesn't use a
block device. swap might, but that's a separate problem, as you've
pointed out.
^ permalink raw reply [flat|nested] 30+ messages in thread