* [PATCH v8 0/5] block: add larger order folio instead of pages
[not found] <CGME20240711051521epcas5p348f2cd84a1a80577754929143255352b@epcas5p3.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
[not found] ` <CGME20240711051529epcas5p1aaf694dfa65859b8a32bdffce5239bf6@epcas5p1.samsung.com>
` (5 more replies)
0 siblings, 6 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
User space memory is mapped into the kernel as an array of pages. These
pages are iterated over and added to the bio; in the process, pages are
also checked for contiguity and merged.
When mTHP is enabled, the pages generally belong to a larger-order folio.
This patch series enables adding a large folio to a bio: for each page in
the page array it fetches the containing folio. The page may start at an
offset within the folio, which can be a multiple of PAGE_SIZE, and
subsequent pages in the page array may belong to the same folio. Using the
folio size, the folio offset and the remaining request size, it determines
the length within the folio that can be added to the bio. It then checks
that the pages are contiguous and belong to the same folio; if so, further
processing is skipped for those contiguous pages.
This scheme reduces the overhead of iterating through pages.
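In essence, the per-page inner loop of __bio_iov_iter_get_pages() becomes
a per-folio one. A minimal sketch, distilled from patch 3 (error handling
and the zone-append path omitted):

	for (left = size, i = 0; left > 0; left -= len, i++) {
		struct page *page = pages[i];
		struct folio *folio = page_folio(page);
		size_t folio_offset = ((size_t)folio_page_idx(folio, page) <<
					PAGE_SHIFT) + offset;

		/* take as much of the folio as the request still needs */
		len = min_t(size_t, folio_size(folio) - folio_offset, left);
		num_pages = DIV_ROUND_UP(offset + len, PAGE_SIZE);

		/* a COW page in the middle shortens the contiguous run */
		if (num_pages > 1)
			len = get_contig_folio_len(&num_pages, pages, i,
						   folio, left, offset);

		bio_iov_add_folio(bio, folio, len, folio_offset);
		i += num_pages - 1;	/* skip the pages just covered */
		offset = 0;
	}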
perf diff before and after this change (with mTHP enabled):
Perf diff for write I/O with 128K block size:
1.24% -0.20% [kernel.kallsyms] [k] bio_iov_iter_get_pages
1.71% [kernel.kallsyms] [k] bvec_try_merge_page
Perf diff for read I/O with 128K block size:
4.03% -1.59% [kernel.kallsyms] [k] bio_iov_iter_get_pages
5.14% [kernel.kallsyms] [k] bvec_try_merge_page
Patch 1: Adds folio-ized version of bvec_try_merge_hw_page()
Patch 2: Adds folio-ized version of bio_add_hw_page()
Patch 3: Adds bigger size from folio to BIO
Patch 4: Adds mm function to release n pages of folio
Patch 5: Unpin user pages belonging to folio at once using the mm
function
Changes since v7:
- corrected an issue found by the kernel test robot, fixed by avoiding
  updating same_page for different pages of a folio
- changed folio-lise to folio-ize
- reordered so that the wrapper definition comes after the guts
- removed the check for BIO_PAGE_PINNED
- separated the mm-related implementation into a different patch
Changes since v6:
- folio-lize bvec_try_merge_hw_page() to bvec_try_merge_hw_folio()
- restructured the code as per comments
- typecast with size_t while calculating the offset in folio.
Changes since v5:
- Made offset and len as size_t in function bio_add_hw_folio()
- Avoid unpinning skipped pages at submission, rather unpin all pages at
once on IO completion
Changes since v4:
- folio-lize bio_add_hw_page() to bio_add_hw_folio()
- make bio_add_hw_page() as a wrapper around bio_add_hw_folio()
- made a helper function to check for contiguous pages of folio
- changed &folio->page to folio_page(folio, 0)
- reworded comments
Changes since v3:
- Added change to see if pages are contiguous and belong to the same folio.
  If not, then avoid skipping of pages. (Suggested by Matthew Wilcox)
Changes since v2:
- Made separate patches
- Corrected code as per kernel coding style
- Removed size_folio variable
Changes since v1:
- Changed functions bio_iov_add_page() and bio_iov_add_zone_append_page()
to accept a folio
- Removed branch and calculate folio_offset and len in same fashion for
both 0 order and larger folios
- Added change in NVMe driver to use nvme_setup_prp_simple() by
ignoring multiples of PAGE_SIZE in offset
- Added a change to unpin_user_pages which were added as folios. Also
stopped the unpin of pages one by one from __bio_release_pages()
(Suggested by Keith)
Kundan Kumar (5):
block: Added folio-ized version of bvec_try_merge_hw_page()
block: Added folio-ized version of bio_add_hw_page()
block: introduce folio awareness and add a bigger size from folio
mm: release number of pages of a folio
block: unpin user pages belonging to a folio at once
block/bio.c | 156 ++++++++++++++++++++++++++++++++++-----------
block/blk.h | 14 ++++
include/linux/mm.h | 1 +
mm/gup.c | 13 ++++
4 files changed, 147 insertions(+), 37 deletions(-)
--
2.25.1
* [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page()
[not found] ` <CGME20240711051529epcas5p1aaf694dfa65859b8a32bdffce5239bf6@epcas5p1.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
2024-08-17 4:23 ` Matthew Wilcox
0 siblings, 1 reply; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
Added new function bvec_try_merge_hw_folio(). This is a prep patch.
Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
block/bio.c | 27 +++++++++++++++++++++++----
block/blk.h | 4 ++++
2 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index c4053d49679a..7018eded8d7b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -931,7 +931,9 @@ static bool bvec_try_merge_page(struct bio_vec *bv, struct page *page,
if (!zone_device_pages_have_same_pgmap(bv->bv_page, page))
return false;
- *same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+ if (off < PAGE_SIZE)
+ *same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+
if (!*same_page) {
if (IS_ENABLED(CONFIG_KMSAN))
return false;
@@ -944,14 +946,15 @@ static bool bvec_try_merge_page(struct bio_vec *bv, struct page *page,
}
/*
- * Try to merge a page into a segment, while obeying the hardware segment
+ * Try to merge a folio into a segment, while obeying the hardware segment
* size limit. This is not for normal read/write bios, but for passthrough
* or Zone Append operations that we can't split.
*/
-bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
- struct page *page, unsigned len, unsigned offset,
+bool bvec_try_merge_hw_folio(struct request_queue *q, struct bio_vec *bv,
+ struct folio *folio, size_t len, size_t offset,
bool *same_page)
{
+ struct page *page = folio_page(folio, 0);
unsigned long mask = queue_segment_boundary(q);
phys_addr_t addr1 = bvec_phys(bv);
phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
@@ -963,6 +966,22 @@ bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
return bvec_try_merge_page(bv, page, len, offset, same_page);
}
+/*
+ * Try to merge a page into a segment, while obeying the hardware segment
+ * size limit. This is not for normal read/write bios, but for passthrough
+ * or Zone Append operations that we can't split.
+ */
+bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
+ struct page *page, unsigned int len, unsigned int offset,
+ bool *same_page)
+{
+ struct folio *folio = page_folio(page);
+
+ return bvec_try_merge_hw_folio(q, bv, folio, len,
+ ((size_t)folio_page_idx(folio, page) << PAGE_SHIFT) +
+ offset, same_page);
+}
+
/**
* bio_add_hw_page - attempt to add a page to a bio with hw constraints
* @q: the target queue
diff --git a/block/blk.h b/block/blk.h
index e180863f918b..6a566b78a0a8 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -91,6 +91,10 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
gfp_t gfp_mask);
void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);
+bool bvec_try_merge_hw_folio(struct request_queue *q, struct bio_vec *bv,
+ struct folio *folio, size_t len, size_t offset,
+ bool *same_page);
+
bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
struct page *page, unsigned len, unsigned offset,
bool *same_page);
--
2.25.1
* [PATCH v8 2/5] block: Added folio-ized version of bio_add_hw_page()
[not found] ` <CGME20240711051532epcas5p360e92f46b86eca56e73643a71950783a@epcas5p3.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
0 siblings, 0 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
Added new bio_add_hw_folio() function. This is a prep patch.
Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
block/bio.c | 46 ++++++++++++++++++++++++++++++++++------------
block/blk.h | 4 ++++
2 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 7018eded8d7b..a7442f4bbfc6 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -983,20 +983,20 @@ bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
}
/**
- * bio_add_hw_page - attempt to add a page to a bio with hw constraints
+ * bio_add_hw_folio - attempt to add a folio to a bio with hw constraints
* @q: the target queue
* @bio: destination bio
- * @page: page to add
+ * @folio: folio to add
* @len: vec entry length
- * @offset: vec entry offset
+ * @offset: vec entry offset in the folio
* @max_sectors: maximum number of sectors that can be added
- * @same_page: return if the segment has been merged inside the same page
+ * @same_page: return if the segment has been merged inside the same folio
*
- * Add a page to a bio while respecting the hardware max_sectors, max_segment
+ * Add a folio to a bio while respecting the hardware max_sectors, max_segment
* and gap limitations.
*/
-int bio_add_hw_page(struct request_queue *q, struct bio *bio,
- struct page *page, unsigned int len, unsigned int offset,
+int bio_add_hw_folio(struct request_queue *q, struct bio *bio,
+ struct folio *folio, size_t len, size_t offset,
unsigned int max_sectors, bool *same_page)
{
unsigned int max_size = max_sectors << SECTOR_SHIFT;
@@ -1011,8 +1011,8 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
if (bio->bi_vcnt > 0) {
struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
- if (bvec_try_merge_hw_page(q, bv, page, len, offset,
- same_page)) {
+ if (bvec_try_merge_hw_folio(q, bv, folio, len, offset,
+ same_page)) {
bio->bi_iter.bi_size += len;
return len;
}
@@ -1029,12 +1029,34 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
return 0;
}
- bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, offset);
- bio->bi_vcnt++;
- bio->bi_iter.bi_size += len;
+ bio_add_folio_nofail(bio, folio, len, offset);
return len;
}
+/**
+ * bio_add_hw_page - attempt to add a page to a bio with hw constraints
+ * @q: the target queue
+ * @bio: destination bio
+ * @page: page to add
+ * @len: vec entry length
+ * @offset: vec entry offset
+ * @max_sectors: maximum number of sectors that can be added
+ * @same_page: return if the segment has been merged inside the same page
+ *
+ * Add a page to a bio while respecting the hardware max_sectors, max_segment
+ * and gap limitations.
+ */
+int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+ struct page *page, unsigned int len, unsigned int offset,
+ unsigned int max_sectors, bool *same_page)
+{
+ struct folio *folio = page_folio(page);
+
+ return bio_add_hw_folio(q, bio, folio, len,
+ ((size_t)folio_page_idx(folio, page) << PAGE_SHIFT) +
+ offset, max_sectors, same_page);
+}
+
/**
* bio_add_pc_page - attempt to add page to passthrough bio
* @q: the target queue
diff --git a/block/blk.h b/block/blk.h
index 6a566b78a0a8..777e1486f0de 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -540,6 +540,10 @@ void bdev_set_nr_sectors(struct block_device *bdev, sector_t sectors);
struct gendisk *__alloc_disk_node(struct request_queue *q, int node_id,
struct lock_class_key *lkclass);
+int bio_add_hw_folio(struct request_queue *q, struct bio *bio,
+ struct folio *folio, size_t len, size_t offset,
+ unsigned int max_sectors, bool *same_page);
+
int bio_add_hw_page(struct request_queue *q, struct bio *bio,
struct page *page, unsigned int len, unsigned int offset,
unsigned int max_sectors, bool *same_page);
--
2.25.1
* [PATCH v8 3/5] block: introduce folio awareness and add a bigger size from folio
[not found] ` <CGME20240711051536epcas5p369e005a83fb16d9f6d9636585cc66e3b@epcas5p3.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
2024-07-25 20:56 ` Anuj gupta
0 siblings, 1 reply; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
Add a bigger size from a folio to the bio and skip merge processing for
its pages.
Fetch the offset of the page within its folio. Depending on the folio size
and the folio offset, fetch a larger length. This length may consist of
multiple contiguous pages if the folio is multi-order.
Using this length, calculate the number of pages that will be added to the
bio and increment the loop counter to skip those pages.
Using a helper function, check whether the pages are contiguous and belong
to the same folio; this is done because a COW may happen and change the
contiguous mapping of the folio's pages.
This technique avoids the overhead of merging pages that belong to the
same large-order folio.
Also folio-ize the functions bio_iov_add_page() and
bio_iov_add_zone_append_page().
Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
block/bio.c | 77 ++++++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 61 insertions(+), 16 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index a7442f4bbfc6..b4df3af3e303 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1245,8 +1245,8 @@ void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
bio_set_flag(bio, BIO_CLONED);
}
-static int bio_iov_add_page(struct bio *bio, struct page *page,
- unsigned int len, unsigned int offset)
+static int bio_iov_add_folio(struct bio *bio, struct folio *folio, size_t len,
+ size_t offset)
{
bool same_page = false;
@@ -1255,30 +1255,61 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
if (bio->bi_vcnt > 0 &&
bvec_try_merge_page(&bio->bi_io_vec[bio->bi_vcnt - 1],
- page, len, offset, &same_page)) {
+ folio_page(folio, 0), len, offset,
+ &same_page)) {
bio->bi_iter.bi_size += len;
if (same_page)
- bio_release_page(bio, page);
+ bio_release_page(bio, folio_page(folio, 0));
return 0;
}
- __bio_add_page(bio, page, len, offset);
+ bio_add_folio_nofail(bio, folio, len, offset);
return 0;
}
-static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
- unsigned int len, unsigned int offset)
+static int bio_iov_add_zone_append_folio(struct bio *bio, struct folio *folio,
+ size_t len, size_t offset)
{
struct request_queue *q = bdev_get_queue(bio->bi_bdev);
bool same_page = false;
- if (bio_add_hw_page(q, bio, page, len, offset,
+ if (bio_add_hw_folio(q, bio, folio, len, offset,
queue_max_zone_append_sectors(q), &same_page) != len)
return -EINVAL;
if (same_page)
- bio_release_page(bio, page);
+ bio_release_page(bio, folio_page(folio, 0));
return 0;
}
+static unsigned int get_contig_folio_len(unsigned int *num_pages,
+ struct page **pages, unsigned int i,
+ struct folio *folio, size_t left,
+ size_t offset)
+{
+ size_t bytes = left;
+ size_t contig_sz = min_t(size_t, PAGE_SIZE - offset, bytes);
+ unsigned int j;
+
+ /*
+ * We might COW a single page in the middle of
+ * a large folio, so we have to check that all
+ * pages belong to the same folio.
+ */
+ bytes -= contig_sz;
+ for (j = i + 1; j < i + *num_pages; j++) {
+ size_t next = min_t(size_t, PAGE_SIZE, bytes);
+
+ if (page_folio(pages[j]) != folio ||
+ pages[j] != pages[j - 1] + 1) {
+ break;
+ }
+ contig_sz += next;
+ bytes -= next;
+ }
+ *num_pages = j - i;
+
+ return contig_sz;
+}
+
#define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))
/**
@@ -1298,9 +1329,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
- ssize_t size, left;
- unsigned len, i = 0;
- size_t offset;
+ ssize_t size;
+ unsigned int i = 0, num_pages;
+ size_t offset, folio_offset, left, len;
int ret = 0;
/*
@@ -1342,15 +1373,29 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
for (left = size, i = 0; left > 0; left -= len, i++) {
struct page *page = pages[i];
+ struct folio *folio = page_folio(page);
+
+ folio_offset = ((size_t)folio_page_idx(folio, page) <<
+ PAGE_SHIFT) + offset;
+
+ len = min_t(size_t, (folio_size(folio) - folio_offset), left);
+
+ num_pages = DIV_ROUND_UP(offset + len, PAGE_SIZE);
+
+ if (num_pages > 1)
+ len = get_contig_folio_len(&num_pages, pages, i,
+ folio, left, offset);
- len = min_t(size_t, PAGE_SIZE - offset, left);
if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
- ret = bio_iov_add_zone_append_page(bio, page, len,
- offset);
+ ret = bio_iov_add_zone_append_folio(bio, folio, len,
+ folio_offset);
if (ret)
break;
} else
- bio_iov_add_page(bio, page, len, offset);
+ bio_iov_add_folio(bio, folio, len, folio_offset);
+
+ /* Skip the pages which got added */
+ i = i + (num_pages - 1);
offset = 0;
}
--
2.25.1
* [PATCH v8 4/5] mm: release number of pages of a folio
[not found] ` <CGME20240711051539epcas5p2842f564375dc2f1c3de1a869c938436b@epcas5p2.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
0 siblings, 0 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
Add a new function unpin_user_folio() to release npages refs of a folio.
Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
---
include/linux/mm.h | 1 +
mm/gup.c | 13 +++++++++++++
2 files changed, 14 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb7c96d24ac0..fcf9b6263326 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1623,6 +1623,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
bool make_dirty);
void unpin_user_pages(struct page **pages, unsigned long npages);
+void unpin_user_folio(struct folio *folio, unsigned long npages);
static inline bool is_cow_mapping(vm_flags_t flags)
{
diff --git a/mm/gup.c b/mm/gup.c
index ca0f5cedce9b..bc96efa43d1b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -488,6 +488,19 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
}
EXPORT_SYMBOL(unpin_user_pages);
+/**
+ * unpin_user_folio() - release pages of a folio
+ * @folio: pointer to folio to be released
+ * @npages: number of pages of same folio
+ *
+ * Release npages of the folio
+ */
+void unpin_user_folio(struct folio *folio, unsigned long npages)
+{
+ gup_put_folio(folio, npages, FOLL_PIN);
+}
+EXPORT_SYMBOL(unpin_user_folio);
+
/*
* Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
* lifecycle. Avoid setting the bit unless necessary, or it might cause write
--
2.25.1
* [PATCH v8 5/5] block: unpin user pages belonging to a folio at once
[not found] ` <CGME20240711051543epcas5p364f770974e2367d27c685a626cc9dbb5@epcas5p3.samsung.com>
@ 2024-07-11 5:07 ` Kundan Kumar
2024-07-18 6:17 ` Kundan Kumar
` (2 more replies)
0 siblings, 3 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-07-11 5:07 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev, Kundan Kumar
Add a new wrapper bio_release_folio() and use it to put refs by npages
count.
Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
---
block/bio.c | 6 +-----
block/blk.h | 6 ++++++
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index b4df3af3e303..ca249f2c99a7 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1207,7 +1207,6 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
struct folio_iter fi;
bio_for_each_folio_all(fi, bio) {
- struct page *page;
size_t nr_pages;
if (mark_dirty) {
@@ -1215,12 +1214,9 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
folio_mark_dirty(fi.folio);
folio_unlock(fi.folio);
}
- page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
fi.offset / PAGE_SIZE + 1;
- do {
- bio_release_page(bio, page++);
- } while (--nr_pages != 0);
+ bio_release_folio(bio, fi.folio, nr_pages);
}
}
EXPORT_SYMBOL_GPL(__bio_release_pages);
diff --git a/block/blk.h b/block/blk.h
index 777e1486f0de..8e266f0ace2b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -558,6 +558,12 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
unpin_user_page(page);
}
+static inline void bio_release_folio(struct bio *bio, struct folio *folio,
+ unsigned long npages)
+{
+ unpin_user_folio(folio, npages);
+}
+
struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id);
int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
--
2.25.1
* Re: [PATCH v8 5/5] block: unpin user pages belonging to a folio at once
2024-07-11 5:07 ` [PATCH v8 5/5] block: unpin user pages belonging to a folio at once Kundan Kumar
@ 2024-07-18 6:17 ` Kundan Kumar
2024-07-25 20:40 ` Anuj gupta
2024-08-17 4:36 ` Matthew Wilcox
2 siblings, 0 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-07-18 6:17 UTC (permalink / raw)
To: axboe, hch, willy, kbusch
Cc: linux-block, joshi.k, mcgrof, anuj20.g, nj.shetty, c.gameti,
gost.dev
On 11/07/24 10:37AM, Kundan Kumar wrote:
>Add a new wrapper bio_release_folio() and use it to put refs by npages
>count.
>
>Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
>---
> block/bio.c | 6 +-----
> block/blk.h | 6 ++++++
> 2 files changed, 7 insertions(+), 5 deletions(-)
>
>diff --git a/block/bio.c b/block/bio.c
>index b4df3af3e303..ca249f2c99a7 100644
>--- a/block/bio.c
>+++ b/block/bio.c
>@@ -1207,7 +1207,6 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
> struct folio_iter fi;
>
> bio_for_each_folio_all(fi, bio) {
>- struct page *page;
> size_t nr_pages;
>
> if (mark_dirty) {
>@@ -1215,12 +1214,9 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
> folio_mark_dirty(fi.folio);
> folio_unlock(fi.folio);
> }
>- page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
> nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
> fi.offset / PAGE_SIZE + 1;
>- do {
>- bio_release_page(bio, page++);
>- } while (--nr_pages != 0);
>+ bio_release_folio(bio, fi.folio, nr_pages);
> }
> }
> EXPORT_SYMBOL_GPL(__bio_release_pages);
>diff --git a/block/blk.h b/block/blk.h
>index 777e1486f0de..8e266f0ace2b 100644
>--- a/block/blk.h
>+++ b/block/blk.h
>@@ -558,6 +558,12 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
> unpin_user_page(page);
> }
>
>+static inline void bio_release_folio(struct bio *bio, struct folio *folio,
>+ unsigned long npages)
>+{
>+ unpin_user_folio(folio, npages);
>+}
>+
> struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id);
>
> int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
Hi Christoph, Willy, a gentle ping here. Any comments on the v8 patchset?
Thanks,
Kundan
>--
>2.25.1
>
>
* Re: [PATCH v8 5/5] block: unpin user pages belonging to a folio at once
2024-07-11 5:07 ` [PATCH v8 5/5] block: unpin user pages belonging to a folio at once Kundan Kumar
2024-07-18 6:17 ` Kundan Kumar
@ 2024-07-25 20:40 ` Anuj gupta
2024-08-17 4:36 ` Matthew Wilcox
2 siblings, 0 replies; 18+ messages in thread
From: Anuj gupta @ 2024-07-25 20:40 UTC (permalink / raw)
To: Kundan Kumar
Cc: axboe, hch, willy, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Thu, Jul 11, 2024 at 10:50 AM Kundan Kumar <kundan.kumar@samsung.com> wrote:
>
> @@ -1215,12 +1214,9 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
> folio_mark_dirty(fi.folio);
> folio_unlock(fi.folio);
> }
> - page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
> nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
> fi.offset / PAGE_SIZE + 1;
> - do {
> - bio_release_page(bio, page++);
> - } while (--nr_pages != 0);
> + bio_release_folio(bio, fi.folio, nr_pages);
Wouldn't it be better to use unpin_user_folio (introduced in the previous
patch) here, rather than using bio_release_folio?
> }
> }
> EXPORT_SYMBOL_GPL(__bio_release_pages);
> diff --git a/block/blk.h b/block/blk.h
> index 777e1486f0de..8e266f0ace2b 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -558,6 +558,12 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
> unpin_user_page(page);
> }
>
> +static inline void bio_release_folio(struct bio *bio, struct folio *folio,
> + unsigned long npages)
> +{
> + unpin_user_folio(folio, npages);
> +}
> +
This function takes bio as a parameter but doesn't really use it. Also, if
we use unpin_user_folio at the previous place, we wouldn't really need to
introduce this function.
> struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id);
>
> int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
> --
> 2.25.1
>
>
Could you give this series a respin against the latest for-next? Some
patches don't apply cleanly now.
--
Anuj Gupta
* Re: [PATCH v8 3/5] block: introduce folio awareness and add a bigger size from folio
2024-07-11 5:07 ` [PATCH v8 3/5] block: introduce folio awareness and add a bigger size from folio Kundan Kumar
@ 2024-07-25 20:56 ` Anuj gupta
0 siblings, 0 replies; 18+ messages in thread
From: Anuj gupta @ 2024-07-25 20:56 UTC (permalink / raw)
To: Kundan Kumar
Cc: axboe, hch, willy, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Thu, Jul 11, 2024 at 10:50 AM Kundan Kumar <kundan.kumar@samsung.com> wrote:
>
> Using a helper function check if pages are contiguous and belong to same
> folio, this is done as a COW may happen and change contiguous mapping of
> pages of folio.
>
You can consider removing this part in the commit description, as the same
comment is present below in the code as well.
> #define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))
>
> /**
> @@ -1298,9 +1329,9 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> struct page **pages = (struct page **)bv;
> - ssize_t size, left;
> - unsigned len, i = 0;
> - size_t offset;
> + ssize_t size;
> + unsigned int i = 0, num_pages;
> + size_t offset, folio_offset, left, len;
> int ret = 0;
>
> /*
> @@ -1342,15 +1373,29 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>
> for (left = size, i = 0; left > 0; left -= len, i++) {
Can we do an i += num_pages here, given that you are incrementing i by 1
here and by (num_pages - 1) at the end of the loop? This way you can get
rid of the increment that you are doing at the end of the loop.
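Something like this (untested sketch):

	for (left = size, i = 0; left > 0; left -= len, i += num_pages) {
		...
		num_pages = DIV_ROUND_UP(offset + len, PAGE_SIZE);
		...
		/* no "i = i + (num_pages - 1)" at the end of the body */
		offset = 0;
	}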
> struct page *page = pages[i];
> + struct folio *folio = page_folio(page);
> +
> + folio_offset = ((size_t)folio_page_idx(folio, page) <<
> + PAGE_SHIFT) + offset;
> +
> + len = min_t(size_t, (folio_size(folio) - folio_offset), left);
> +
> + num_pages = DIV_ROUND_UP(offset + len, PAGE_SIZE);
> +
> + if (num_pages > 1)
> + len = get_contig_folio_len(&num_pages, pages, i,
> + folio, left, offset);
>
> - len = min_t(size_t, PAGE_SIZE - offset, left);
> if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
> - ret = bio_iov_add_zone_append_page(bio, page, len,
> - offset);
> + ret = bio_iov_add_zone_append_folio(bio, folio, len,
> + folio_offset);
> if (ret)
> break;
> } else
> - bio_iov_add_page(bio, page, len, offset);
> + bio_iov_add_folio(bio, folio, len, folio_offset);
> +
> + /* Skip the pages which got added */
> + i = i + (num_pages - 1);
>
> offset = 0;
> }
> --
> 2.25.1
>
--
Anuj Gupta
* Re: [PATCH v8 0/5] block: add larger order folio instead of pages
2024-07-11 5:07 ` [PATCH v8 0/5] block: add larger order folio instead of pages Kundan Kumar
` (4 preceding siblings ...)
[not found] ` <CGME20240711051543epcas5p364f770974e2367d27c685a626cc9dbb5@epcas5p3.samsung.com>
@ 2024-08-08 23:04 ` Luis Chamberlain
2024-08-12 13:38 ` Christoph Hellwig
5 siblings, 1 reply; 18+ messages in thread
From: Luis Chamberlain @ 2024-08-08 23:04 UTC (permalink / raw)
To: Kundan Kumar, Daniel Gomez, Pankaj Raghav, kernel
Cc: axboe, hch, willy, kbusch, linux-block, joshi.k, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Thu, Jul 11, 2024 at 10:37:45AM +0530, Kundan Kumar wrote:
> User space memory is mapped in kernel in form of pages array. These pages
> are iterated and added to BIO. In process, pages are also checked for
> contiguity and merged.
>
> When mTHP is enabled the pages generally belong to larger order folio. This
> patch series enables adding large folio to bio. It fetches folio for
> page in the page array. The page might start from an offset in the folio
> which could be multiples of PAGE_SIZE. Subsequent pages in page array
> might belong to same folio. Using the length of folio, folio_offset and
> remaining size, determine length in folio which can be added to the bio.
> Check if pages are contiguous and belong to same folio. If yes then skip
> further processing for the contiguous pages.
>
> This complete scheme reduces the overhead of iterating through pages.
>
> perf diff before and after this change(with mTHP enabled):
>
> Perf diff for write I/O with 128K block size:
> 1.24% -0.20% [kernel.kallsyms] [k] bio_iov_iter_get_pages
> 1.71% [kernel.kallsyms] [k] bvec_try_merge_page
> Perf diff for read I/O with 128K block size:
> 4.03% -1.59% [kernel.kallsyms] [k] bio_iov_iter_get_pages
> 5.14% [kernel.kallsyms] [k] bvec_try_merge_page
This is not just about mTHP use though, this can also affect buffered IO
and direct IO patterns, and that needs to be considered and tested as well.
I've given this a spin on top of the LBS patches [0] and used the LBS
patches as a baseline. The good news is I see a considerable amount of
larger IOs for buffered IO and direct IO; however, for buffered IO there
is an increase in misalignment relative to the target filesystem block
size, and that can affect performance.
You can test this with Daniel Gomez's blkalgn tool for IO introspection:
wget https://raw.githubusercontent.com/dkruces/bcc/lbs/tools/blkalgn.py
mv blkalgn.py /usr/local/bin/
apt-get install python3-bpfcc
And so let's try to make things "bad" by forcing a million small 4k files
on a 64k block size filesystem; we see an increase in 4k-aligned IOs by a
factor of about 2133:
fio -name=1k-files-per-thread --nrfiles=1000 -direct=0 -bs=512 \
-ioengine=io_uring --group_reporting=1 \
--alloc-size=2097152 --filesize=4KiB --readwrite=randwrite \
--fallocate=none --numjobs=1000 --create_on_open=1 --directory=$DIR
# Force any pending IO from the page cache
umount /xfs-64k/
You can use blkalgn with something like this (commands below). The keys on
the left-hand side of its output are log2 orders of the IO size in bytes
("12" is 4k, "16" is 64k, "18" is 256k), so for example we see only six
4k IOs aligned to 4k with the baseline of just LBS on top of
next-20240723. However, with these patches that increases to 11 4k IOs,
while 23,468 IOs end up aligned to only 4k.
mkfs.xfs -f -b size=64k /dev/nvme0n1
blkalgn -d nvme0n1 --ops Write --json-output 64k-next-20240723.json
# Hit CTRL-C after you umount above.
cat 64k-next-20240723.json
{
"Block size": {
"13": 1,
"12": 6,
"18": 244899,
"16": 5236751,
"17": 13088
},
"Algn size": {
"18": 244899,
"12": 6,
"17": 9793,
"13": 1,
"16": 5240047
}
}
And with this series, say 64k-next-20240723-block-folios.json:
{
"Block size": {
"16": 1018244,
"9": 7,
"17": 507163,
"13": 16,
"10": 4,
"15": 51671,
"12": 11,
"14": 43,
"11": 5
},
"Algn size": {
"15": 6651,
"16": 1018244,
"13": 17620,
"12": 23468,
"17": 507163,
"14": 4018
}
}
When using direct IO I see only improvements, since applications typically
do the right thing. So this needs a bit more testing and evaluation of the
impact on alignment for buffered IO.
[0] https://github.com/linux-kdevops/linux/tree/large-block-folio-for-next
Luis
* Re: [PATCH v8 0/5] block: add larger order folio instead of pages
2024-08-08 23:04 ` [PATCH v8 0/5] block: add larger order folio instead of pages Luis Chamberlain
@ 2024-08-12 13:38 ` Christoph Hellwig
2024-08-12 16:35 ` Luis Chamberlain
0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2024-08-12 13:38 UTC (permalink / raw)
To: Luis Chamberlain
Cc: Kundan Kumar, Daniel Gomez, Pankaj Raghav, kernel, axboe, hch,
willy, kbusch, linux-block, joshi.k, anuj20.g, nj.shetty,
c.gameti, gost.dev
On Thu, Aug 08, 2024 at 04:04:03PM -0700, Luis Chamberlain wrote:
> This is not just about mTHP uses though, this can also affect buffered IO and
> direct IO patterns as well and this needs to be considered and tested as well.
Not sure what the above is supposed to mean. Besides small tweaks
to very low-level helpers the changes are entirely in the direct I/O
path, and they optimize that path for folios larger than PAGE_SIZE.
> I've given this a spin on top of of the LBS patches [0] and used the LBS
> patches as a baseline. The good news is I see a considerable amount of
> larger IOs for buffered IO and direct IO, however for buffered IO there
> is an increase on unalignenment to the target filesystem block size and
> that can affect performance.
Compared to what? There is nothing in the series here changing buffered
I/O patterns. What do you compare? If this series changes buffered
I/O patterns that is very well hidden and accidental, so we need to
bisect which patch does it and figure out why, but it would surprise me
a lot.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v8 0/5] block: add larger order folio instead of pages
2024-08-12 13:38 ` Christoph Hellwig
@ 2024-08-12 16:35 ` Luis Chamberlain
2024-08-16 8:45 ` Kundan Kumar
0 siblings, 1 reply; 18+ messages in thread
From: Luis Chamberlain @ 2024-08-12 16:35 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Kundan Kumar, Daniel Gomez, Pankaj Raghav, kernel, axboe, willy,
kbusch, linux-block, joshi.k, anuj20.g, nj.shetty, c.gameti,
gost.dev
On Mon, Aug 12, 2024 at 03:38:43PM +0200, Christoph Hellwig wrote:
> On Thu, Aug 08, 2024 at 04:04:03PM -0700, Luis Chamberlain wrote:
> > This is not just about mTHP uses though, this can also affect buffered IO and
> > direct IO patterns as well and this needs to be considered and tested as well.
>
> Not sure what the above is supposed to mean. Besides small tweaks
> to very low-level helpers the changes are entirely in the direct I/O
> path, and they optimize that path for folios larger than PAGE_SIZE.
Which was my expectation as well.
> > I've given this a spin on top of of the LBS patches [0] and used the LBS
> > patches as a baseline. The good news is I see a considerable amount of
> > larger IOs for buffered IO and direct IO, however for buffered IO there
> > is an increase on unalignenment to the target filesystem block size and
> > that can affect performance.
>
> Compared to what? There is nothing in the series here changing buffered
> I/O patterns. What do you compare? If this series changes buffered
> I/O patterns that is very well hidden and accidental, so we need to
> bisect which patch does it and figure out why, but it would surprise me
> a lot.
The comparison was without the patches vs. with the patches on the same
fio run with buffered IO. I'll re-test more times and bisect.
Thanks,
Luis
* Re: [PATCH v8 0/5] block: add larger order folio instead of pages
2024-08-12 16:35 ` Luis Chamberlain
@ 2024-08-16 8:45 ` Kundan Kumar
2024-08-17 1:22 ` Luis Chamberlain
0 siblings, 1 reply; 18+ messages in thread
From: Kundan Kumar @ 2024-08-16 8:45 UTC (permalink / raw)
To: Luis Chamberlain
Cc: Christoph Hellwig, Daniel Gomez, Pankaj Raghav, kernel, axboe,
willy, kbusch, linux-block, joshi.k, anuj20.g, nj.shetty,
c.gameti, gost.dev
On 12/08/24 09:35AM, Luis Chamberlain wrote:
>On Mon, Aug 12, 2024 at 03:38:43PM +0200, Christoph Hellwig wrote:
>> On Thu, Aug 08, 2024 at 04:04:03PM -0700, Luis Chamberlain wrote:
>> > This is not just about mTHP uses though, this can also affect buffered IO and
>> > direct IO patterns as well and this needs to be considered and tested as well.
>>
>> Not sure what the above is supposed to mean. Besides small tweaks
>> to very low-level helpers the changes are entirely in the direct I/O
>> path, and they optimize that path for folios larger than PAGE_SIZE.
>
>Which was my expectation as well.
>
>> > I've given this a spin on top of of the LBS patches [0] and used the LBS
>> > patches as a baseline. The good news is I see a considerable amount of
>> > larger IOs for buffered IO and direct IO, however for buffered IO there
>> > is an increase on unalignenment to the target filesystem block size and
>> > that can affect performance.
>>
>> Compared to what? There is nothing in the series here changing buffered
>> I/O patterns. What do you compare? If this series changes buffered
>> I/O patterns that is very well hidden and accidental, so we need to
>> bisect which patch does it and figure out why, but it would surprise me
>> a lot.
>
>The comparison was the without the patches Vs with the patches on the
>same fio run with buffered IO. I'll re-test more times and bisect.
>
I did tests with LBS + the block folio patches and couldn't observe the
alignment issue. Also, the changes in this series are not executed when we
issue buffered I/O.
* Re: [PATCH v8 0/5] block: add larger order folio instead of pages
2024-08-16 8:45 ` Kundan Kumar
@ 2024-08-17 1:22 ` Luis Chamberlain
0 siblings, 0 replies; 18+ messages in thread
From: Luis Chamberlain @ 2024-08-17 1:22 UTC (permalink / raw)
To: Kundan Kumar
Cc: Christoph Hellwig, Daniel Gomez, Pankaj Raghav, kernel, axboe,
willy, kbusch, linux-block, joshi.k, anuj20.g, nj.shetty,
c.gameti, gost.dev
On Fri, Aug 16, 2024 at 02:15:41PM +0530, Kundan Kumar wrote:
> On 12/08/24 09:35AM, Luis Chamberlain wrote:
> > On Mon, Aug 12, 2024 at 03:38:43PM +0200, Christoph Hellwig wrote:
> > > On Thu, Aug 08, 2024 at 04:04:03PM -0700, Luis Chamberlain wrote:
> > > > This is not just about mTHP uses though, this can also affect buffered IO and
> > > > direct IO patterns as well and this needs to be considered and tested as well.
> > >
> > > Not sure what the above is supposed to mean. Besides small tweaks
> > > to very low-level helpers the changes are entirely in the direct I/O
> > > path, and they optimize that path for folios larger than PAGE_SIZE.
> >
> > Which was my expectation as well.
> >
> > > > I've given this a spin on top of of the LBS patches [0] and used the LBS
> > > > patches as a baseline. The good news is I see a considerable amount of
> > > > larger IOs for buffered IO and direct IO, however for buffered IO there
> > > > is an increase on unalignenment to the target filesystem block size and
> > > > that can affect performance.
> > >
> > > Compared to what? There is nothing in the series here changing buffered
> > > I/O patterns. What do you compare? If this series changes buffered
> > > I/O patterns that is very well hidden and accidental, so we need to
> > > bisect which patch does it and figure out why, but it would surprise me
> > > a lot.
> >
> > The comparison was the without the patches Vs with the patches on the
> > same fio run with buffered IO. I'll re-test more times and bisect.
> >
>
> Did tests with LBS + block folio patches and couldn't observe alignment
> issue. Also, the changes in this series are not executed when we issue
> buffered I/O.
I can't quite understand yet either why buffered IO is implicated, yet the
data does not lie. The good news is I re-tested twice and get similar
results; **however**, what I failed to notice is that we also get a lot
more IOs, and this ends up even helping in other ways. So this is not bad;
in the end, only good. It is hard to visualize this, but an image says
more than 1000 words, so here you go:
https://lh3.googleusercontent.com/pw/AP1GczML6LevSkZ8yHTF9zu0xtXkzy332kd98XBp7biDrxyGWG2IXfgyKNpy6YItUYaWnVeLQSABGJgpiJOANppix7lIYb82_pjl_ZtCCjXenvkDgHGV3KlvXlayG4mAFR762jLugrI4osH0uoKRA1WGZk50xA=w1389-h690-s-no-gm
So the volume is what counts in the end too. Say we use a tool like the
following, which takes the blkalgn JSON file as input and computes a
worst-case workload WAF (write amplification factor):
#!/usr/bin/python3
import json
import argparse
import math

def order_to_kb(order):
    return (2 ** order) / 1024

def calculate_waf(file_path, iu):
    with open(file_path, 'r') as file:
        data = json.load(file)

    block_size_orders = data["Block size"]
    algn_size_orders = data["Algn size"]

    # Calculate total host writes
    total_host_writes_kb = sum(order_to_kb(int(order)) * count
                               for order, count in block_size_orders.items())

    # Calculate total internal writes based on the provided logic
    total_internal_writes_kb = 0
    for order, count in block_size_orders.items():
        size_kb = order_to_kb(int(order))
        if size_kb >= iu:
            total_internal_writes_kb += size_kb * count
        else:
            total_internal_writes_kb += math.ceil(size_kb / iu) * iu * count

    # Calculate WAF
    waf = total_internal_writes_kb / total_host_writes_kb
    return waf

def main():
    parser = argparse.ArgumentParser(description="Calculate the worst-case Write Amplification Factor (WAF) from JSON data.")
    parser.add_argument('file', type=str, help='Path to the JSON file containing the IO data.')
    parser.add_argument('--iu', type=int, default=16, help='Indirection Unit (IU) size in KB (default: 16)')
    args = parser.parse_args()

    file_path = args.file
    iu = args.iu

    waf = calculate_waf(file_path, iu)
    print(f"Worst-case WAF: {waf:.10f}")

if __name__ == "__main__":
    main()
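To make the rounding concrete: with --iu 64, a lone 4k write is rounded up
to one full 64k indirection unit, i.e. ceil(4/64) * 64 = 64k of internal
writes for 4k of host writes (a per-IO worst-case WAF of 16), while any
write of 64k or more contributes a WAF of 1. That is why the aggregate
numbers below sit just above 1.0.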
compute-waf.py lbs.json --iu 64
Worst-case WAF: 1.0000116423
compute-waf.py lbs+block-folios.json --iu 64
Worst-case WAF: 1.0000095356
On my second run I have:
cat 01-next-20240723-LBS.json
{
"Block size": {
"18": 6960,
"14": 302,
"16": 2339746,
"13": 165,
"12": 88,
"19": 117886,
"10": 49,
"9": 33,
"11": 31,
"17": 42707,
"15": 89393
},
"Algn size": {
"16": 2351238,
"19": 117886,
"18": 3353,
"17": 34823,
"15": 13067,
"12": 40060,
"14": 13583,
"13": 23351
}
}
cat 02-next-20240723-LBS+block-folios.json
{
"Block size": {
"11": 38,
"10": 49,
"12": 88,
"15": 91949,
"18": 33858,
"17": 104329,
"19": 129301,
"9": 34,
"13": 199,
"16": 4912264,
"14": 344
},
"Algn size": {
"16": 4954494,
"14": 10166,
"13": 20527,
"17": 82125,
"19": 129301,
"15": 13111,
"12": 48897,
"18": 13820
}
}
compute-waf.py 01-next-20240723-LBS.json --iu 64
Worst-case WAF: 1.0131538374
compute-waf.py 02-next-20240723-LBS+block-folios.json --iu 64
Worst-case WAF: 1.0073550532
Things are even better for direct IO :) and so I encourage you to test, as
this is all nice.
Tested-by: Luis Chamberlain <mcgrof@kernel.org>
Luis
* Re: [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page()
2024-07-11 5:07 ` [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page() Kundan Kumar
@ 2024-08-17 4:23 ` Matthew Wilcox
2024-08-20 7:43 ` Kundan Kumar
0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2024-08-17 4:23 UTC (permalink / raw)
To: Kundan Kumar
Cc: axboe, hch, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Thu, Jul 11, 2024 at 10:37:46AM +0530, Kundan Kumar wrote:
> -bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
> - struct page *page, unsigned len, unsigned offset,
> +bool bvec_try_merge_hw_folio(struct request_queue *q, struct bio_vec *bv,
> + struct folio *folio, size_t len, size_t offset,
> bool *same_page)
> {
> + struct page *page = folio_page(folio, 0);
> unsigned long mask = queue_segment_boundary(q);
> phys_addr_t addr1 = bvec_phys(bv);
> phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
[...]
> +bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
> + struct page *page, unsigned int len, unsigned int offset,
> + bool *same_page)
> +{
> + struct folio *folio = page_folio(page);
> +
> + return bvec_try_merge_hw_folio(q, bv, folio, len,
> + ((size_t)folio_page_idx(folio, page) << PAGE_SHIFT) +
> + offset, same_page);
> +}
This is the wrong way to do it. bio_add_folio() does it correctly
by being a wrapper around bio_add_page().
The reason is that in the future, not all pages will belong to folios.
For those pages, page_folio() will return NULL, and this will crash.
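Concretely, the inversion would be something like this (untested sketch,
mirroring how bio_add_folio() wraps bio_add_page()):

	bool bvec_try_merge_hw_folio(struct request_queue *q,
			struct bio_vec *bv, struct folio *folio, size_t len,
			size_t offset, bool *same_page)
	{
		if (len > UINT_MAX || offset > UINT_MAX)
			return false;
		return bvec_try_merge_hw_page(q, bv, folio_page(folio, 0),
					      len, offset, same_page);
	}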
* Re: [PATCH v8 5/5] block: unpin user pages belonging to a folio at once
2024-07-11 5:07 ` [PATCH v8 5/5] block: unpin user pages belonging to a folio at once Kundan Kumar
2024-07-18 6:17 ` Kundan Kumar
2024-07-25 20:40 ` Anuj gupta
@ 2024-08-17 4:36 ` Matthew Wilcox
2 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2024-08-17 4:36 UTC (permalink / raw)
To: Kundan Kumar
Cc: axboe, hch, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Thu, Jul 11, 2024 at 10:37:50AM +0530, Kundan Kumar wrote:
> +static inline void bio_release_folio(struct bio *bio, struct folio *folio,
> + unsigned long npages)
> +{
> + unpin_user_folio(folio, npages);
> +}
Why doesn't this need to check BIO_PAGE_PINNED like bio_release_page()
does? That should be explained, certainly in the commit message or
maybe in a comment.
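If the check is needed, the wrapper would presumably mirror
bio_release_page(), e.g. (sketch):

	static inline void bio_release_folio(struct bio *bio,
			struct folio *folio, unsigned long npages)
	{
		if (bio_flagged(bio, BIO_PAGE_PINNED))
			unpin_user_folio(folio, npages);
	}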
* Re: [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page()
2024-08-17 4:23 ` Matthew Wilcox
@ 2024-08-20 7:43 ` Kundan Kumar
2024-08-27 8:22 ` Kundan Kumar
0 siblings, 1 reply; 18+ messages in thread
From: Kundan Kumar @ 2024-08-20 7:43 UTC (permalink / raw)
To: Matthew Wilcox
Cc: axboe, hch, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On 17/08/24 05:23AM, Matthew Wilcox wrote:
>On Thu, Jul 11, 2024 at 10:37:46AM +0530, Kundan Kumar wrote:
>> -bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
>> - struct page *page, unsigned len, unsigned offset,
>> +bool bvec_try_merge_hw_folio(struct request_queue *q, struct bio_vec *bv,
>> + struct folio *folio, size_t len, size_t offset,
>> bool *same_page)
>> {
>> + struct page *page = folio_page(folio, 0);
>> unsigned long mask = queue_segment_boundary(q);
>> phys_addr_t addr1 = bvec_phys(bv);
>> phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
>[...]
>> +bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
>> + struct page *page, unsigned int len, unsigned int offset,
>> + bool *same_page)
>> +{
>> + struct folio *folio = page_folio(page);
>> +
>> + return bvec_try_merge_hw_folio(q, bv, folio, len,
>> + ((size_t)folio_page_idx(folio, page) << PAGE_SHIFT) +
>> + offset, same_page);
>> +}
>
>This is the wrong way to do it. bio_add_folio() does it correctly
>by being a wrapper around bio_add_page().
>
>The reason is that in the future, not all pages will belong to folios.
>For those pages, page_folio() will return NULL, and this will crash.
>
I can change it in this fashion. page_folio is used at many other places
in the kernel and currently there are no NULL checks. Will every place
need a change?
In this series we use page_folio to fetch the folio for the pages returned
by iov_iter_extract_pages, and then we iterate over the folios instead of
the pages. We were progressing towards changing all the page-related
functions to accept struct folio.
If page_folio may return NULL in the future, it will require us to
maintain both page and folio versions. Do you see it differently?
* Re: [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page()
2024-08-20 7:43 ` Kundan Kumar
@ 2024-08-27 8:22 ` Kundan Kumar
0 siblings, 0 replies; 18+ messages in thread
From: Kundan Kumar @ 2024-08-27 8:22 UTC (permalink / raw)
To: Kundan Kumar, Matthew Wilcox
Cc: axboe, hch, kbusch, linux-block, joshi.k, mcgrof, anuj20.g,
nj.shetty, c.gameti, gost.dev
On Tue, Aug 20, 2024 at 1:28 PM Kundan Kumar <kundan.kumar@samsung.com> wrote:
>
> On 17/08/24 05:23AM, Matthew Wilcox wrote:
> >On Thu, Jul 11, 2024 at 10:37:46AM +0530, Kundan Kumar wrote:
> >> -bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
> >> - struct page *page, unsigned len, unsigned offset,
> >> +bool bvec_try_merge_hw_folio(struct request_queue *q, struct bio_vec *bv,
> >> + struct folio *folio, size_t len, size_t offset,
> >> bool *same_page)
> >> {
> >> + struct page *page = folio_page(folio, 0);
> >> unsigned long mask = queue_segment_boundary(q);
> >> phys_addr_t addr1 = bvec_phys(bv);
> >> phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
> >[...]
> >> +bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
> >> + struct page *page, unsigned int len, unsigned int offset,
> >> + bool *same_page)
> >> +{
> >> + struct folio *folio = page_folio(page);
> >> +
> >> + return bvec_try_merge_hw_folio(q, bv, folio, len,
> >> + ((size_t)folio_page_idx(folio, page) << PAGE_SHIFT) +
> >> + offset, same_page);
> >> +}
> >
> >This is the wrong way to do it. bio_add_folio() does it correctly
> >by being a wrapper around bio_add_page().
> >
> >The reason is that in the future, not all pages will belong to folios.
> >For those pages, page_folio() will return NULL, and this will crash.
> >
>
> I can change in this fashion. page_folio is getting used at many other
> places in kernel and currently there are no NULL checks. Will every place
> need a change?
> In this series we use page_folio to fetch folio for pages returned by
> iov_iter_extract_pages. Then we iterate on the folios instead of pages.
> We were progressing to change all the page related functions to accept
> struct folio.
> If page_folio may return NULL in future, it will require us to maintain
> both page and folio versions. Do you see it differently ?
>
Gentle ping. Any feedback on this?
end of thread, other threads: [~2024-08-27 8:23 UTC | newest]
Thread overview: 18+ messages
[not found] <CGME20240711051521epcas5p348f2cd84a1a80577754929143255352b@epcas5p3.samsung.com>
2024-07-11 5:07 ` [PATCH v8 0/5] block: add larger order folio instead of pages Kundan Kumar
[not found] ` <CGME20240711051529epcas5p1aaf694dfa65859b8a32bdffce5239bf6@epcas5p1.samsung.com>
2024-07-11 5:07 ` [PATCH v8 1/5] block: Added folio-ized version of bvec_try_merge_hw_page() Kundan Kumar
2024-08-17 4:23 ` Matthew Wilcox
2024-08-20 7:43 ` Kundan Kumar
2024-08-27 8:22 ` Kundan Kumar
[not found] ` <CGME20240711051532epcas5p360e92f46b86eca56e73643a71950783a@epcas5p3.samsung.com>
2024-07-11 5:07 ` [PATCH v8 2/5] block: Added folio-ized version of bio_add_hw_page() Kundan Kumar
[not found] ` <CGME20240711051536epcas5p369e005a83fb16d9f6d9636585cc66e3b@epcas5p3.samsung.com>
2024-07-11 5:07 ` [PATCH v8 3/5] block: introduce folio awareness and add a bigger size from folio Kundan Kumar
2024-07-25 20:56 ` Anuj gupta
[not found] ` <CGME20240711051539epcas5p2842f564375dc2f1c3de1a869c938436b@epcas5p2.samsung.com>
2024-07-11 5:07 ` [PATCH v8 4/5] mm: release number of pages of a folio Kundan Kumar
[not found] ` <CGME20240711051543epcas5p364f770974e2367d27c685a626cc9dbb5@epcas5p3.samsung.com>
2024-07-11 5:07 ` [PATCH v8 5/5] block: unpin user pages belonging to a folio at once Kundan Kumar
2024-07-18 6:17 ` Kundan Kumar
2024-07-25 20:40 ` Anuj gupta
2024-08-17 4:36 ` Matthew Wilcox
2024-08-08 23:04 ` [PATCH v8 0/5] block: add larger order folio instead of pages Luis Chamberlain
2024-08-12 13:38 ` Christoph Hellwig
2024-08-12 16:35 ` Luis Chamberlain
2024-08-16 8:45 ` Kundan Kumar
2024-08-17 1:22 ` Luis Chamberlain