public inbox for linux-block@vger.kernel.org
* a few cleanups to prepare for multipage bio_vecs
@ 2018-06-04 13:58 Christoph Hellwig
  2018-06-04 13:58 ` [PATCH 1/3] block: simplify bio_check_pages_dirty Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Christoph Hellwig @ 2018-06-04 13:58 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: linux-block

Clean up some bio_vec abuses to prepare for saner handling of multipage
bio_vecs.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] block: simplify bio_check_pages_dirty
  2018-06-04 13:58 a few cleanups to prepare for multipage bio_vecs Christoph Hellwig
@ 2018-06-04 13:58 ` Christoph Hellwig
  2018-06-05  3:23   ` Ming Lei
  2018-06-04 13:58 ` [PATCH 2/3] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Christoph Hellwig
  2018-06-04 13:58 ` [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages Christoph Hellwig
  2 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2018-06-04 13:58 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: linux-block

bio_check_pages_dirty currently violates the invariant that bv_page of
a bio_vec inside bi_vcnt shouldn't be NULL, and that is going to become
really annoying with multipage biovecs.  Fortunately there isn't all
that good a reason for it: once we decide to defer freeing the bio
to a workqueue, holding onto a few additional pages isn't really an
issue anymore.  So just check if there is a clean page that needs
dirtying in the first pass, and do a second pass to free the pages if
there was none, while the cache is still hot.

Also take the chance to micro-optimize bio_dirty_fn a bit by not saving
the irq state: we know we are called from a workqueue.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 56 +++++++++++++++++++++-----------------------------------
 1 file changed, 21 insertions(+), 35 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 53e0f0a1ed94..50941c1c9118 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1590,19 +1590,15 @@ static void bio_release_pages(struct bio *bio)
 	struct bio_vec *bvec;
 	int i;
 
-	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (page)
-			put_page(page);
-	}
+	bio_for_each_segment_all(bvec, bio, i)
+		put_page(bvec->bv_page);
 }
 
 /*
  * bio_check_pages_dirty() will check that all the BIO's pages are still dirty.
  * If they are, then fine.  If, however, some pages are clean then they must
  * have been written out during the direct-IO read.  So we take another ref on
- * the BIO and the offending pages and re-dirty the pages in process context.
+ * the BIO and re-dirty the pages in process context.
  *
  * It is expected that bio_check_pages_dirty() will wholly own the BIO from
  * here on.  It will run one put_page() against each page and will run one
@@ -1620,52 +1616,42 @@ static struct bio *bio_dirty_list;
  */
 static void bio_dirty_fn(struct work_struct *work)
 {
-	unsigned long flags;
-	struct bio *bio;
+	struct bio *bio, *next;
 
-	spin_lock_irqsave(&bio_dirty_lock, flags);
-	bio = bio_dirty_list;
+	spin_lock_irq(&bio_dirty_lock);
+	next = bio_dirty_list;
 	bio_dirty_list = NULL;
-	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	spin_unlock_irq(&bio_dirty_lock);
 
-	while (bio) {
-		struct bio *next = bio->bi_private;
+	while ((bio = next) != NULL) {
+		next = bio->bi_private;
 
 		bio_set_pages_dirty(bio);
 		bio_release_pages(bio);
 		bio_put(bio);
-		bio = next;
 	}
 }
 
 void bio_check_pages_dirty(struct bio *bio)
 {
 	struct bio_vec *bvec;
-	int nr_clean_pages = 0;
+	unsigned long flags;
 	int i;
 
 	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (PageDirty(page) || PageCompound(page)) {
-			put_page(page);
-			bvec->bv_page = NULL;
-		} else {
-			nr_clean_pages++;
-		}
+		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
+			goto defer;
 	}
 
-	if (nr_clean_pages) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bio_dirty_lock, flags);
-		bio->bi_private = bio_dirty_list;
-		bio_dirty_list = bio;
-		spin_unlock_irqrestore(&bio_dirty_lock, flags);
-		schedule_work(&bio_dirty_work);
-	} else {
-		bio_put(bio);
-	}
+	bio_release_pages(bio);
+	bio_put(bio);
+	return;
+defer:
+	spin_lock_irqsave(&bio_dirty_lock, flags);
+	bio->bi_private = bio_dirty_list;
+	bio_dirty_list = bio;
+	spin_unlock_irqrestore(&bio_dirty_lock, flags);
+	schedule_work(&bio_dirty_work);
 }
 
 void generic_start_io_acct(struct request_queue *q, int rw,
-- 
2.14.2


* [PATCH 2/3] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec
  2018-06-04 13:58 a few cleanups to prepare for multipage bio_vecs Christoph Hellwig
  2018-06-04 13:58 ` [PATCH 1/3] block: simplify bio_check_pages_dirty Christoph Hellwig
@ 2018-06-04 13:58 ` Christoph Hellwig
  2018-06-05  3:28   ` Ming Lei
  2018-06-04 13:58 ` [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages Christoph Hellwig
  2 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2018-06-04 13:58 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: linux-block

So don't bother handling it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 50941c1c9118..d95fab72acb5 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1578,10 +1578,8 @@ void bio_set_pages_dirty(struct bio *bio)
 	int i;
 
 	bio_for_each_segment_all(bvec, bio, i) {
-		struct page *page = bvec->bv_page;
-
-		if (page && !PageCompound(page))
-			set_page_dirty_lock(page);
+		if (!PageCompound(bvec->bv_page))
+			set_page_dirty_lock(bvec->bv_page);
 	}
 }
 
-- 
2.14.2


* [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages
  2018-06-04 13:58 a few cleanups to prepare for multipage bio_vecs Christoph Hellwig
  2018-06-04 13:58 ` [PATCH 1/3] block: simplify bio_check_pages_dirty Christoph Hellwig
  2018-06-04 13:58 ` [PATCH 2/3] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Christoph Hellwig
@ 2018-06-04 13:58 ` Christoph Hellwig
  2018-06-05  3:41   ` Ming Lei
  2 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2018-06-04 13:58 UTC (permalink / raw)
  To: Jens Axboe, Ming Lei; +Cc: linux-block

Replace a nasty hack with a different nasty hack to prepare for multipage
bio_vecs.  By moving the temporary page array as far up as possible in
the space allocated for the bio_vec array we can iterate forward over it
and thus use bio_add_page.  Using bio_add_page means we'll be able to
merge physically contiguous pages once support for multipage bio_vecs is
merged.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index d95fab72acb5..b09c133a1e24 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -871,6 +871,8 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -882,38 +884,33 @@ EXPORT_SYMBOL(bio_add_page);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
-	size_t offset, diff;
-	ssize_t size;
+	ssize_t size, left;
+	unsigned len, i;
+	size_t offset;
+
+	/*
+	 * Move page array up in the allocated memory for the bio vecs as
+	 * far as possible so that we can start filling biovecs from the
+	 * beginning without overwriting the temporary page array.
+	 */
+	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
+	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
-	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
-	/*
-	 * Deep magic below:  We need to walk the pinned pages backwards
-	 * because we are abusing the space allocated for the bio_vecs
-	 * for the page array.  Because the bio_vecs are larger than the
-	 * page pointers by definition this will always work.  But it also
-	 * means we can't use bio_add_page, so any changes to it's semantics
-	 * need to be reflected here as well.
-	 */
-	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
-	}
+	for (left = size, i = 0; left > 0; left -= len, i++) {
+		struct page *page = pages[i];
 
-	bv[0].bv_offset += offset;
-	bv[0].bv_len -= offset;
-	if (diff)
-		bv[bio->bi_vcnt - 1].bv_len -= diff;
+		len = min_t(size_t, PAGE_SIZE - offset, left);
+		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
+			return -EINVAL;
+		offset = 0;
+	}
 
 	iov_iter_advance(iter, size);
 	return 0;
-- 
2.14.2


* Re: [PATCH 1/3] block: simplify bio_check_pages_dirty
  2018-06-04 13:58 ` [PATCH 1/3] block: simplify bio_check_pages_dirty Christoph Hellwig
@ 2018-06-05  3:23   ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2018-06-05  3:23 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Mon, Jun 04, 2018 at 03:58:51PM +0200, Christoph Hellwig wrote:
> bio_check_pages_dirty currently violates the invariant that bv_page of
> a bio_vec inside bi_vcnt shouldn't be NULL, and that is going to become
> really annoying with multipage biovecs.  Fortunately there isn't all
> that good a reason for it: once we decide to defer freeing the bio
> to a workqueue, holding onto a few additional pages isn't really an
> issue anymore.  So just check if there is a clean page that needs
> dirtying in the first pass, and do a second pass to free the pages if
> there was none, while the cache is still hot.
> 
> Also take the chance to micro-optimize bio_dirty_fn a bit by not saving
> the irq state: we know we are called from a workqueue.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/bio.c | 56 +++++++++++++++++++++-----------------------------------
>  1 file changed, 21 insertions(+), 35 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index 53e0f0a1ed94..50941c1c9118 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1590,19 +1590,15 @@ static void bio_release_pages(struct bio *bio)
>  	struct bio_vec *bvec;
>  	int i;
>  
> -	bio_for_each_segment_all(bvec, bio, i) {
> -		struct page *page = bvec->bv_page;
> -
> -		if (page)
> -			put_page(page);
> -	}
> +	bio_for_each_segment_all(bvec, bio, i)
> +		put_page(bvec->bv_page);
>  }
>  
>  /*
>   * bio_check_pages_dirty() will check that all the BIO's pages are still dirty.
>   * If they are, then fine.  If, however, some pages are clean then they must
>   * have been written out during the direct-IO read.  So we take another ref on
> - * the BIO and the offending pages and re-dirty the pages in process context.
> + * the BIO and re-dirty the pages in process context.
>   *
>   * It is expected that bio_check_pages_dirty() will wholly own the BIO from
>   * here on.  It will run one put_page() against each page and will run one
> @@ -1620,52 +1616,42 @@ static struct bio *bio_dirty_list;
>   */
>  static void bio_dirty_fn(struct work_struct *work)
>  {
> -	unsigned long flags;
> -	struct bio *bio;
> +	struct bio *bio, *next;
>  
> -	spin_lock_irqsave(&bio_dirty_lock, flags);
> -	bio = bio_dirty_list;
> +	spin_lock_irq(&bio_dirty_lock);
> +	next = bio_dirty_list;
>  	bio_dirty_list = NULL;
> -	spin_unlock_irqrestore(&bio_dirty_lock, flags);
> +	spin_unlock_irq(&bio_dirty_lock);
>  
> -	while (bio) {
> -		struct bio *next = bio->bi_private;
> +	while ((bio = next) != NULL) {
> +		next = bio->bi_private;
>  
>  		bio_set_pages_dirty(bio);
>  		bio_release_pages(bio);
>  		bio_put(bio);
> -		bio = next;
>  	}
>  }
>  
>  void bio_check_pages_dirty(struct bio *bio)
>  {
>  	struct bio_vec *bvec;
> -	int nr_clean_pages = 0;
> +	unsigned long flags;
>  	int i;
>  
>  	bio_for_each_segment_all(bvec, bio, i) {
> -		struct page *page = bvec->bv_page;
> -
> -		if (PageDirty(page) || PageCompound(page)) {
> -			put_page(page);
> -			bvec->bv_page = NULL;
> -		} else {
> -			nr_clean_pages++;
> -		}
> +		if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
> +			goto defer;
>  	}
>  
> -	if (nr_clean_pages) {
> -		unsigned long flags;
> -
> -		spin_lock_irqsave(&bio_dirty_lock, flags);
> -		bio->bi_private = bio_dirty_list;
> -		bio_dirty_list = bio;
> -		spin_unlock_irqrestore(&bio_dirty_lock, flags);
> -		schedule_work(&bio_dirty_work);
> -	} else {
> -		bio_put(bio);
> -	}
> +	bio_release_pages(bio);
> +	bio_put(bio);
> +	return;
> +defer:
> +	spin_lock_irqsave(&bio_dirty_lock, flags);
> +	bio->bi_private = bio_dirty_list;
> +	bio_dirty_list = bio;
> +	spin_unlock_irqrestore(&bio_dirty_lock, flags);
> +	schedule_work(&bio_dirty_work);
>  }
>  
>  void generic_start_io_acct(struct request_queue *q, int rw,

It is a good simplification, and the only side effect may be that some
pages will be freed a bit later in the partially clean case.

I have run fio randread on null_blk in a 512M RAM Fedora 27 cloud image
based VM and did not see any IOPS drop, so:

Tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
Ming


* Re: [PATCH 2/3] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec
  2018-06-04 13:58 ` [PATCH 2/3] block: bio_set_pages_dirty can't see NULL bv_page in a valid bio_vec Christoph Hellwig
@ 2018-06-05  3:28   ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2018-06-05  3:28 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Mon, Jun 04, 2018 at 03:58:52PM +0200, Christoph Hellwig wrote:
> So don't bother handling it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/bio.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index 50941c1c9118..d95fab72acb5 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -1578,10 +1578,8 @@ void bio_set_pages_dirty(struct bio *bio)
>  	int i;
>  
>  	bio_for_each_segment_all(bvec, bio, i) {
> -		struct page *page = bvec->bv_page;
> -
> -		if (page && !PageCompound(page))
> -			set_page_dirty_lock(page);
> +		if (!PageCompound(bvec->bv_page))
> +			set_page_dirty_lock(bvec->bv_page);
>  	}
>  }

Looks reasonable:

Reviewed-by: Ming Lei <ming.lei@redhat.com>


Thanks,
Ming


* Re: [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages
  2018-06-04 13:58 ` [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages Christoph Hellwig
@ 2018-06-05  3:41   ` Ming Lei
  2018-06-05  3:42     ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2018-06-05  3:41 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Mon, Jun 04, 2018 at 03:58:53PM +0200, Christoph Hellwig wrote:
> Replace a nasty hack with a different nasty hack to prepare for multipage
> bio_vecs.  By moving the temporary page array as far up as possible in
> the space allocated for the bio_vec array we can iterate forward over it
> and thus use bio_add_page.  Using bio_add_page means we'll be able to
> merge physically contiguous pages once support for multipage bio_vecs is
> merged.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  block/bio.c | 45 +++++++++++++++++++++------------------------
>  1 file changed, 21 insertions(+), 24 deletions(-)
> 
> diff --git a/block/bio.c b/block/bio.c
> index d95fab72acb5..b09c133a1e24 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -871,6 +871,8 @@ int bio_add_page(struct bio *bio, struct page *page,
>  }
>  EXPORT_SYMBOL(bio_add_page);
>  
> +#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
> +
>  /**
>   * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
>   * @bio: bio to add pages to
> @@ -882,38 +884,33 @@ EXPORT_SYMBOL(bio_add_page);
>  int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
>  {
>  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> +	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
>  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
>  	struct page **pages = (struct page **)bv;
> -	size_t offset, diff;
> -	ssize_t size;
> +	ssize_t size, left;
> +	unsigned len, i;
> +	size_t offset;
> +
> +	/*
> +	 * Move page array up in the allocated memory for the bio vecs as
> +	 * far as possible so that we can start filling biovecs from the
> +	 * beginning without overwriting the temporary page array.
> +	 */
> +	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> +	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
>  
>  	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
>  	if (unlikely(size <= 0))
>  		return size ? size : -EFAULT;
> -	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
>  
> -	/*
> -	 * Deep magic below:  We need to walk the pinned pages backwards
> -	 * because we are abusing the space allocated for the bio_vecs
> -	 * for the page array.  Because the bio_vecs are larger than the
> -	 * page pointers by definition this will always work.  But it also
> -	 * means we can't use bio_add_page, so any changes to it's semantics
> -	 * need to be reflected here as well.
> -	 */
> -	bio->bi_iter.bi_size += size;
> -	bio->bi_vcnt += nr_pages;
> -
> -	diff = (nr_pages * PAGE_SIZE - offset) - size;
> -	while (nr_pages--) {
> -		bv[nr_pages].bv_page = pages[nr_pages];
> -		bv[nr_pages].bv_len = PAGE_SIZE;
> -		bv[nr_pages].bv_offset = 0;
> -	}
> +	for (left = size, i = 0; left > 0; left -= len, i++) {
> +		struct page *page = pages[i];
>  
> -	bv[0].bv_offset += offset;
> -	bv[0].bv_len -= offset;
> -	if (diff)
> -		bv[bio->bi_vcnt - 1].bv_len -= diff;
> +		len = min_t(size_t, PAGE_SIZE - offset, left);
> +		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
> +			return -EINVAL;
> +		offset = 0;
> +	}

One invariant is that the page index 'i' is always <= the bvec index of
the added page, and the two can only be the same when adding the last
page.

So this approach is correct and looks smart & clean:

Reviewed-by: Ming Lei <ming.lei@redhat.com>

Thanks,
Ming


* Re: [PATCH 3/3] block: use bio_add_page in bio_iov_iter_get_pages
  2018-06-05  3:41   ` Ming Lei
@ 2018-06-05  3:42     ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2018-06-05  3:42 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block

On Tue, Jun 05, 2018 at 11:41:19AM +0800, Ming Lei wrote:
> On Mon, Jun 04, 2018 at 03:58:53PM +0200, Christoph Hellwig wrote:
> > Replace a nasty hack with a different nasty hack to prepare for multipage
> > bio_vecs.  By moving the temporary page array as far up as possible in
> > the space allocated for the bio_vec array we can iterate forward over it
> > and thus use bio_add_page.  Using bio_add_page means we'll be able to
> > merge physically contiguous pages once support for multipage bio_vecs is
> > merged.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  block/bio.c | 45 +++++++++++++++++++++------------------------
> >  1 file changed, 21 insertions(+), 24 deletions(-)
> > 
> > diff --git a/block/bio.c b/block/bio.c
> > index d95fab72acb5..b09c133a1e24 100644
> > --- a/block/bio.c
> > +++ b/block/bio.c
> > @@ -871,6 +871,8 @@ int bio_add_page(struct bio *bio, struct page *page,
> >  }
> >  EXPORT_SYMBOL(bio_add_page);
> >  
> > +#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
> > +
> >  /**
> >   * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
> >   * @bio: bio to add pages to
> > @@ -882,38 +884,33 @@ EXPORT_SYMBOL(bio_add_page);
> >  int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  {
> >  	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
> > +	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
> >  	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
> >  	struct page **pages = (struct page **)bv;
> > -	size_t offset, diff;
> > -	ssize_t size;
> > +	ssize_t size, left;
> > +	unsigned len, i;
> > +	size_t offset;
> > +
> > +	/*
> > +	 * Move page array up in the allocated memory for the bio vecs as
> > +	 * far as possible so that we can start filling biovecs from the
> > +	 * beginning without overwriting the temporary page array.
> > +	 */
> > +	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
> > +	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >  
> >  	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
> > -	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
> >  
> > -	/*
> > -	 * Deep magic below:  We need to walk the pinned pages backwards
> > -	 * because we are abusing the space allocated for the bio_vecs
> > -	 * for the page array.  Because the bio_vecs are larger than the
> > -	 * page pointers by definition this will always work.  But it also
> > -	 * means we can't use bio_add_page, so any changes to it's semantics
> > -	 * need to be reflected here as well.
> > -	 */
> > -	bio->bi_iter.bi_size += size;
> > -	bio->bi_vcnt += nr_pages;
> > -
> > -	diff = (nr_pages * PAGE_SIZE - offset) - size;
> > -	while (nr_pages--) {
> > -		bv[nr_pages].bv_page = pages[nr_pages];
> > -		bv[nr_pages].bv_len = PAGE_SIZE;
> > -		bv[nr_pages].bv_offset = 0;
> > -	}
> > +	for (left = size, i = 0; left > 0; left -= len, i++) {
> > +		struct page *page = pages[i];
> >  
> > -	bv[0].bv_offset += offset;
> > -	bv[0].bv_len -= offset;
> > -	if (diff)
> > -		bv[bio->bi_vcnt - 1].bv_len -= diff;
> > +		len = min_t(size_t, PAGE_SIZE - offset, left);
> > +		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
> > +			return -EINVAL;
> > +		offset = 0;
> > +	}
> 
> One invariant is that the page index 'i' is always <= bvec index

oops, the above "<=" should have been ">=".

Thanks,
Ming


end of thread, other threads:[~2018-06-05  3:42 UTC | newest]

