* [PATCH v3 07/49] bcache: comment on direct access to bvec table
[not found] <20170808084548.18963-1-ming.lei@redhat.com>
@ 2017-08-08 8:45 ` Ming Lei
2017-08-08 12:36 ` Coly Li
2017-08-10 11:26 ` Christoph Hellwig
2017-08-08 8:45 ` [PATCH v3 33/49] bcache: convert to bio_for_each_segment_all_sp() Ming Lei
1 sibling, 2 replies; 6+ messages in thread
From: Ming Lei @ 2017-08-08 8:45 UTC (permalink / raw)
To: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton,
Alexander Viro
Cc: linux-kernel, linux-block, linux-fsdevel, linux-mm, Ming Lei,
linux-bcache
All of these direct accesses look safe after multipage bvec is supported.
Cc: linux-bcache@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/md/bcache/btree.c | 1 +
drivers/md/bcache/super.c | 6 ++++++
drivers/md/bcache/util.c | 7 +++++++
3 files changed, 14 insertions(+)
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 866dcf78ff8e..3da595ae565b 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
continue_at(cl, btree_node_write_done, NULL);
} else {
+ /* No harm for multipage bvec since the new bio is just allocated */
b->bio->bi_vcnt = 0;
bch_bio_map(b->bio, i);
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 8352fad765f6..6808f548cd13 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -208,6 +208,7 @@ static void write_bdev_super_endio(struct bio *bio)
static void __write_super(struct cache_sb *sb, struct bio *bio)
{
+ /* single page bio, safe for multipage bvec */
struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page);
unsigned i;
@@ -1154,6 +1155,8 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
dc->bdev->bd_holder = dc;
bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
+
+ /* single page bio, safe for multipage bvec */
dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
get_page(sb_page);
@@ -1799,6 +1802,7 @@ void bch_cache_release(struct kobject *kobj)
for (i = 0; i < RESERVE_NR; i++)
free_fifo(&ca->free[i]);
+ /* single page bio, safe for multipage bvec */
if (ca->sb_bio.bi_inline_vecs[0].bv_page)
put_page(ca->sb_bio.bi_io_vec[0].bv_page);
@@ -1854,6 +1858,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
ca->bdev->bd_holder = ca;
bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
+
+ /* single page bio, safe for multipage bvec */
ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
get_page(sb_page);
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index 8c3a938f4bf0..11b4230ea6ad 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -223,6 +223,13 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
: 0;
}
+/*
+ * Generally it isn't good to access .bi_io_vec and .bi_vcnt
+ * directly; the preferred way is bio_add_page(). In this
+ * case, however, bch_bio_map() assumes that the bvec table
+ * is empty, so it is safe to access .bi_vcnt & .bi_io_vec
+ * in this way even after multipage bvec is supported.
+ */
void bch_bio_map(struct bio *bio, void *base)
{
size_t size = bio->bi_iter.bi_size;
--
2.9.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
* [PATCH v3 33/49] bcache: convert to bio_for_each_segment_all_sp()
[not found] <20170808084548.18963-1-ming.lei@redhat.com>
2017-08-08 8:45 ` [PATCH v3 07/49] bcache: comment on direct access to bvec table Ming Lei
@ 2017-08-08 8:45 ` Ming Lei
2017-08-08 12:35 ` Coly Li
1 sibling, 1 reply; 6+ messages in thread
From: Ming Lei @ 2017-08-08 8:45 UTC (permalink / raw)
To: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton,
Alexander Viro
Cc: linux-kernel, linux-block, linux-fsdevel, linux-mm, Ming Lei,
linux-bcache
Cc: linux-bcache@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/md/bcache/btree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 3da595ae565b..74cbb7387dc5 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -422,8 +422,9 @@ static void do_btree_node_write(struct btree *b)
int j;
struct bio_vec *bv;
void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1));
+ struct bvec_iter_all bia;
- bio_for_each_segment_all(bv, b->bio, j)
+ bio_for_each_segment_all_sp(bv, b->bio, j, bia)
memcpy(page_address(bv->bv_page),
base + j * PAGE_SIZE, PAGE_SIZE);
--
2.9.4
* Re: [PATCH v3 33/49] bcache: convert to bio_for_each_segment_all_sp()
2017-08-08 8:45 ` [PATCH v3 33/49] bcache: convert to bio_for_each_segment_all_sp() Ming Lei
@ 2017-08-08 12:35 ` Coly Li
0 siblings, 0 replies; 6+ messages in thread
From: Coly Li @ 2017-08-08 12:35 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, Christoph Hellwig, Huang Ying,
Andrew Morton, Alexander Viro
Cc: linux-kernel, linux-block, linux-fsdevel, linux-mm, linux-bcache
On 2017/8/8 4:45 PM, Ming Lei wrote:
> Cc: linux-bcache@vger.kernel.org
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
The patch looks good to me. Thanks.
Acked-by: Coly Li <colyli@suse.de>
> ---
> drivers/md/bcache/btree.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> index 3da595ae565b..74cbb7387dc5 100644
> --- a/drivers/md/bcache/btree.c
> +++ b/drivers/md/bcache/btree.c
> @@ -422,8 +422,9 @@ static void do_btree_node_write(struct btree *b)
> int j;
> struct bio_vec *bv;
> void *base = (void *) ((unsigned long) i & ~(PAGE_SIZE - 1));
> + struct bvec_iter_all bia;
>
> - bio_for_each_segment_all(bv, b->bio, j)
> + bio_for_each_segment_all_sp(bv, b->bio, j, bia)
> memcpy(page_address(bv->bv_page),
> base + j * PAGE_SIZE, PAGE_SIZE);
>
>
--
Coly Li
* Re: [PATCH v3 07/49] bcache: comment on direct access to bvec table
2017-08-08 8:45 ` [PATCH v3 07/49] bcache: comment on direct access to bvec table Ming Lei
@ 2017-08-08 12:36 ` Coly Li
2017-08-10 11:26 ` Christoph Hellwig
1 sibling, 0 replies; 6+ messages in thread
From: Coly Li @ 2017-08-08 12:36 UTC (permalink / raw)
To: Ming Lei, Jens Axboe, Christoph Hellwig, Huang Ying,
Andrew Morton, Alexander Viro
Cc: linux-kernel, linux-block, linux-fsdevel, linux-mm, linux-bcache
On 2017/8/8 4:45 PM, Ming Lei wrote:
> All of these direct accesses look safe after multipage bvec is supported.
>
> Cc: linux-bcache@vger.kernel.org
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Coly Li <colyli@suse.de>
Coly Li
> ---
> drivers/md/bcache/btree.c | 1 +
> drivers/md/bcache/super.c | 6 ++++++
> drivers/md/bcache/util.c | 7 +++++++
> 3 files changed, 14 insertions(+)
>
> diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> index 866dcf78ff8e..3da595ae565b 100644
> --- a/drivers/md/bcache/btree.c
> +++ b/drivers/md/bcache/btree.c
> @@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
>
> continue_at(cl, btree_node_write_done, NULL);
> } else {
> + /* No harm for multipage bvec since the new bio is just allocated */
> b->bio->bi_vcnt = 0;
> bch_bio_map(b->bio, i);
>
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 8352fad765f6..6808f548cd13 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -208,6 +208,7 @@ static void write_bdev_super_endio(struct bio *bio)
>
> static void __write_super(struct cache_sb *sb, struct bio *bio)
> {
> + /* single page bio, safe for multipage bvec */
> struct cache_sb *out = page_address(bio->bi_io_vec[0].bv_page);
> unsigned i;
>
> @@ -1154,6 +1155,8 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
> dc->bdev->bd_holder = dc;
>
> bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);
> +
> + /* single page bio, safe for multipage bvec */
> dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
> get_page(sb_page);
>
> @@ -1799,6 +1802,7 @@ void bch_cache_release(struct kobject *kobj)
> for (i = 0; i < RESERVE_NR; i++)
> free_fifo(&ca->free[i]);
>
> + /* single page bio, safe for multipage bvec */
> if (ca->sb_bio.bi_inline_vecs[0].bv_page)
> put_page(ca->sb_bio.bi_io_vec[0].bv_page);
>
> @@ -1854,6 +1858,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
> ca->bdev->bd_holder = ca;
>
> bio_init(&ca->sb_bio, ca->sb_bio.bi_inline_vecs, 1);
> +
> + /* single page bio, safe for multipage bvec */
> ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
> get_page(sb_page);
>
> diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
> index 8c3a938f4bf0..11b4230ea6ad 100644
> --- a/drivers/md/bcache/util.c
> +++ b/drivers/md/bcache/util.c
> @@ -223,6 +223,13 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
> : 0;
> }
>
> +/*
> + * Generally it isn't good to access .bi_io_vec and .bi_vcnt
> + * directly; the preferred way is bio_add_page(). In this
> + * case, however, bch_bio_map() assumes that the bvec table
> + * is empty, so it is safe to access .bi_vcnt & .bi_io_vec
> + * in this way even after multipage bvec is supported.
> + */
> void bch_bio_map(struct bio *bio, void *base)
> {
> size_t size = bio->bi_iter.bi_size;
>
* Re: [PATCH v3 07/49] bcache: comment on direct access to bvec table
2017-08-08 8:45 ` [PATCH v3 07/49] bcache: comment on direct access to bvec table Ming Lei
2017-08-08 12:36 ` Coly Li
@ 2017-08-10 11:26 ` Christoph Hellwig
2017-10-19 22:51 ` Ming Lei
1 sibling, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2017-08-10 11:26 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, Christoph Hellwig, Huang Ying, Andrew Morton,
Alexander Viro, linux-kernel, linux-block, linux-fsdevel,
linux-mm, linux-bcache
I think all this bcache code needs closer attention. For one,
bio_alloc_pages is only used in bcache, so we should move it in there.
Second, the way bio_alloc_pages is currently written looks potentially
dangerous for multi-page biovecs, so we should think about a better
calling convention. The way bcache seems to generally use it is by
allocating a bio, then calling bch_bio_map on it and then calling
bio_alloc_pages. I think it just needs a new bio_alloc_pages calling
convention that passes the size to be allocated and stops looking into
the segment count.
Finally, bch_bio_map isn't something we should be doing in a driver;
it should be rewritten using bio_add_page.
> diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> index 866dcf78ff8e..3da595ae565b 100644
> --- a/drivers/md/bcache/btree.c
> +++ b/drivers/md/bcache/btree.c
> @@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
>
> continue_at(cl, btree_node_write_done, NULL);
> } else {
> > + /* No harm for multipage bvec since the new bio is just allocated */
> b->bio->bi_vcnt = 0;
This should go away - bio_alloc_pages or its replacement should not
modify bi_vcnt on failure.
> + /* single page bio, safe for multipage bvec */
> dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
needs to use bio_add_page.
> + /* single page bio, safe for multipage bvec */
> ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
needs to use bio_add_page.
* Re: [PATCH v3 07/49] bcache: comment on direct access to bvec table
2017-08-10 11:26 ` Christoph Hellwig
@ 2017-10-19 22:51 ` Ming Lei
0 siblings, 0 replies; 6+ messages in thread
From: Ming Lei @ 2017-10-19 22:51 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, Huang Ying, Andrew Morton, Alexander Viro,
linux-kernel, linux-block, linux-fsdevel, linux-mm, linux-bcache
On Thu, Aug 10, 2017 at 04:26:03AM -0700, Christoph Hellwig wrote:
> I think all this bcache code needs closer attention. For one,
> bio_alloc_pages is only used in bcache, so we should move it in there.
Looks like a good idea.
>
> Second, the way bio_alloc_pages is currently written looks potentially
> dangerous for multi-page biovecs, so we should think about a better
> calling convention. The way bcache seems to generally use it is by
> allocating a bio, then calling bch_bio_map on it and then calling
> bio_alloc_pages. I think it just needs a new bio_alloc_pages calling
> convention that passes the size to be allocated and stops looking into
> the segment count.
Looks like a good idea; I will try to do it this way.
>
> Finally, bch_bio_map isn't something we should be doing in a driver;
> it should be rewritten using bio_add_page.
Yes, the ideal way is to always use bio_add_page, but given that
bch_bio_map() is used on a fresh bio, it is safe as-is, and this
work can be done in a separate bcache cleanup patch.
>
> > diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
> > index 866dcf78ff8e..3da595ae565b 100644
> > --- a/drivers/md/bcache/btree.c
> > +++ b/drivers/md/bcache/btree.c
> > @@ -431,6 +431,7 @@ static void do_btree_node_write(struct btree *b)
> >
> > continue_at(cl, btree_node_write_done, NULL);
> > } else {
> > + /* No harm for multipage bvec since the new bio is just allocated */
> > b->bio->bi_vcnt = 0;
>
> This should go away - bio_alloc_pages or its replacement should not
> modify bi_vcnt on failure.
OK.
>
> > + /* single page bio, safe for multipage bvec */
> > dc->sb_bio.bi_io_vec[0].bv_page = sb_page;
>
> needs to use bio_add_page.
OK.
>
> > + /* single page bio, safe for multipage bvec */
> > ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
>
> needs to use bio_add_page.
OK.
--
Ming