From: Mike Snitzer <snitzer@redhat.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: dm-devel@redhat.com, stable@vger.kernel.org
Subject: Re: [PATCH V2] dm-raid: fix updating of max_discard_sectors limit
Date: Wed, 11 Sep 2019 09:35:23 -0400	[thread overview]
Message-ID: <20190911133523.GA32121@redhat.com> (raw)
In-Reply-To: <20190911113133.837-1-ming.lei@redhat.com>

On Wed, Sep 11 2019 at  7:31am -0400,
Ming Lei <ming.lei@redhat.com> wrote:

> The unit of 'chunk_size' is bytes, not sectors, so fix the conversion.
> 
> Without this fix, an overly large max_discard_sectors is applied to
> the request queue of dm-raid, and the raid code ends up having to
> split the bio again.
> 
> This re-split done by the raid code causes the following nested
> clone_endio calls:
> 
> 1) one big bio 'A' is submitted to the dm queue and serves as the
> original bio
> 
> 2) a new bio 'B' is cloned from the original bio 'A', and .map()
> is run on 'B'; B's original bio points to 'A'
> 
> 3) the raid code sees that 'B' is too big, splits 'B', and re-submits
> the remainder of 'B' to the dm-raid queue via generic_make_request()
> 
> 4) dm now handles 'B' as a new original bio, then allocates a new
> clone bio 'C' and runs .map() on 'C'. Meanwhile, C's original bio
> points to 'B'
> 
> 5) suppose 'C' is now completed by the raid code directly; then the
> following clone_endio() is called recursively:
> 
> 	clone_endio(C)
> 		->clone_endio(B)		#B is original bio of 'C'
> 			->bio_endio(A)
> 
> 'A' can be big enough to trigger hundreds of nested clone_endio()
> calls, so the stack can easily be corrupted.
> 
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> V2:
> 	- fix commit log a bit
> 
>  drivers/md/dm-raid.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
> index 8a60a4a070ac..c26aa4e8207a 100644
> --- a/drivers/md/dm-raid.c
> +++ b/drivers/md/dm-raid.c
> @@ -3749,7 +3749,7 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  	 */
>  	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
>  		limits->discard_granularity = chunk_size;
> -		limits->max_discard_sectors = chunk_size;
> +		limits->max_discard_sectors = chunk_size >> 9;
>  	}
>  }
>  
> -- 
> 2.20.1
> 
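
The quoted walkthrough boils down to this: every dm clone remembers the
bio it was cloned from, and completing a clone completes that bio in
turn, so each extra re-split adds one more completion frame to the
stack. A minimal toy model of that completion chain (plain userspace C,
not kernel code; the struct layout and function names are invented for
illustration) might look like:

	#include <stdio.h>
	#include <stddef.h>

	struct bio {
		struct bio *original;	/* bio this one was cloned from, or NULL */
		const char *name;
	};

	static void clone_endio_model(struct bio *clone);

	static void bio_endio_model(struct bio *bio)
	{
		if (bio->original)
			clone_endio_model(bio);	/* dm clone: its end_io is clone_endio */
		else
			printf("bio_endio(%s): completion reaches the submitter\n",
			       bio->name);
	}

	static void clone_endio_model(struct bio *clone)
	{
		printf("clone_endio(%s)\n", clone->name);
		bio_endio_model(clone->original); /* one stack frame per nesting level */
	}

	int main(void)
	{
		/* Chain from the commit message: C was cloned from B, B from A. */
		struct bio a = { .original = NULL, .name = "A" };
		struct bio b = { .original = &a,   .name = "B" };
		struct bio c = { .original = &b,   .name = "C" };

		/* nests: clone_endio(C) -> clone_endio(B) -> bio_endio(A) */
		clone_endio_model(&c);
		return 0;
	}

With hundreds of re-splits the real chain is correspondingly deeper,
which is what threatens the kernel stack.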

Thanks a lot, Ming!  But oof, a really embarrassing oversight on my part!

FYI, I added a "Fixes:" tag to the commit header and switched to
shifting by SECTOR_SHIFT instead of 9; the staged commit for 5.4 is here:

https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-5.4&id=509818079bf1fefff4ed02d6a1b994e20efc0480
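
For reference, the staged variant described above presumably looks like
the hunk below (an assumption based on the note about SECTOR_SHIFT, not
copied from the dm-5.4 commit); it is the same conversion, just spelled
with the kernel's SECTOR_SHIFT constant instead of the literal 9:

	if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
		limits->discard_granularity = chunk_size;
		/* chunk_size is in bytes; max_discard_sectors counts 512-byte sectors */
		limits->max_discard_sectors = chunk_size >> SECTOR_SHIFT;
	}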

Thread overview: 5+ messages
2019-09-11 11:31 [PATCH V2] dm-raid: fix updating of max_discard_sectors limit Ming Lei
2019-09-11 13:35 ` Mike Snitzer [this message]
2019-09-11 13:43   ` Mike Snitzer
2019-09-11 18:59   ` [dm-devel] " John Stoffel
2019-09-12  7:30 ` Sasha Levin
