From: Ming Lei <ming.lei@redhat.com>
To: Nikolay Borisov <nborisov@suse.com>
Cc: Jens Axboe <axboe@kernel.dk>, Omar Sandoval <osandov@osandov.com>,
	linux-block@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Possible bio merging breakage in mp bio rework
Date: Sat, 6 Apr 2019 08:16:54 +0800
Message-ID: <20190406001653.GA4805@ming.t460p>
In-Reply-To: <59c19acf-999f-1911-b0b8-1a5cec8116c5@suse.com>

Hi Nikolay,

On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> Hello Ming,
> 
> Following the mp biovec rework, what is the maximum amount of data
> that a bio can contain? Should it be PAGE_SIZE * the number of bio_vecs

There isn't any maximum data limit on a bio submitted from the
filesystem; the block layer makes the final bio sent to the driver
correct by applying all kinds of queue limits, such as max segment
size, max segment count, max sectors, and so on.
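
As a rough illustration, here is a hedged sketch of the submission
path (not the literal upstream code; blk_queue_split() is shown in
its two-argument form as it exists in kernels of this era):

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Sketch: the fs may submit an arbitrarily large bio; before it
 * reaches the driver, the block layer splits it so that every
 * fragment respects the queue limits.
 */
static void submit_large_bio(struct request_queue *q, struct bio *bio)
{
	/*
	 * blk_queue_split() may replace *bio with a properly sized
	 * head fragment and resubmit the remainder.
	 */
	blk_queue_split(q, &bio);

	/*
	 * After splitting, the bio honours limits such as:
	 *   queue_max_sectors(q)       - max data per request
	 *   queue_max_segments(q)      - max number of segments
	 *   queue_max_segment_size(q)  - max bytes per segment
	 */
}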

> or something else? Currently I can see bios as large as 127 megs
> on sequential workloads. I was prompted to look into this because btrfs
> has a memory allocation whose size depends on the amount of data in the
> bio, and that allocation started failing with order-6 allocs.

Could you share the code with us? I don't see why an order-6 allocation is a must.
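
For reference, a back-of-the-envelope showing how an allocation that
scales with bio size could reach order 6 (the 8 bytes of per-sector
metadata here is an assumption for illustration only; the actual btrfs
call site is not quoted in this thread):

  134217728 bytes (~128 MiB bio) / 4096-byte sectors = 32768 sectors
  32768 sectors * 8 bytes of per-sector metadata     = 262144 bytes
  262144 bytes / 4096 bytes per page                 = 64 pages = 2^6
  => a single allocation of that buffer is order 6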

> Further debugging showed that with the following xfs_io command line: 
> 
> 
> xfs_io -f -c "pwrite -S 0x61 -b 4m 0 10g" /media/scratch/file1
> 
> I can easily see very large bios: 
> 
> [  188.366540] kworker/-7       3.... 34847519us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 28 bi_vcnt_max: 256
> [  188.367129] kworker/-658     2.... 34946536us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 28 bi_vcnt_max: 256
> [  188.367714] kworker/-7       3.... 35107967us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 30 bi_vcnt_max: 256
> [  188.368319] kworker/-658     2.... 35229894us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 32 bi_vcnt_max: 256
> [  188.368909] kworker/-7       3.... 35374809us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 25 bi_vcnt_max: 256
> [  188.369498] kworker/-658     2.... 35516194us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 134246400 bi_vcn: 31 bi_vcnt_max: 256
> [  188.370086] kworker/-7       3.... 35663669us : btrfs_submit_bio_hook: bio: ffff8dffe9940bb0 bi_iter.bi_size = 134184960 bi_vcn: 32 bi_vcnt_max: 256
> [  188.370696] kworker/-658     2.... 35791006us : btrfs_submit_bio_hook: bio: ffff8dffe9940370 bi_iter.bi_size = 100655104 bi_vcn: 24 bi_vcnt_max: 256
> [  188.371335] kworker/-658     2.... 35816114us : btrfs_submit_bio_hook: bio: ffff8dffe99434f0 bi_iter.bi_size = 33591296 bi_vcn: 5 bi_vcnt_max: 256
> 
> 
> So that's 127 megs in a single bio? This stems from the new merging logic.
> Commit 07173c3ec276 ("block: enable multipage bvecs") made it so that
> physically contiguous pages added to the bio just bump bi_iter.bi_size and
> the bv_len of the bio_vec holding the first page of the contiguous run.
> There is no longer a page == bv->bv_page portion of the check.

bio_add_page() tries its best to put physically contiguous pages into one
bvec, and I don't see anything wrong in the log.
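
To illustrate the merge path, here is a simplified sketch modelled on
the contiguity check in __bio_try_merge_page() after commit
07173c3ec276 (simplified, so treat it as an approximation rather than
the literal upstream code):

#include <linux/bio.h>
#include <linux/io.h>

/*
 * If the new page is physically contiguous with the end of the last
 * bvec, extend that bvec instead of consuming a new slot.
 */
static bool try_merge_page(struct bio *bio, struct page *page,
			   unsigned int len, unsigned int off)
{
	struct bio_vec *bv;

	if (!bio->bi_vcnt)
		return false;

	bv = &bio->bi_io_vec[bio->bi_vcnt - 1];

	/* Compare physical addresses; no page == bv->bv_page test. */
	if (page_to_phys(bv->bv_page) + bv->bv_offset + bv->bv_len !=
	    page_to_phys(page) + off)
		return false;

	bv->bv_len += len;		/* grow the existing bvec ... */
	bio->bi_iter.bi_size += len;	/* ... and the bio's total size */
	return true;
}

This is why bi_vcnt stays small (~30) while bi_iter.bi_size grows to
~128 MiB: contiguous pages keep folding into the last bvec.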

Could you show us what the real problem is?

Thanks,
Ming

Thread overview: 8+ messages
2019-04-05 16:04 Possible bio merging breakage in mp bio rework Nikolay Borisov
2019-04-06  0:16 ` Ming Lei [this message]
2019-04-06  6:09   ` Nikolay Borisov
2019-04-06  8:00     ` Qu Wenruo
2019-04-06 12:30     ` Ming Lei
2019-04-08  9:52   ` Johannes Thumshirn
2019-04-08 10:19     ` Ming Lei
2019-04-08 10:22       ` Johannes Thumshirn
