public inbox for linux-mediatek@lists.infradead.org
From: "Ed Tsai (蔡宗軒)" <Ed.Tsai@mediatek.com>
To: "ming.lei@redhat.com" <ming.lei@redhat.com>
Cc: "Will Shiu (許恭瑜)" <Will.Shiu@mediatek.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mediatek@lists.infradead.org"
	<linux-mediatek@lists.infradead.org>,
	"Peter Wang (王信友)" <peter.wang@mediatek.com>,
	"Alice Chao (趙珮均)" <Alice.Chao@mediatek.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	wsd_upstream <wsd_upstream@mediatek.com>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"Casper Li (李中榮)" <casper.li@mediatek.com>,
	"Chun-Hung Wu (巫駿宏)" <Chun-hung.Wu@mediatek.com>,
	"Powen Kao (高伯文)" <Powen.Kao@mediatek.com>,
	"Naomi Chu (朱詠田)" <Naomi.Chu@mediatek.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"Stanley Chu (朱原陞)" <stanley.chu@mediatek.com>,
	"matthias.bgg@gmail.com" <matthias.bgg@gmail.com>,
	"angelogioacchino.delregno@collabora.com"
	<angelogioacchino.delregno@collabora.com>
Subject: Re: [PATCH 1/1] block: Check the queue limit before bio submitting
Date: Tue, 7 Nov 2023 04:35:11 +0000	[thread overview]
Message-ID: <ec36dfd24bac079b128baa0251914d4c055a0c88.camel@mediatek.com> (raw)
In-Reply-To: <ZUmznGeKhJKnE7wx@fedora>

On Tue, 2023-11-07 at 11:48 +0800, Ming Lei wrote:
>  On Tue, Nov 07, 2023 at 02:53:20AM +0000, Ed Tsai (蔡宗軒) wrote:
> > On Mon, 2023-11-06 at 19:54 +0800, Ming Lei wrote:
> > >  On Mon, Nov 06, 2023 at 12:53:31PM +0800, Ming Lei wrote:
> > > > On Mon, Nov 06, 2023 at 01:40:12AM +0000, Ed Tsai (蔡宗軒) wrote:
> > > > > On Mon, 2023-11-06 at 09:33 +0800, Ed Tsai wrote:
> > > > > > On Sat, 2023-11-04 at 11:43 +0800, Ming Lei wrote:
> > > > 
> > > > ...
> > > > 
> > > > > Sorry for missing out on my dd command. Here it is:
> > > > > dd if=/data/test_file of=/dev/null bs=64m count=1 iflag=direct
> > > > 
> > > > OK, thanks for the sharing.
> > > > 
> > > > I understand the issue now, but not sure if it is one good idea to
> > > > check queue limit in __bio_iov_iter_get_pages():
> > > > 
> > > > 1) bio->bi_bdev may not be set
> > > > 
> > > > 2) what matters is actually bio's alignment, and bio size still can
> > > > be big enough
> > > > 
> > > > So I cooked one patch, and it should address your issue:
> > > 
> > > The following one fixes several bugs, and is verified to be capable of
> > > making big & aligned bios, feel free to run your test against this one:
> > > 
> > >  block/bio.c | 28 +++++++++++++++++++++++++++-
> > >  1 file changed, 27 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/block/bio.c b/block/bio.c
> > > index 816d412c06e9..80b36ce57510 100644
> > > --- a/block/bio.c
> > > +++ b/block/bio.c
> > > @@ -1211,6 +1211,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
> > >  }
> > >  
> > >  #define PAGE_PTRS_PER_BVEC     (sizeof(struct bio_vec) / sizeof(struct page *))
> > > +#define BIO_CHUNK_SIZE	(256U << 10)
> > >  
> > >  /**
> > >   * __bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
> > > @@ -1266,6 +1267,31 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> > >  		size -= trim;
> > >  	}
> > >  
> > > +	/*
> > > +	 * Try to make bio aligned with 256KB if it isn't the last one, so
> > > +	 * we can avoid small bio in case of big chunk sequential IO because
> > > +	 * of bio split and multipage bvec.
> > > +	 *
> > > +	 * If nothing is added to this bio, simply allow unaligned since we
> > > +	 * have chance to add more bytes
> > > +	 */
> > > +	if (iov_iter_count(iter) && bio->bi_iter.bi_size) {
> > > +		unsigned int aligned_size = (bio->bi_iter.bi_size + size) &
> > > +				~(BIO_CHUNK_SIZE - 1);
> > > +
> > > +		if (aligned_size <= bio->bi_iter.bi_size) {
> > > +			/* stop to add page if this bio can't keep aligned */
> > > +			if (!(bio->bi_iter.bi_size & (BIO_CHUNK_SIZE - 1))) {
> > > +				ret = left = size;
> > > +				goto revert;
> > > +			}
> > > +		} else {
> > > +			aligned_size -= bio->bi_iter.bi_size;
> > > +			iov_iter_revert(iter, size - aligned_size);
> > > +			size = aligned_size;
> > > +		}
> > > +	}
> > > +
> > > +
> > >  	if (unlikely(!size)) {
> > >  		ret = -EFAULT;
> > >  		goto out;
> > > @@ -1285,7 +1311,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> > >  
> > >  		offset = 0;
> > >  	}
> > > -
> > > +revert:
> > >  	iov_iter_revert(iter, left);
> > >  out:
> > >  	while (i < nr_pages)
> > > -- 
> > > 2.41.0
> > > 
> > > 
> > > 
> > > Thanks, 
> > > Ming
> > > 
> > 
> > The latest patch you provided with 256KB alignment does help alleviate
> > the severity of fragmentation. However, the actual aligned size may
> > vary depending on the device. Using a fixed, universal size of 128KB
> > or 256KB only provides partial relief from fragmentation.
> > 
> > I performed a dd direct I/O read of 64MB with your patch, and although
> > most of the bios were aligned, there were still cases of misalignment
> > to the device limit (e.g., 512MB for my device), as shown below:
> 
> 512MB is really big, and actually you have reached 3520MB/sec in READ by
> limiting max bio size to 1MB in your original patch.
> 
> Just curious: what is the data if you change to align with max_sectors
> against my last patch, which can try to maximize & align the bio?

Sorry, it is a typo. Please disregard it. It should be 512KB instead.

> 
> > 
> > dd [000] ..... 392.976830: block_bio_queue: 254,52 R 2997760 + 3584
> > dd [000] ..... 392.979940: block_bio_queue: 254,52 R 3001344 + 3584
> > dd [000] ..... 392.983235: block_bio_queue: 254,52 R 3004928 + 3584
> > dd [000] ..... 392.986468: block_bio_queue: 254,52 R 3008512 + 3584
> 
> Yeah, I thought that 128KB should be fine for usual hardware, but
> looks not good enough.
> 
> > 
> > Comparing the results of the Antutu Sequential test to the previous
> > data, it is indeed an improvement, but still slightly behind limiting
> > the bio size to max_sectors:
> > 
> > Sequential Read (average of 5 rounds):
> > Original: 3033.7 MB/sec
> > Limited to max_sectors: 3520.9 MB/sec
> > Aligned 256KB: 3471.5 MB/sec
> > 
> > Sequential Write (average of 5 rounds):
> > Original: 2225.4 MB/sec
> > Limited to max_sectors: 2800.3 MB/sec
> > Aligned 256KB: 2618.1 MB/sec
> 
> Thanks for sharing the data.
> 
> > 
> > What if we limit the bio size only for those devices that have set
> > max_sectors?
> 
> I think it may be doable, but we need a smarter approach to avoid the
> extra cost of iov_iter_revert(), and one way is to add bio_shrink()
> (or bio_revert()) to run the alignment just once.
> 
> I will think further and write a new patch if it is doable.
> 
> 
> 
> Thanks,
> Ming
> 

Thank you very much. I will keep following this issue in case any
difficulties or alternative directions arise.

Best,
Ed

Thread overview: 15+ messages
2023-10-25  9:22 [PATCH 1/1] block: Check the queue limit before bio submitting ed.tsai
2023-11-01  2:23 ` Ed Tsai (蔡宗軒)
2023-11-03  8:15   ` Christoph Hellwig
2023-11-03  9:05     ` Ed Tsai (蔡宗軒)
2023-11-03 16:20   ` Ming Lei
2023-11-04  1:11     ` Ed Tsai (蔡宗軒)
2023-11-04  2:12       ` Yu Kuai
2023-11-04  3:43       ` Ming Lei
2023-11-06  1:33         ` Ed Tsai (蔡宗軒)
2023-11-06  1:40           ` Ed Tsai (蔡宗軒)
2023-11-06  4:53             ` Ming Lei
2023-11-06 11:54               ` Ming Lei
2023-11-07  2:53                 ` Ed Tsai (蔡宗軒)
2023-11-07  3:48                   ` Ming Lei
2023-11-07  4:35                     ` Ed Tsai (蔡宗軒) [this message]
