From: Adrian Hunter <adrian.hunter@intel.com>
To: Evgeniy Didin <Evgeniy.Didin@synopsys.com>
Cc: "ulf.hansson@linaro.org" <ulf.hansson@linaro.org>,
	"linux-ext4@vger.kernel.org" <linux-ext4@vger.kernel.org>,
	"tytso@mit.edu" <tytso@mit.edu>,
	"linus.walleij@linaro.org" <linus.walleij@linaro.org>,
	"Alexey.Brodkin@synopsys.com" <Alexey.Brodkin@synopsys.com>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"jh80.chung@samsung.com" <jh80.chung@samsung.com>,
	"adilger.kernel@dilger.ca" <adilger.kernel@dilger.ca>,
	"linux-snps-arc@lists.infradead.org"
	<linux-snps-arc@lists.infradead.org>,
	"Eugeniy.Paltsev@synopsys.com" <Eugeniy.Paltsev@synopsys.com>
Subject: Re: mmc: block: bonnie++ runs with errors on arc/hsdk board
Date: Tue, 20 Mar 2018 10:29:16 +0200
Message-ID: <e00cf7f1-6ed8-124d-54b7-81e3d5722b10@intel.com>
In-Reply-To: <1521220236.10304.10.camel@synopsys.com>

On 16/03/18 19:10, Evgeniy Didin wrote:
> Hello Adrian,
> 
>> Yes.  Unfortunately the clock used is not accurate enough to correctly order
>> the events across different CPUs, which makes it very hard to see delays
>> between requests.  You could try a different clock - refer to the --clockid
>> option of perf record.
>>
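
For example, something along these lines (the tracepoint names and the
clock choice are only an illustration - check "perf list" for what your
kernel actually exposes):

	# Record MMC request tracepoints system-wide using a monotonic clock,
	# so timestamps from different CPUs sort consistently.
	perf record -a --clockid monotonic \
		-e mmc:mmc_request_start -e mmc:mmc_request_done -- sleep 30
	# Print the events with nanosecond timestamps.
	perf script --ns
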
>> Nevertheless it shows there are no I/O errors which means the error recovery
>> can be ruled out as a problem.
>>
>> The issue could be caused by the I/O scheduler.  Under blk-mq the default
>> scheduler is the mq-deadline scheduler whereas without blk-mq you would
>> probably have been using cfq by default.  You could try the bfq scheduler:
>>
>> 	echo bfq > /sys/block/mmcblk0/queue/scheduler
>>
>> But you might need to add it to the kernel config i.e.
>>
>> 	CONFIG_IOSCHED_BFQ=y
>>
> Switching from the mq-deadline scheduler to bfq fixed the issue.
> The bonnie++ results have also changed:
> -----------------------------------------------<8----------------------------------------------------------------------------
> bfq scheduler:
> ARCLinux,512M,6463,87,7297,0,5450,0,9827,99,342952,99,+++++,+++,16,17525,100,+++++,+++,24329,99,17621,100,+++++,+++,24001,101
> 
> mq-deadline scheduler:
> ARCLinux,512M,4453,36,6474,1,5852,0,12940,99,344329,100,+++++,+++,16,22168,98,+++++,+++,32760,99,22755,100,+++++,+++,32205,100
> -----------------------------------------------<8----------------------------------------------------------------------------
> As far as I can see, the performance of sequential input per char and of
> file operations has decreased by ~25%.
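
For reference, a quick way to sanity-check those figures straight from the
CSV lines (the field position is an assumption based on the usual bonnie++
CSV layout, where sequential input per char is field 9):

	awk -F, 'NR==1 { bfq = $9 } NR==2 { dl = $9;
		printf "bfq=%d mq-deadline=%d change=%.1f%%\n",
			bfq, dl, (bfq - dl) * 100 / dl }' <<-EOF
	ARCLinux,512M,6463,87,7297,0,5450,0,9827,99,342952,99,+++++,+++,16,17525,100,+++++,+++,24329,99,17621,100,+++++,+++,24001,101
	ARCLinux,512M,4453,36,6474,1,5852,0,12940,99,344329,100,+++++,+++,16,22168,98,+++++,+++,32760,99,22755,100,+++++,+++,32205,100
	EOF

which comes out at about -24% for the bfq run relative to mq-deadline.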

You may need to aggregate more runs, and also compare BFQ with blk-mq
against CFQ without blk-mq.  If you think BFQ is under-performing, then
contact the BFQ maintainers.
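
Something like this (paths, sizes and the -x count below are just examples
to adapt) would give an average over several runs per scheduler rather than
a single sample:

	# Run bonnie++ N times per scheduler and average the seq-input-per-char
	# column (assumed to be CSV field 9); adjust device, directory and user.
	for sched in mq-deadline bfq; do
		echo "$sched" > /sys/block/mmcblk0/queue/scheduler
		bonnie++ -q -x 5 -d /mnt/test -s 512 -u root 2>/dev/null |
			awk -F, -v s="$sched" '{ sum += $9; n++ }
				END { if (n) printf "%s: getc avg %.0f KB/s over %d runs\n",
					s, sum / n, n }'
	done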

> 
> Do you have any idea what could be the reason for such a long stall in
> the case of the mq-deadline I/O scheduler?

Write starvation.

>                                 I would expect that if there is some long
> async operation, the kernel should not be blocked.

The kernel is not blocked.  AFAICT it is the EXT4 journal that is
blocked waiting on a write.
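
You can usually confirm that while the stall is happening, e.g. (the jbd2
thread name depends on the partition, and /proc/<pid>/stack needs
CONFIG_STACKTRACE, so treat this as a sketch):

	# Dump the kernel stacks of the jbd2 journal threads.
	for pid in $(pgrep jbd2); do
		echo "== $(cat /proc/$pid/comm)"
		cat /proc/$pid/stack
	done

	# Or dump all tasks in uninterruptible sleep via sysrq.
	echo w > /proc/sysrq-trigger
	dmesg | tail -50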

>                                                But what we see using
> mq-deadline is the kernel blocked in bit_wait_io().  Do you think this is
> valid behavior, at least in the case of the mq-deadline I/O scheduler?

mq-deadline is designed to favour reads over writes, so in that sense some
amount of write-starvation is normal.

> 
>> Alternatively you could fiddle with the scheduler parameters:
>>
>> With mq-deadline they are:
>>
>> # grep -H . /sys/block/mmcblk0/queue/iosched/*
>> /sys/block/mmcblk0/queue/iosched/fifo_batch:16
>> /sys/block/mmcblk0/queue/iosched/front_merges:1
>> /sys/block/mmcblk0/queue/iosched/read_expire:500
>> /sys/block/mmcblk0/queue/iosched/write_expire:5000
>> /sys/block/mmcblk0/queue/iosched/writes_starved:2
>>
>> You could try decreasing the write_expire and/or fifo_batch.
> It seems that decreasing them doesn't have any effect on this issue.

That is surprising.  You could also try writes_starved=1 or
writes_starved=0.
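
i.e. something along these lines, re-checking the values afterwards (the
numbers are only a starting point to experiment with):

	# Let writes be dispatched after at most one preferred read batch, and
	# expire queued writes sooner than the default 5000 ms.
	echo 1    > /sys/block/mmcblk0/queue/iosched/writes_starved
	echo 1000 > /sys/block/mmcblk0/queue/iosched/write_expire
	echo 4    > /sys/block/mmcblk0/queue/iosched/fifo_batch
	grep -H . /sys/block/mmcblk0/queue/iosched/*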
