linux-mtd.lists.infradead.org archive mirror
From: Richard Weinberger <richard@nod.at>
To: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>,
	Brian Norris <computersforpeace@gmail.com>
Cc: linux-mtd@lists.infradead.org
Subject: Re: [PATCH] ubi: do not re-read the data already read out in retry
Date: Mon, 24 Aug 2015 08:46:19 +0200	[thread overview]
Message-ID: <55DABDBB.4080404@nod.at> (raw)
In-Reply-To: <55DA6DC8.9010308@cn.fujitsu.com>

On 24.08.2015 03:05, Dongsheng Yang wrote:
> On 08/21/2015 05:17 PM, Richard Weinberger wrote:
>> On 21.08.2015 09:59, Dongsheng Yang wrote:
>>> In ubi_io_read(), we retry if the current read fails to
>>> return all of the data we wanted. But the retry path currently
>>> re-reads everything from the beginning. Instead, we can skip
>>> the data that was already read out by the previous attempt.
>>>
>>> Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
>>> ---
>>>   drivers/mtd/ubi/io.c | 19 +++++++++++++------
>>>   1 file changed, 13 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c
>>> index 5bbd1f0..a3ac643 100644
>>> --- a/drivers/mtd/ubi/io.c
>>> +++ b/drivers/mtd/ubi/io.c
>>> @@ -127,7 +127,7 @@ int ubi_io_read(const struct ubi_device *ubi, void *buf, int pnum, int offset,
>>>           int len)
>>>   {
>>>       int err, retries = 0;
>>> -    size_t read;
>>> +    size_t read, already_read = 0;
>>>       loff_t addr;
>>>
>>>       dbg_io("read %d bytes from PEB %d:%d", len, pnum, offset);
>>> @@ -165,6 +165,7 @@ int ubi_io_read(const struct ubi_device *ubi, void *buf, int pnum, int offset,
>>>       addr = (loff_t)pnum * ubi->peb_size + offset;
>>>   retry:
>>>       err = mtd_read(ubi->mtd, addr, len, &read, buf);
>>> +    already_read += read;
>>
>> Hmm, this change makes me nervous.
> 
> Ha, yes, understandable.
>>
>> Brian, does the MTD core guarantee that after an erroneous mtd_read() the first "read" bytes
>> in "buf" are valid?
>>
>> So, my fear is that this change will introduce new regressions (due to faulty MTD drivers, etc.)
>> without a real gain.
> 
> I would say "big gain" rather than "real gain". Consider this case: if
> you are going to read 4M from the driver but it fails at the last byte
> twice, then when we succeed on the third retry we have read out 4*3=12M
> for the 4M of data the user requested. That triples the latency.

How do you hit this case?
What error is mtd_read() returning?

I have the feeling that this is more of a theoretical fix.

Thanks,
//richard
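
For reference, the retry strategy under discussion, advancing the buffer, address, and remaining length by the bytes already read instead of restarting from scratch, can be sketched outside the kernel as follows. This is a simplified, hypothetical model: fake_mtd_read(), total_transferred, and read_with_retry() are stand-ins invented for illustration, and the fake driver (unlike a possibly faulty real MTD driver, which is exactly Richard's concern) always leaves valid partial data in the buffer. It is not the actual UBI code.

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Bytes handed over by the fake driver across all calls, so the
 * saving from not re-reading can be observed. */
static size_t total_transferred;

/* Hypothetical stand-in for mtd_read(): the first two calls copy
 * everything except the last requested byte and fail with -EIO;
 * the third call succeeds. The partial data it returns is valid. */
static int fake_mtd_read(long long addr, size_t len, size_t *retlen,
                         char *buf)
{
	static int calls;

	(void)addr;
	if (++calls <= 2) {
		size_t partial = len - 1;

		memset(buf, 'A', partial);
		*retlen = partial;
		total_transferred += partial;
		return -EIO;
	}
	memset(buf, 'A', len);
	*retlen = len;
	total_transferred += len;
	return 0;
}

/* Retry loop in the spirit of the patch: on each retry, skip the
 * bytes that earlier attempts already placed in the buffer. */
static int read_with_retry(long long addr, char *buf, size_t len)
{
	size_t read, already_read = 0;
	int err, retries = 0;

	for (;;) {
		err = fake_mtd_read(addr + already_read,
				    len - already_read,
				    &read, buf + already_read);
		already_read += read;
		if (!err)
			return 0;
		if (++retries > 3)
			return err;
	}
}
```

For a 16-byte read with the failure pattern above, only 15 + 0 + 1 = 16 bytes cross the driver; a full re-read on every retry would move 15 + 15 + 16 = 46. That ratio is what Dongsheng's 4M example scales up.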


Thread overview: 8+ messages
2015-08-21  7:59 [PATCH] ubi: do not re-read the data already read out in retry Dongsheng Yang
2015-08-21  9:17 ` Richard Weinberger
2015-08-24  1:05   ` Dongsheng Yang
2015-08-24  6:46     ` Richard Weinberger [this message]
2015-08-24  7:04       ` Dongsheng Yang
2015-08-24 16:59         ` Brian Norris
2015-08-25  0:34           ` Dongsheng Yang
2015-08-24 16:50     ` Brian Norris
