linux-tegra.vger.kernel.org archive mirror
From: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
To: Adrian Hunter <adrian.hunter@intel.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>,
	Chris Ball <cjb@laptop.org>,
	linux-mmc@vger.kernel.org, linux-tegra@vger.kernel.org,
	Stephen Warren <swarren@nvidia.com>,
	Dong Aisheng <dongas86@gmail.com>,
	Ulf Hansson <ulf.hansson@linaro.org>,
	Vladimir Zapolskiy <vz@mleia.com>
Subject: Re: [PATCH] mmc: core: don't return 1 for max_discard
Date: Thu, 19 Dec 2013 10:14:31 +0100	[thread overview]
Message-ID: <52B2B8F7.1000905@mentor.com> (raw)
In-Reply-To: <52B2B5DF.1020702@intel.com>

On 12/19/13 10:01, Adrian Hunter wrote:
> On 19/12/13 01:00, Stephen Warren wrote:
>> On 12/18/2013 03:27 PM, Stephen Warren wrote:
>>> From: Stephen Warren <swarren@nvidia.com>
>>>
>>> In mmc_do_calc_max_discard(), if only a single erase block can be
>>> discarded within the host controller's timeout, don't allow discard
>>> operations at all.
>>>
>>> Previously, the code allowed sector-at-a-time discard (rather than
>>> erase-block-at-a-time), which was chronically slow.
>>>
>>> Without this patch, on the NVIDIA Tegra Cardhu board, the loops result
>>> in qty == 1, which is immediately returned. This causes discard to
>>> operate a single sector at a time, which is chronically slow. With this
>>> patch in place, discard operates a single erase block at a time, which
>>> is reasonably fast.
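
(A rough, self-contained model of the numbers being described above. All
constants and the 1 GiB request size below are invented for illustration;
the real logic lives in mmc_do_calc_max_discard() in drivers/mmc/core/core.c.)

    #include <stdio.h>

    /* Invented example figures for a Cardhu-like setup. */
    #define HOST_MAX_DISCARD_MS   2700u              /* longest busy wait the host allows */
    #define ERASE_GROUP_MS        1500u              /* erase time for one erase group    */
    #define ERASE_GROUP_SECTORS   (4u * 1024 * 2)    /* 4 MiB erase group, 512 B sectors  */

    int main(void)
    {
            /* How many whole erase groups fit inside the host's timeout? */
            unsigned int qty = HOST_MAX_DISCARD_MS / ERASE_GROUP_MS;   /* == 1 here */

            /* Pre-patch: qty == 1 is handed back as a max_discard of one
             * *sector*, so a big discard is chopped into single-sector erases. */
            unsigned int discard_sectors = 1u * 1024 * 1024 * 2;       /* 1 GiB request */

            printf("qty = %u\n", qty);
            printf("erase commands at 1 sector/cmd:      %u\n", discard_sectors / 1);
            printf("erase commands at 1 erase group/cmd: %u\n",
                   discard_sectors / ERASE_GROUP_SECTORS);
            return 0;
    }

With these made-up figures a 1 GiB discard costs about two million ERASE
commands at one sector per command, versus a few hundred at one erase group
per command, which is the "chronically slow" vs "reasonably fast" difference
the commit message describes.
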
>>
>> Alternatively, is the real fix a revert of e056a1b5b67b "mmc: queue: let
>> host controllers specify maximum discard timeout", followed by:
>>
>>> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
>>> index 050eb262485c..35c5b5d86c99 100644
>>> --- a/drivers/mmc/core/core.c
>>> +++ b/drivers/mmc/core/core.c
>>> @@ -1950,7 +1950,6 @@ static int mmc_do_erase(struct mmc_card *card, unsigned int from,
>>>          cmd.opcode = MMC_ERASE;
>>>          cmd.arg = arg;
>>>          cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC;
>>> -       cmd.cmd_timeout_ms = mmc_erase_timeout(card, arg, qty);
>>>          err = mmc_wait_for_cmd(card->host, &cmd, 0);
>>>          if (err) {
>>>                  pr_err("mmc_erase: erase error %d, status %#x\n",
>>> @@ -1962,7 +1961,7 @@ static int mmc_do_erase(struct mmc_card *card, unsigned int from,
>>>          if (mmc_host_is_spi(card->host))
>>>                  goto out;
>>>
>>> -       timeout = jiffies + msecs_to_jiffies(MMC_CORE_TIMEOUT_MS);
>>> +       timeout = jiffies + msecs_to_jiffies(mmc_erase_timeout(card, arg, qty));
>>>          do {
>>>                  memset(&cmd, 0, sizeof(struct mmc_command));
>>>                  cmd.opcode = MMC_SEND_STATUS;
>>
>> That certainly also seems to solve the problem on my board...
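
(Roughly what the diff above changes, modelled in user space: the erase time
is no longer programmed into the controller as cmd_timeout_ms; instead the
ERASE command keeps a short default timeout and completion is waited for in
software by polling SEND_STATUS against a deadline derived from the erase
timeout. The helpers below are stand-ins for illustration, not kernel API.)

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Fakes standing in for mmc_erase_timeout() and a CMD13 busy check. */
    static unsigned int erase_timeout_ms(unsigned int qty)
    {
            return 300u * qty;              /* made-up per-group figure */
    }

    static bool card_busy(void)
    {
            static int polls;
            return ++polls < 5;             /* pretend the card finishes after 4 polls */
    }

    static long now_ms(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
    }

    int main(void)
    {
            /* 1. ERASE (CMD38) would be sent with only the default command
             *    timeout, not the full erase time.
             * 2. Completion is then waited for in software: poll SEND_STATUS
             *    until the erase-timeout deadline expires. */
            long deadline = now_ms() + erase_timeout_ms(16);

            while (card_busy()) {
                    if (now_ms() > deadline) {
                            fprintf(stderr, "erase timed out\n");
                            return 1;
                    }
                    /* a real implementation would sleep between polls */
            }
            printf("erase finished within the software deadline\n");
            return 0;
    }
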
>
> But large erases will time out when they should have been split into smaller
> chunks.
>
> A generic solution needs to be able to explain what happens when the host
> controller *does* timeout.

Please correct me if I'm wrong, but if the Data Timeout Error is disabled,
then this is not an issue for most host controllers.
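
To put rough numbers on the trade-off (the figures below are invented, just
to make the sizes concrete): splitting keeps each ERASE command inside the
host's busy timeout, while issuing one huge erase only works if the
controller's Data Timeout Error is ignored or disabled.

    #include <stdio.h>

    /* Invented figures: a host that tolerates ~2.7 s of busy signalling per
     * command and a card that needs ~300 ms per erase group. */
    #define HOST_MAX_BUSY_MS   2700u
    #define ERASE_GROUP_MS     300u

    int main(void)
    {
            unsigned int total_groups = 256;                           /* one large discard */
            unsigned int per_cmd = HOST_MAX_BUSY_MS / ERASE_GROUP_MS;  /* 9 groups/command  */
            unsigned int done;

            /* Split the request so each ERASE stays inside the host's busy
             * timeout; a single 256-group erase would overrun it unless the
             * controller's Data Timeout Error is masked. */
            for (done = 0; done < total_groups; done += per_cmd) {
                    unsigned int n = total_groups - done;

                    if (n > per_cmd)
                            n = per_cmd;
                    printf("erase groups %3u..%3u  (~%u ms busy)\n",
                           done, done + n - 1, n * ERASE_GROUP_MS);
            }
            return 0;
    }
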

With best wishes,
Vladimir


Thread overview: 18+ messages
2013-12-18 22:27 [PATCH] mmc: core: don't return 1 for max_discard Stephen Warren
     [not found] ` <1387405663-14253-1-git-send-email-swarren-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
2013-12-18 23:00   ` Stephen Warren
2013-12-19  8:22     ` Vladimir Zapolskiy
     [not found]     ` <52B22906.4010704-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
2013-12-19  9:01       ` Adrian Hunter
2013-12-19  9:14         ` Vladimir Zapolskiy [this message]
     [not found]           ` <52B2B8F7.1000905-nmGgyN9QBj3QT0dZR+AlfA@public.gmane.org>
2013-12-19  9:42             ` Adrian Hunter
     [not found]               ` <52B2BF95.302-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2013-12-19 10:26                 ` Ulf Hansson
2013-12-19 11:18                   ` Dong Aisheng
2013-12-19 13:04                     ` Ulf Hansson
     [not found]                   ` <CAPDyKFoiGzspgrtRwXruPqOODxbfKA4AAZHj_VF8H7rpwm7eTQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-12-19 12:28                     ` Adrian Hunter
2013-12-19 13:29                       ` Ulf Hansson
     [not found]                         ` <CAPDyKFp1B6r+WyAO9PocL13LvzjsZDJ3HOUbXwJ+uTQ2Ayv-ug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-12-19 13:49                           ` Adrian Hunter
2013-12-19 19:11         ` Stephen Warren
     [not found]           ` <52B344E0.5080009-3lzwWm7+Weoh9ZMKESR00Q@public.gmane.org>
2013-12-20  7:17             ` Adrian Hunter
2013-12-19  9:05       ` Dong Aisheng
2013-12-19 19:15         ` Stephen Warren
2013-12-19  8:39 ` Dong Aisheng
     [not found]   ` <CAA+hA=StHAna46_356Gfpaa+4Y3yt6KO15W6E7dS8uoz8TPqxg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-12-19 19:08     ` Stephen Warren
