linux-arm-kernel.lists.infradead.org archive mirror
From: joerg.krause@embedded.rocks (Jörg Krause)
To: linux-arm-kernel@lists.infradead.org
Subject: Low network throughput on i.MX28
Date: Sun, 20 Nov 2016 10:14:35 +0100	[thread overview]
Message-ID: <1479633275.13699.1.camel@embedded.rocks> (raw)
In-Reply-To: <1442387496.277549.fb1ea129-460b-466b-9575-ed6f40b78b7e.open-xchange@email.1und1.de>

Hi Stefan,

On Sat, 2016-11-19 at 12:36 +0100, Stefan Wahren wrote:
> Hi Jörg,
> 
> > Jörg Krause <joerg.krause@embedded.rocks> wrote on 19 November 2016
> > at 00:49:
> > 
> > 
> > Hi all,
> > 
> > [snip]
> > 
> > I did some time measurements on the wifi, mmc and dma driver to
> > compare
> > the performance between the vendor and the mainline kernel. For
> > this I
> > toggled some GPIOs and measured the time difference with an osci. I
> > started measuring the time before calling sdio_readsb() in the wifi
> > driver [1] and stopped the time when the call returns. Note that
> > the
> > time was only measured for a packet length of 1536 bytes.
> > 
> > The vendor kernel took about 250 us to return whereas the mainline
> > kernel took about 325 us. To investigate where this additional time
> > comes from, I divided the whole procedure into separate parts and
> > compared the time each one consumed.
> > 
> > I noticed that the mainline kernel takes much longer to return
> > after the DMA request is done, signalled in this case by calling
> > mxs_mmc_dma_irq_callback() [2] in the mxs-mmc driver. From here it
> > takes about 150 us to get back to sdio_readsb().
> > 
> > An example of consuming much more time is the mainline mmc driver,
> > where it spends about 50 us in mmc_wait_done() [3] just calling
> > complete(), whereas the vendor mmc driver returns almost
> > immediately here.
> > 
> > I wonder why this call to complete() consumes so much time. Any
> > ideas?
> 
> I don't know why, but how about putting the SDIO CLK signal in
> parallel with the GPIOs on your oscilloscope? That would give a
> better view of the runtime behavior.

Unfortunately, the board layout does not allow me to access the SDIO
pins.

The main question for me is why the mmc core driver needs around 120
us from the call to complete() in mmc_wait_done() [1] until the
completion signal is received in mmc_wait_for_req_done() [2]. Why does
signaling the completion consume so much time?

For comparison, the whole mmc request (preparing the request,
preparing the DMA, doing the DMA, waiting, reading the response,
starting to signal completion) takes about 215 us, whereas just
signaling that the completion is done takes 120 us. For me, this is
the bottleneck.

Does anyone have an idea why signaling the completion is so slow?

[1] http://lxr.free-electrons.com/source/drivers/mmc/core/core.c#L386
[2] http://lxr.free-electrons.com/source/drivers/mmc/core/core.c#L492

> Btw, you should also verify the necessary time between two packets.
> 
> Stefan
> 
> > 
> > [1] http://lxr.free-electrons.com/source/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c#L488
> > 
> > [2] http://lxr.free-electrons.com/source/drivers/mmc/host/mxs-mmc.c#L179
> > 
> > [3] http://lxr.free-electrons.com/source/drivers/mmc/core/core.c#L386
> > 
> > Best regards,
> > Jörg Krause

Thread overview: 31+ messages
2016-10-12 23:09 Low network throughput on i.MX28 Jörg Krause
2016-10-13  6:48 ` Lothar Waßmann
2016-10-13 19:43   ` Jörg Krause
2016-10-13 20:42     ` Uwe Kleine-König
2016-10-14  6:13     ` Lothar Waßmann
2016-10-15  8:46       ` Jörg Krause
2016-10-15  8:59         ` Stefan Wahren
2016-10-15  9:41           ` Jörg Krause
2016-10-15 16:16             ` Stefan Wahren
2016-10-28 23:07               ` Jörg Krause
2016-10-29  9:08                 ` Stefan Wahren
2016-10-29 13:08                   ` Jörg Krause
2016-11-02  8:14                   ` Jörg Krause
2016-11-02  8:24                     ` Stefan Wahren
2016-11-02  8:30                       ` Jörg Krause
2016-11-04 18:44                       ` Jörg Krause
2016-11-04 19:30                         ` Stefan Wahren
2016-11-04 20:56                           ` Jörg Krause
2016-11-04 22:42                           ` Jörg Krause
2016-11-05 11:33                             ` Stefan Wahren
2016-11-05 12:06                               ` Jörg Krause
2016-11-05 12:39                                 ` Koul, Vinod
2016-11-05 12:47                                   ` Jörg Krause
2016-11-05 12:48                                   ` Fabio Estevam
2016-11-05 13:14                                   ` Jörg Krause
2016-11-05 15:45                                     ` Koul, Vinod
2016-11-05 22:37                                       ` Jörg Krause
2016-11-18 23:49                                       ` Jörg Krause
2016-11-19 11:36                                         ` Stefan Wahren
2016-11-20  9:14                                           ` Jörg Krause [this message]
2016-10-15 11:18           ` Jörg Krause
