From: Grygorii Strashko <grygorii.strashko-l0cyMroinI0@public.gmane.org>
To: Cyrille Pitchen
<cyrille.pitchen-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org>,
Mark Brown <broonie-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: <linus.walleij-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>,
"Nicolas.FERRE-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org"
<Nicolas.FERRE-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org>,
"Wenyou.Yang-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org"
<Wenyou.Yang-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org>,
"linux-spi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org"
<linux-spi-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
<hs-ynQEQJNshbs@public.gmane.org>,
"linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org"
<linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org>
Subject: Re: SPI: performance regression when using the common message queuing infrastructure
Date: Wed, 6 Jul 2016 13:03:19 +0300
Message-ID: <577CD767.2080309@ti.com>
In-Reply-To: <577CD464.6050506-AIFe0yeh4nAAvxtiuMwx3w@public.gmane.org>
On 07/06/2016 12:50 PM, Cyrille Pitchen wrote:
> Hi Mark,
>
> recently Heiko reported to us a performance regression with Atmel SPI
> controllers. He noticed the issue on a sam9g15ek board and I was also able to
> reproduce it on a sama5d36ek board.
>
> We found out that the performance regression was introduced in 3.14 by commit:
> 8090d6d1a415d3ae1a7208995decfab8f60f4f36
> spi: atmel: Refactor spi-atmel to use SPI framework queue
>
> For the test, I connected a Spansion S25FL512 memory to the SPI1 controller of
> a sama5d36ek board. Then, with an oscilloscope, I monitored the chip-select, clock
> and MOSI signals on the SPI bus.
>
>
> 1 - Reading 512 bytes from the memory
>
> # dd if=/dev/mtd6 bs=512 count=1 of=/dev/null
>
> With the oscilloscope, I measured the time from when the chip-select fell before
> the Read Status command (05h) to when it rose after all data had been read by the
> 4-byte address Fast Read 1-1-1 command (13h).
>
> 3.14 vanilla : 305 µs
> 3.14 commit 8090d6d1a415 reverted : 242 µs -21%
>
> 2 - Reading 1000 x 1024 bytes from the memory
>
> # dd if=/dev/mtd6 bs=1024 count=1000 of=/dev/null
>
> Still with the scope, I measured the time to read all data.
>
> 3.14 vanilla : 435 ms
> 3.14 commit 8090d6d1a415 reverted : 361 ms -17%
>
>
> Indeed the oscilloscope shows that more time is spent between messages and
> transfers.
>
> Commit 8090d6d1a415 replaced the tasklet used to manage the SPI message/transfer
> queue with the workqueue provided by the SPI framework.
>
> Support for this (optional) workqueue was introduced by commit:
> ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0
> spi: create a message queuing infrastructure
>
> Though the commit message claims this common infrastructure is optional,
> the patch also marks the .transfer() hook as deprecated, suggesting drivers
> should implement the new .transfer_one_message() hook instead.
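
For readers not familiar with the queued model, here is a minimal sketch of what
the framework expects from a .transfer_one_message() implementation. This is not
the actual atmel driver code; foo_spi, foo_do_xfer and foo_transfer_one_message
are placeholder names, and chip-select/cs_change handling is omitted:

#include <linux/spi/spi.h>

struct foo_spi {
	void __iomem *regs;		/* controller registers (placeholder) */
};

/* Placeholder: program the controller for one transfer and wait for it. */
static int foo_do_xfer(struct foo_spi *fs, struct spi_transfer *xfer)
{
	/* write xfer->tx_buf / read xfer->rx_buf, xfer->len bytes */
	return 0;
}

static int foo_transfer_one_message(struct spi_master *master,
				    struct spi_message *msg)
{
	struct foo_spi *fs = spi_master_get_devdata(master);
	struct spi_transfer *xfer;
	int ret = 0;

	/* The core's message pump has already dequeued msg for us. */
	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
		ret = foo_do_xfer(fs, xfer);
		if (ret)
			break;
		msg->actual_length += xfer->len;
	}

	msg->status = ret;
	/* Hand the message back to the core so it can pump the next one. */
	spi_finalize_current_message(master);

	return ret;
}

/*
 * In probe(), instead of the deprecated master->transfer hook:
 *	master->transfer_one_message = foo_transfer_one_message;
 */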
>
> This is the reason why commit 8090d6d1a415 was submitted. However, we lost
> quite a lot of performance moving from our tasklet to the generic workqueue.
>
> So do you recommend that we keep our current generic implementation relying on
> the SPI framework workqueue, or that we go back to a custom implementation using
> a tasklet?
> If we keep the current implementation, is there a way to improve the
> performance so we get back to something close to what we had before?
>
> We saw in commit ffbbdd21329f that we can change the workqueue thread
> scheduling policy to SCHED_FIFO by setting master->rt.
>
As far as I know, master->rt is not a good choice; you may find the thread at [1]
useful.

[1] http://www.spinics.net/lists/linux-rt-users/msg14347.html
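
For reference, master->rt is just a one-line flag a driver sets before registering
the master. In the core (spi_init_queue() in drivers/spi/spi.c of that era; this is
a simplified sketch from memory, so please check the source, not a verbatim copy),
all it does is switch the dedicated message-pump kthread to SCHED_FIFO, which is
why the RT-scheduling discussion above is relevant:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/sched/rt.h>
#include <linux/spi/spi.h>

/* Simplified sketch of the core's queue setup around v3.14. */
static int example_init_queue(struct spi_master *master)
{
	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

	init_kthread_worker(&master->kworker);
	master->kworker_task = kthread_run(kthread_worker_fn, &master->kworker,
					   "%s", dev_name(&master->dev));
	if (IS_ERR(master->kworker_task))
		return PTR_ERR(master->kworker_task);

	if (master->rt) {
		dev_info(&master->dev,
			 "will run message pump with realtime priority\n");
		/* this is the only thing master->rt changes */
		sched_setscheduler(master->kworker_task, SCHED_FIFO, &param);
	}

	return 0;
}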
--
regards,
-grygorii
Thread overview (7 messages):
2016-07-06  9:50 SPI: performance regression when using the common message queuing infrastructure  Cyrille Pitchen
2016-07-06 10:03 ` Grygorii Strashko [this message]
2016-07-07  8:12   ` Cyrille Pitchen
2016-07-25  4:51     ` Heiko Schocher
2016-07-29  9:33       ` Cyrille Pitchen
2016-07-29 12:11         ` Mark Brown
2016-07-07  9:50 ` Mark Brown