linux-crypto.vger.kernel.org archive mirror
From: Corentin Labbe <clabbe.montjoie@gmail.com>
To: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
	Baolin Wang <baolin.wang@linaro.org>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Horia Geanta <horia.geanta@nxp.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Maxime Ripard <mripard@kernel.org>,
	Aymen Sghaier <aymen.sghaier@nxp.com>,
	"David S. Miller" <davem@davemloft.net>,
	Silvano Di Ninno <silvano.dininno@nxp.com>,
	Franck Lenormand <franck.lenormand@nxp.com>,
	"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	dl-linux-imx <linux-imx@nxp.com>
Subject: Re: [RFC PATCH] Crypto-engine support for parallel requests
Date: Wed, 22 Jan 2020 14:22:04 +0100	[thread overview]
Message-ID: <20200122132204.GB13173@Red> (raw)
In-Reply-To: <VI1PR04MB44455343230CBA7400D21C998C0C0@VI1PR04MB4445.eurprd04.prod.outlook.com>

On Wed, Jan 22, 2020 at 12:29:22PM +0000, Iuliana Prodan wrote:
> On 1/22/2020 12:41 PM, Corentin Labbe wrote:
> > On Tue, Jan 21, 2020 at 02:20:27PM +0000, Iuliana Prodan wrote:
> >> On 1/21/2020 12:00 PM, Corentin Labbe wrote:
> >>> On Tue, Jan 21, 2020 at 01:32:29AM +0200, Iuliana Prodan wrote:
> >>>> Added support for executing multiple requests, in parallel, for
> >>>> the crypto engine.
> >>>> A no_reqs value is initialized and set in the new
> >>>> crypto_engine_alloc_init_and_set() function.
> >>>> 
> >>>>
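For context: today a driver allocates its engine with crypto_engine_alloc_init(); as I read the RFC, the new helper additionally tells the engine how many requests the hardware can handle in parallel. A rough sketch, with the exact parameter list of the new function assumed rather than copied from the patch:

	/* existing helper, in the driver probe() path */
	priv->engine = crypto_engine_alloc_init(dev, true);
	if (!priv->engine)
		return -ENOMEM;

	/*
	 * Sketch of the new helper: same as above, but also telling the
	 * engine how many requests may be in flight at once.  "priv" and
	 * the value 4 are placeholders, and the signature is assumed.
	 */
	priv->engine = crypto_engine_alloc_init_and_set(dev, true, 4);
	if (!priv->engine)
		return -ENOMEM;
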
> >>>
> >>> Hello
> >>>
> >>> In your model, who is running finalize_request() ?
> >> finalize_request() in CAAM, and in the other drivers (stm32, virtio
> >> and omap), is called from the _done callback.
> >>
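To make that concrete, the pattern is roughly the sketch below (driver names are made up; only crypto_finalize_skcipher_request() is existing crypto-engine API). The hardware "done" path, not do_one_request(), is what hands the request back to the engine:

	static void xxx_skcipher_done(struct device *dev, u32 status, void *context)
	{
		struct skcipher_request *req = context;
		struct xxx_dev *ddev = dev_get_drvdata(dev);
		int err = status ? -EIO : 0;

		/* optional per-request cleanup (unmap DMA, free descriptors, ...) */

		/* hand the finished request back to the crypto engine */
		crypto_finalize_skcipher_request(ddev->engine, req, err);
	}
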
> >>> In caam it seems that you have a taskqueue dedicated to that, but you cannot assume that all drivers will have this.
> >>> I think the crypto_engine should be sufficient by itself and should not need an external thread/taskqueue.
> >>>
> >>> But in your case, it seems that you don't have the choice, since do_one_request does not "do" anything but simply enqueues the request in the "jobring".
> >>>
> >> But do_one_request shouldn't necessarily execute the request. It is OK
> >> to enqueue it, since we have asynchronous requests. do_one_request is
> >> not blocking.
> >>
> >>> What about adding, alongside prepare/do_one_request/unprepare, new enqueue()/can_do_more() functions?
> >>>
> >>> The flow would be:
> >>> retry:
> >>> optional prepare
> >>> optional enqueue
> >>> optional can_do_more() (goto retry)
> >>> optional do_one_request
> >>>
> >>> then
> >>> finalize()
> >>> optional unprepare
> >>>
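In C-like pseudocode, the pump loop would then be something like this (enqueue() and can_do_more() are the hypothetical new callbacks, "ops" stands for the per-algorithm crypto_engine_op set, and error handling is omitted):

	retry:
		async_req = crypto_dequeue_request(&engine->queue);
		if (!async_req)
			return;

		if (ops->prepare_request)		/* optional */
			ops->prepare_request(engine, async_req);

		if (ops->enqueue)			/* optional: push into hw queue */
			ops->enqueue(engine, async_req);

		if (ops->can_do_more && ops->can_do_more(engine))
			goto retry;			/* hw can accept another request */

		if (ops->do_one_request)		/* optional: blocking style */
			ops->do_one_request(engine, async_req);

		/* later, from the completion path: */
		crypto_finalize_request(engine, async_req, err);
		if (ops->unprepare_request)		/* optional */
			ops->unprepare_request(engine, async_req);
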
> >>
> >> I'm planning to improve crypto-engine incrementally, so I'm taking one
> >> step at a time :)
> >> But I'm not sure that adding an enqueue operation is a good idea, since
> >> my understanding is that do_one_request is a non-blocking operation and
> >> shouldn't execute the request.
> > 
> > do_one_request is a blocking operation on amlogic/sun8i-ce/sun8i-ss, and the "documentation" is clear: "@do_one_request: do encryption for current request".
> > But I agree that this is a bit thin as documentation.
> > 
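For reference, the ops in question are declared in include/crypto/engine.h roughly as:

	struct crypto_engine_op {
		int (*prepare_request)(struct crypto_engine *engine, void *areq);
		int (*unprepare_request)(struct crypto_engine *engine, void *areq);
		int (*do_one_request)(struct crypto_engine *engine, void *areq);
	};
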
> 
> Herbert, Baolin,
> 
> What do you think about the do_one_request operation: should it be blocking or not?
> 
> There are several drivers (stm32, omap, virtio, caam) that use 
> crypto-engine and treat do_one_request as non-blocking; only the ones 
> mentioned and implemented by Corentin use do_one_request as blocking.
> 
> >>
> >> IMO, the crypto-engine flow should be kept simple:
> >> 1. a request comes in for the hw -> the driver calls transfer_request_to_engine;
> >> 2. the crypto engine enqueues the request;
> >> 3. on pump_requests:
> >> 	3. a) optional prepare operation
> >> 	3. b) send the request to hw via the do_one_request operation. Waiting
> >> for completion here would contradict the asynchronous crypto API.
> > 
> > There is no contradiction; the call is asynchronous for the user of the API.
> > 
> >> The do_one_request operation has a crypto_async_request as argument.
> >> Note: step 3. b) can be done several times, depending on the size of the hw queue.
> >> 4. in the driver, when the request is done:
> >> 	4. a) optional unprepare operation
> >> 	4. b) crypto_finalize_request is called
> >> 	
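A minimal sketch of that non-blocking style, with hypothetical driver names (xxx_*): do_one_request() only submits the job to the hardware and returns, and the done callback from step 4 is what finalizes the request:

	static int xxx_do_one_request(struct crypto_engine *engine, void *areq)
	{
		struct skcipher_request *req =
			container_of(areq, struct skcipher_request, base);
		struct xxx_dev *ddev = xxx_engine_to_dev(engine);

		/*
		 * Submit the descriptor to the hardware queue and return
		 * immediately; xxx_skcipher_done() (step 4) calls
		 * crypto_finalize_skcipher_request() when the job completes.
		 */
		return xxx_hw_enqueue(ddev, req, xxx_skcipher_done);
	}
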
> > 
> > Since Herbert says the same thing as me:
> > "Instead, we should just let the driver tell us when it is ready to accept more requests."
> > let me insist on my proposal. I have updated my series, and it should handle both your case and mine.
> > I will send it within minutes.
> > 
> 
> Corentin,
> 
> In your new proposal, a few patches include my modifications. The others 
> contain a solution that fits your drivers very well, but it requires 
> modifications in all four of the other drivers. It's not backwards compatible.
> I believe it can be done better, so that we won't need to modify the 
> other drivers at all.

The other drivers should work the same; I don't see what changes they need.
As I said, I tested caam with my changes, and it works the same.
Please show me what changes they need to continue to work, and prove that they are broken by my changes.

> 
> I'm working on a new version of my RFC that has the can_enqueue_more 
> callback, as Herbert suggested, but I would really like to know how 
> crypto-engine's do_one_request was intended to work: blocking or non-blocking?

We know that both ways work, so this is not really a problem.


Thread overview: 9+ messages
2020-01-20 23:32 [RFC PATCH] Crypto-engine support for parallel requests Iuliana Prodan
2020-01-21  9:11 ` Herbert Xu
2020-01-21 10:08   ` Iuliana Prodan
2020-01-21 10:00 ` Corentin Labbe
2020-01-21 14:20   ` Iuliana Prodan
2020-01-22 10:41     ` Corentin Labbe
2020-01-22 12:29       ` Iuliana Prodan
2020-01-22 13:22         ` Corentin Labbe [this message]
2020-01-22 14:41         ` Herbert Xu
