linux-crypto.vger.kernel.org archive mirror
From: Jussi Kivilinna <jussi.kivilinna@iki.fi>
To: Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	linux-arm-kernel@lists.infradead.org,
	linux-crypto@vger.kernel.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, steve.capper@linaro.org
Subject: Re: [PATCH resend 13/15] arm64/crypto: add voluntary preemption to Crypto Extensions SHA1
Date: Tue, 13 May 2014 21:58:51 +0300	[thread overview]
Message-ID: <53726B6B.6070007@iki.fi> (raw)
In-Reply-To: <1398959486-8222-4-git-send-email-ard.biesheuvel@linaro.org>

On 01.05.2014 18:51, Ard Biesheuvel wrote:
> The Crypto Extensions based SHA1 implementation uses the NEON register file,
> and hence runs with preemption disabled. This patch adds a TIF_NEED_RESCHED
> check to its inner loop so we at least give up the CPU voluntarily when we
> are running in process context and have been tagged for preemption by the
> scheduler.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
<snip>
> @@ -42,6 +42,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
>  	sctx->count += len;
>  
>  	if ((partial + len) >= SHA1_BLOCK_SIZE) {
> +		struct thread_info *ti = NULL;
>  		int blocks;
>  
>  		if (partial) {
> @@ -52,16 +53,30 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
>  			len -= p;
>  		}
>  
> +		/*
> +		 * Pass current's thread info pointer to sha1_ce_transform()
> +		 * below if we want it to play nice under preemption.
> +		 */
> +		if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
> +		     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
> +			ti = current_thread_info();
> +
>  		blocks = len / SHA1_BLOCK_SIZE;
>  		len %= SHA1_BLOCK_SIZE;
>  
> -		kernel_neon_begin_partial(16);
> -		sha1_ce_transform(blocks, data, sctx->state,
> -				  partial ? sctx->buffer : NULL, 0);
> -		kernel_neon_end();
> +		do {
> +			int rem;
> +
> +			kernel_neon_begin_partial(16);
> +			rem = sha1_ce_transform(blocks, data, sctx->state,
> +						partial ? sctx->buffer : NULL,
> +						0, ti);
> +			kernel_neon_end();
>  
> -		data += blocks * SHA1_BLOCK_SIZE;
> -		partial = 0;
> +			data += (blocks - rem) * SHA1_BLOCK_SIZE;
> +			blocks = rem;
> +			partial = 0;
> +		} while (unlikely(ti && blocks > 0));
>  	}
>  	if (len)
>  		memcpy(sctx->buffer + partial, data, len);
> @@ -94,6 +109,7 @@ static int sha1_finup(struct shash_desc *desc, const u8 *data,
>  		      unsigned int len, u8 *out)
>  {
>  	struct sha1_state *sctx = shash_desc_ctx(desc);
> +	struct thread_info *ti = NULL;
>  	__be32 *dst = (__be32 *)out;
>  	int blocks;
>  	int i;
> @@ -111,9 +127,20 @@ static int sha1_finup(struct shash_desc *desc, const u8 *data,
>  	 */
>  	blocks = len / SHA1_BLOCK_SIZE;
>  
> -	kernel_neon_begin_partial(16);
> -	sha1_ce_transform(blocks, data, sctx->state, NULL, len);
> -	kernel_neon_end();
> +	if ((IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY) ||
> +	     IS_ENABLED(CONFIG_PREEMPT)) && !in_interrupt())
> +		ti = current_thread_info();
> +
> +	do {
> +		int rem;
> +
> +		kernel_neon_begin_partial(16);
> +		rem = sha1_ce_transform(blocks, data, sctx->state,
> +					NULL, len, ti);
> +		kernel_neon_end();
> +		data += (blocks - rem) * SHA1_BLOCK_SIZE;
> +		blocks = rem;
> +	} while (unlikely(ti && blocks > 0));
>  

These two loops seem to be nearly identical; how about renaming the assembly function
to __sha1_ce_transform and moving this loop into a new C function sha1_ce_transform?

Otherwise, the patches look good.

-Jussi


Thread overview: 8+ messages
2014-05-01 15:51 [PATCH resend 10/15] arm64: pull in <asm/simd.h> from asm-generic Ard Biesheuvel
2014-05-01 15:51 ` [PATCH resend 11/15] arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions Ard Biesheuvel
2014-05-01 15:51 ` [PATCH resend 12/15] arm64/crypto: add shared macro to test for NEED_RESCHED Ard Biesheuvel
2014-05-01 15:51 ` [PATCH resend 13/15] arm64/crypto: add voluntary preemption to Crypto Extensions SHA1 Ard Biesheuvel
2014-05-13 18:58   ` Jussi Kivilinna [this message]
2014-05-14  1:36   ` Herbert Xu
2014-05-01 15:51 ` [PATCH resend 14/15] arm64/crypto: add voluntary preemption to Crypto Extensions SHA2 Ard Biesheuvel
2014-05-01 15:51 ` [PATCH resend 15/15] arm64/crypto: add voluntary preemption to Crypto Extensions GHASH Ard Biesheuvel
