* Patch "crypto: vmx - Adding enable_kernel_vsx() to access VSX instructions" has been added to the 4.1-stable tree
From: gregkh @ 2015-09-23 4:58 UTC
To: leosilva, gregkh, herbert, mpe; +Cc: stable, stable-commits
This is a note to let you know that I've just added the patch titled
crypto: vmx - Adding enable_kernel_vsx() to access VSX instructions
to the 4.1-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
crypto-vmx-adding-enable_kernel_vsx-to-access-vsx-instructions.patch
and it can be found in the queue-4.1 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 2d6f0600b2cd755959527230ef5a6fba97bb762a Mon Sep 17 00:00:00 2001
From: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>
Date: Mon, 13 Jul 2015 13:51:39 -0300
Subject: crypto: vmx - Adding enable_kernel_vsx() to access VSX instructions
From: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>
commit 2d6f0600b2cd755959527230ef5a6fba97bb762a upstream.
The vmx-crypto driver makes use of some VSX instructions, which are
only available if VSX is enabled. When running in cases where VSX
is not enabled, vmx-crypto fails with a VSX exception.
In order to fix this, enable_kernel_vsx() was added to turn on
VSX instructions for vmx-crypto.
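For reference, the pattern the patch applies at each call site looks
roughly like the following. This is a minimal sketch, not part of the
patch itself: p8_vsx_op() and vsx_asm_routine() are hypothetical
placeholder names standing in for the driver's real setkey, encrypt,
decrypt, and ghash paths and their VSX assembly helpers.

    #include <linux/uaccess.h>   /* pagefault_disable()/pagefault_enable() */
    #include <asm/switch_to.h>   /* enable_kernel_altivec(), enable_kernel_vsx() */

    void vsx_asm_routine(void);  /* hypothetical stand-in for the VSX assembly */

    static void p8_vsx_op(void)
    {
            pagefault_disable();     /* no page faults while vector state is live */
            enable_kernel_altivec(); /* allow kernel use of VMX (AltiVec) registers */
            enable_kernel_vsx();     /* allow kernel use of VSX registers (the fix) */
            vsx_asm_routine();       /* assembly that issues VSX instructions */
            pagefault_enable();
    }

The key point is that enable_kernel_vsx() is called in addition to
enable_kernel_altivec(), since enabling VMX alone does not make the
VSX facility available to kernel code.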
Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/crypto/vmx/aes.c | 3 +++
drivers/crypto/vmx/aes_cbc.c | 3 +++
drivers/crypto/vmx/aes_ctr.c | 3 +++
drivers/crypto/vmx/ghash.c | 4 ++++
4 files changed, 13 insertions(+)
--- a/drivers/crypto/vmx/aes.c
+++ b/drivers/crypto/vmx/aes.c
@@ -80,6 +80,7 @@ static int p8_aes_setkey(struct crypto_t
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
pagefault_enable();
@@ -97,6 +98,7 @@ static void p8_aes_encrypt(struct crypto
} else {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
aes_p8_encrypt(src, dst, &ctx->enc_key);
pagefault_enable();
}
@@ -111,6 +113,7 @@ static void p8_aes_decrypt(struct crypto
} else {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
aes_p8_decrypt(src, dst, &ctx->dec_key);
pagefault_enable();
}
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -81,6 +81,7 @@ static int p8_aes_cbc_setkey(struct cryp
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
pagefault_enable();
@@ -108,6 +109,7 @@ static int p8_aes_cbc_encrypt(struct blk
} else {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
blkcipher_walk_init(&walk, dst, src, nbytes);
ret = blkcipher_walk_virt(desc, &walk);
@@ -143,6 +145,7 @@ static int p8_aes_cbc_decrypt(struct blk
} else {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
blkcipher_walk_init(&walk, dst, src, nbytes);
ret = blkcipher_walk_virt(desc, &walk);
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -79,6 +79,7 @@ static int p8_aes_ctr_setkey(struct cryp
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
pagefault_enable();
@@ -97,6 +98,7 @@ static void p8_aes_ctr_final(struct p8_a
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
aes_p8_encrypt(ctrblk, keystream, &ctx->enc_key);
pagefault_enable();
@@ -127,6 +129,7 @@ static int p8_aes_ctr_crypt(struct blkci
while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
aes_p8_ctr32_encrypt_blocks(walk.src.virt.addr, walk.dst.virt.addr,
(nbytes & AES_BLOCK_MASK)/AES_BLOCK_SIZE, &ctx->enc_key, walk.iv);
pagefault_enable();
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -116,6 +116,7 @@ static int p8_ghash_setkey(struct crypto
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
enable_kernel_fp();
gcm_init_p8(ctx->htable, (const u64 *) key);
pagefault_enable();
@@ -142,6 +143,7 @@ static int p8_ghash_update(struct shash_
GHASH_DIGEST_SIZE - dctx->bytes);
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
enable_kernel_fp();
gcm_ghash_p8(dctx->shash, ctx->htable, dctx->buffer,
GHASH_DIGEST_SIZE);
@@ -154,6 +156,7 @@ static int p8_ghash_update(struct shash_
if (len) {
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
enable_kernel_fp();
gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
pagefault_enable();
@@ -182,6 +185,7 @@ static int p8_ghash_final(struct shash_d
dctx->buffer[i] = 0;
pagefault_disable();
enable_kernel_altivec();
+ enable_kernel_vsx();
enable_kernel_fp();
gcm_ghash_p8(dctx->shash, ctx->htable, dctx->buffer,
GHASH_DIGEST_SIZE);
Patches currently in stable-queue which might be from leosilva@linux.vnet.ibm.com are:
queue-4.1/powerpc-uncomment-and-make-enable_kernel_vsx-routine-available.patch
queue-4.1/crypto-vmx-adding-enable_kernel_vsx-to-access-vsx-instructions.patch