* [PATCH stable 1/2] powerpc: Uncomment and make enable_kernel_vsx() routine available
  From: Michael Ellerman @ 2015-09-16  1:40 UTC
  To: stable; +Cc: linuxppc-dev, linux-crypto, Leonidas Da Silva Barbosa, Herbert Xu

From: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>

commit 72cd7b44bc99376b3f3c93cedcd052663fcdf705 upstream.

The enable_kernel_vsx() function was commented out because nothing was
using it. However, the vmx-crypto driver uses VSX instructions, which
are only available when VSX is enabled; otherwise it raises an
exception and oopses.

This patch uncomments the enable_kernel_vsx() routine and makes it
available.

Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: stable@vger.kernel.org # 4.1
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/switch_to.h | 1 +
 arch/powerpc/kernel/process.c        | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
index 58abeda64cb7..15cca17cba4b 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@ -29,6 +29,7 @@ static inline void save_early_sprs(struct thread_struct *prev) {}
 extern void enable_kernel_fp(void);
 extern void enable_kernel_altivec(void);
+extern void enable_kernel_vsx(void);
 extern int emulate_altivec(struct pt_regs *);
 extern void __giveup_vsx(struct task_struct *);
 extern void giveup_vsx(struct task_struct *);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index febb50dd5328..0596373cd1c3 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -204,8 +204,6 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 #endif /* CONFIG_ALTIVEC */

 #ifdef CONFIG_VSX
-#if 0
-/* not currently used, but some crazy RAID module might want to later */
 void enable_kernel_vsx(void)
 {
         WARN_ON(preemptible());
@@ -220,7 +218,6 @@ void enable_kernel_vsx(void)
 #endif /* CONFIG_SMP */
 }
 EXPORT_SYMBOL(enable_kernel_vsx);
-#endif

 void giveup_vsx(struct task_struct *tsk)
 {
--
2.1.4
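With enable_kernel_vsx() uncommented and exported, kernel code that wants to
run VSX instructions can claim the unit the same way existing callers claim
FP or AltiVec. The sketch below is only an illustration of that calling
contract, not part of the patch: vsx_do_work() is a hypothetical VSX-based
routine, and the preempt_disable()/preempt_enable() pair reflects the
WARN_ON(preemptible()) check visible in the hunk above.

        #include <linux/types.h>
        #include <linux/preempt.h>      /* preempt_disable()/preempt_enable() */
        #include <asm/switch_to.h>      /* enable_kernel_vsx() */

        /* Hypothetical routine implemented with VSX instructions. */
        void vsx_do_work(void *dst, const void *src, size_t len);

        static void run_vsx_kernel_code(void *dst, const void *src, size_t len)
        {
                preempt_disable();      /* enable_kernel_vsx() warns if preemptible */
                enable_kernel_vsx();    /* hand the VSX unit to this kernel thread */
                vsx_do_work(dst, src, len);
                preempt_enable();
        }

This mirrors how existing in-kernel users wrap enable_kernel_altivec();
patch 2/2 below applies the same idea to the vmx-crypto driver.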
* [PATCH stable 2/2] crypto: vmx - Adding enable_kernel_vsx() to access VSX instructions
  In reply to: [PATCH stable 1/2] powerpc: Uncomment and make enable_kernel_vsx() routine available
  From: Michael Ellerman @ 2015-09-16  1:40 UTC
  To: stable; +Cc: linuxppc-dev, linux-crypto, Leonidas S. Barbosa, Herbert Xu

commit 2d6f0600b2cd755959527230ef5a6fba97bb762a upstream.

The vmx-crypto driver makes use of some VSX instructions, which are
only available when VSX is enabled. If it runs while VSX is not
enabled, vmx-crypto fails with a VSX exception.

To fix this, enable_kernel_vsx() calls were added to turn on VSX
instructions for vmx-crypto.

Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes: 8676590a1593 ("crypto: vmx - Adding AES routines for VMX module")
Cc: stable@vger.kernel.org # 4.1
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 drivers/crypto/vmx/aes.c     | 3 +++
 drivers/crypto/vmx/aes_cbc.c | 3 +++
 drivers/crypto/vmx/aes_ctr.c | 3 +++
 drivers/crypto/vmx/ghash.c   | 4 ++++
 4 files changed, 13 insertions(+)

diff --git a/drivers/crypto/vmx/aes.c b/drivers/crypto/vmx/aes.c
index ab300ea19434..41f93334cc44 100644
--- a/drivers/crypto/vmx/aes.c
+++ b/drivers/crypto/vmx/aes.c
@@ -80,6 +80,7 @@ static int p8_aes_setkey(struct crypto_tfm *tfm, const u8 *key,

         pagefault_disable();
         enable_kernel_altivec();
+        enable_kernel_vsx();
         ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
         ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
         pagefault_enable();
@@ -97,6 +98,7 @@ static void p8_aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
         } else {
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();
                 aes_p8_encrypt(src, dst, &ctx->enc_key);
                 pagefault_enable();
         }
@@ -111,6 +113,7 @@ static void p8_aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
         } else {
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();
                 aes_p8_decrypt(src, dst, &ctx->dec_key);
                 pagefault_enable();
         }
diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
index 1a559b7dddb5..c8e7f653e5d3 100644
--- a/drivers/crypto/vmx/aes_cbc.c
+++ b/drivers/crypto/vmx/aes_cbc.c
@@ -81,6 +81,7 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key,

         pagefault_disable();
         enable_kernel_altivec();
+        enable_kernel_vsx();
         ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
         ret += aes_p8_set_decrypt_key(key, keylen * 8, &ctx->dec_key);
         pagefault_enable();
@@ -108,6 +109,7 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
         } else {
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();

                 blkcipher_walk_init(&walk, dst, src, nbytes);
                 ret = blkcipher_walk_virt(desc, &walk);
@@ -143,6 +145,7 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
         } else {
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();

                 blkcipher_walk_init(&walk, dst, src, nbytes);
                 ret = blkcipher_walk_virt(desc, &walk);
diff --git a/drivers/crypto/vmx/aes_ctr.c b/drivers/crypto/vmx/aes_ctr.c
index 96dbee4bf4a6..266e708d63df 100644
--- a/drivers/crypto/vmx/aes_ctr.c
+++ b/drivers/crypto/vmx/aes_ctr.c
@@ -79,6 +79,7 @@ static int p8_aes_ctr_setkey(struct crypto_tfm *tfm, const u8 *key,

         pagefault_disable();
         enable_kernel_altivec();
+        enable_kernel_vsx();
         ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key);
         pagefault_enable();

@@ -97,6 +98,7 @@ static void p8_aes_ctr_final(struct p8_aes_ctr_ctx *ctx,

         pagefault_disable();
         enable_kernel_altivec();
+        enable_kernel_vsx();
         aes_p8_encrypt(ctrblk, keystream, &ctx->enc_key);
         pagefault_enable();

@@ -127,6 +129,7 @@ static int p8_aes_ctr_crypt(struct blkcipher_desc *desc,
         while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();
                 aes_p8_ctr32_encrypt_blocks(walk.src.virt.addr,
                                             walk.dst.virt.addr,
                                             (nbytes & AES_BLOCK_MASK) / AES_BLOCK_SIZE,
                                             &ctx->enc_key,
                                             walk.iv);
                 pagefault_enable();
diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
index d0ffe277af5c..917b3f09e724 100644
--- a/drivers/crypto/vmx/ghash.c
+++ b/drivers/crypto/vmx/ghash.c
@@ -116,6 +116,7 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,

         pagefault_disable();
         enable_kernel_altivec();
+        enable_kernel_vsx();
         enable_kernel_fp();
         gcm_init_p8(ctx->htable, (const u64 *) key);
         pagefault_enable();
@@ -142,6 +143,7 @@ static int p8_ghash_update(struct shash_desc *desc,
                                GHASH_DIGEST_SIZE - dctx->bytes);
                         pagefault_disable();
                         enable_kernel_altivec();
+                        enable_kernel_vsx();
                         enable_kernel_fp();
                         gcm_ghash_p8(dctx->shash, ctx->htable,
                                      dctx->buffer, GHASH_DIGEST_SIZE);
@@ -154,6 +156,7 @@ static int p8_ghash_update(struct shash_desc *desc,
                 if (len) {
                         pagefault_disable();
                         enable_kernel_altivec();
+                        enable_kernel_vsx();
                         enable_kernel_fp();
                         gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
                         pagefault_enable();
@@ -182,6 +185,7 @@ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
                         dctx->buffer[i] = 0;
                 pagefault_disable();
                 enable_kernel_altivec();
+                enable_kernel_vsx();
                 enable_kernel_fp();
                 gcm_ghash_p8(dctx->shash, ctx->htable,
                              dctx->buffer, GHASH_DIGEST_SIZE);
--
2.1.4
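All four vmx-crypto files gain the same one-line addition: enable_kernel_vsx()
is slotted between the existing enable_kernel_altivec() call and the assembly
routine that actually uses VSX (ghash.c additionally keeps its
enable_kernel_fp() call). The wrapper below only illustrates that repeated
prologue/epilogue; the driver as patched open-codes the sequence at every call
site, and the p8_begin_vector()/p8_end_vector() names are invented here, not
part of the driver.

        #include <linux/uaccess.h>      /* pagefault_disable()/pagefault_enable() */
        #include <asm/switch_to.h>      /* enable_kernel_altivec(), enable_kernel_vsx() */

        /* Common prologue run before each aes_p8_* or gcm_*_p8 call. */
        static inline void p8_begin_vector(void)
        {
                pagefault_disable();
                enable_kernel_altivec();
                enable_kernel_vsx();
        }

        static inline void p8_end_vector(void)
        {
                pagefault_enable();
        }

        /* For example, the vector branch of p8_aes_encrypt() then reads:
         *
         *      p8_begin_vector();
         *      aes_p8_encrypt(src, dst, &ctx->enc_key);
         *      p8_end_vector();
         */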
* [PATCH stable 1/2] powerpc: Uncomment and make enable_kernel_vsx() routine available
  From: Michael Ellerman @ 2015-09-16  1:34 UTC
  To: stable; +Cc: linuxppc-dev, linux-crypto, Leonidas Da Silva Barbosa, Herbert Xu

From: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>

commit 72cd7b44bc99376b3f3c93cedcd052663fcdf705 upstream.

The enable_kernel_vsx() function was commented out because nothing was
using it. However, the vmx-crypto driver uses VSX instructions, which
are only available when VSX is enabled; otherwise it raises an
exception and oopses.

This patch uncomments the enable_kernel_vsx() routine and makes it
available.

Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: stable@vger.kernel.org # 4.2
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/switch_to.h | 1 +
 arch/powerpc/kernel/process.c        | 3 ---
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/switch_to.h b/arch/powerpc/include/asm/switch_to.h
index 58abeda64cb7..15cca17cba4b 100644
--- a/arch/powerpc/include/asm/switch_to.h
+++ b/arch/powerpc/include/asm/switch_to.h
@@ -29,6 +29,7 @@ static inline void save_early_sprs(struct thread_struct *prev) {}
 extern void enable_kernel_fp(void);
 extern void enable_kernel_altivec(void);
+extern void enable_kernel_vsx(void);
 extern int emulate_altivec(struct pt_regs *);
 extern void __giveup_vsx(struct task_struct *);
 extern void giveup_vsx(struct task_struct *);
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 8005e18d1b40..64e6e9d9e656 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -204,8 +204,6 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 #endif /* CONFIG_ALTIVEC */

 #ifdef CONFIG_VSX
-#if 0
-/* not currently used, but some crazy RAID module might want to later */
 void enable_kernel_vsx(void)
 {
         WARN_ON(preemptible());
@@ -220,7 +218,6 @@ void enable_kernel_vsx(void)
 #endif /* CONFIG_SMP */
 }
 EXPORT_SYMBOL(enable_kernel_vsx);
-#endif

 void giveup_vsx(struct task_struct *tsk)
 {
--
2.1.4