From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eli Cooper,
	Martin Willi, Herbert Xu
Subject: [PATCH 4.4 101/117] crypto: chacha20-ssse3 - Align stack pointer to 64 bytes
Date: Sun, 14 Feb 2016 14:22:18 -0800
Message-Id: <20160214222144.450659993@linuxfoundation.org>
In-Reply-To: <20160214222141.393531627@linuxfoundation.org>
References: <20160214222141.393531627@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eli Cooper

commit cbe09bd51bf23b42c3a94c5fb6815e1397c5fc3f upstream.

This aligns the stack pointer in chacha20_4block_xor_ssse3 to 64 bytes.
Fixes general protection faults and potential kernel panics.

Signed-off-by: Eli Cooper
Acked-by: Martin Willi
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/crypto/chacha20-ssse3-x86_64.S |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
+++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
@@ -157,7 +157,9 @@ ENTRY(chacha20_4block_xor_ssse3)
 	# done with the slightly better performing SSSE3 byte shuffling,
 	# 7/12-bit word rotation uses traditional shift+OR.

-	sub		$0x40,%rsp
+	mov		%rsp,%r11
+	sub		$0x80,%rsp
+	and		$~63,%rsp

 	# x0..15[0-3] = s0..3[0..3]
 	movq		0x00(%rdi),%xmm1
@@ -620,6 +622,6 @@ ENTRY(chacha20_4block_xor_ssse3)
 	pxor		%xmm1,%xmm15
 	movdqu		%xmm15,0xf0(%rsi)

-	add		$0x40,%rsp
+	mov		%r11,%rsp
 	ret
 ENDPROC(chacha20_4block_xor_ssse3)
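
[Editor's note, not part of the upstream patch: the three new instructions
are the standard save/over-allocate/mask idiom for getting an aligned stack
frame. Since `and $~63,%rsp` rounds the pointer *down*, it can only enlarge
the frame, so after `sub $0x80,%rsp` the aligned pointer always has at least
0x80 >= 0x40 bytes of freshly allocated scratch above it. A minimal C sketch
of the same arithmetic follows; a plain byte array stands in for the stack
and all names are illustrative.]

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned char stack[0xc0];	/* stand-in for the stack region */
	uintptr_t r11, rsp;

	rsp = (uintptr_t)(stack + sizeof(stack));	/* incoming %rsp */

	r11 = rsp;			/* mov  %rsp,%r11  (save)        */
	rsp -= 0x80;			/* sub  $0x80,%rsp (over-alloc)  */
	rsp &= ~(uintptr_t)63;		/* and  $~63,%rsp  (round down)  */

	/*
	 * Rounding down only grows the frame: the aligned pointer
	 * ends up 0x80..0x80+63 bytes below the saved one, so the
	 * 0x40 bytes of scratch the function uses always fit.
	 */
	assert(rsp % 64 == 0);
	assert(r11 - rsp >= 0x80);
	printf("aligned scratch: %#lx..%#lx\n",
	       (unsigned long)rsp, (unsigned long)(rsp + 0x40));

	rsp = r11;			/* mov  %r11,%rsp  (restore)     */
	return 0;
}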