From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ingo Franzki, Harald Freudenberger, Martin Schwidefsky
Subject: [PATCH 4.14 079/173] s390/crypto: Fix return code checking in cbc_paes_crypt()
Date: Mon, 24 Sep 2018 13:51:53 +0200
Message-Id: <20180924113121.849688106@linuxfoundation.org>
In-Reply-To: <20180924113114.334025954@linuxfoundation.org>
References: <20180924113114.334025954@linuxfoundation.org>

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Ingo Franzki

commit b81126e01a8c6048249955feea46c8217ebefa91 upstream.

The return code of cpacf_kmc() is less than the number of bytes to
process in case of an error, not greater. The crypt routines for the
other cipher modes already get this right.

Cc: stable@vger.kernel.org # v4.11+
Fixes: 279378430768 ("s390/crypt: Add protected key AES module")
Signed-off-by: Ingo Franzki
Acked-by: Harald Freudenberger
Signed-off-by: Martin Schwidefsky
Signed-off-by: Greg Kroah-Hartman

---
 arch/s390/crypto/paes_s390.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -212,7 +212,7 @@ static int cbc_paes_crypt(struct blkciph
 			      walk->dst.virt.addr, walk->src.virt.addr, n);
 		if (k)
 			ret = blkcipher_walk_done(desc, walk, nbytes - k);
-		if (n < k) {
+		if (k < n) {
 			if (__cbc_paes_set_key(ctx) != 0)
 				return blkcipher_walk_done(desc, walk, -EIO);
 			memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
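
For context on the corrected check: cpacf_kmc() returns the number of
bytes it actually processed, so a short count (k < n) is the error
indication that the protected key became invalid and must be
re-derived; the return value can never exceed n, which is why the old
test "n < k" could never fire. Below is a minimal, stand-alone sketch
of that "short count means error" calling convention. It is only an
illustration: process_blocks() and its simulate_failure flag are
hypothetical stand-ins, not part of the s390 CPACF API.

	#include <stdio.h>
	#include <string.h>

	/*
	 * Hypothetical stand-in for cpacf_kmc(): processes up to n bytes
	 * and returns the number of bytes actually handled. A return
	 * value less than n (never greater) means it stopped early.
	 */
	static unsigned int process_blocks(const unsigned char *src,
					   unsigned char *dst,
					   unsigned int n,
					   int simulate_failure)
	{
		unsigned int done = simulate_failure ? n / 2 : n;

		memcpy(dst, src, done);
		return done;
	}

	int main(void)
	{
		unsigned char src[64] = { 0 }, dst[64];
		unsigned int n = sizeof(src);
		unsigned int k = process_blocks(src, dst, n, 1);

		if (k < n)	/* corrected check: short count == error */
			printf("partial: %u of %u bytes, re-key and retry\n",
			       k, n);
		else
			printf("all %u bytes processed\n", n);
		return 0;
	}

With simulate_failure set, the sketch takes the k < n branch, which is
the point where the kernel code re-derives the protected key via
__cbc_paes_set_key() before continuing; the inverted test "n < k" was
dead code, so CPACF errors went unnoticed.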