From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Woodhouse, Thomas Gleixner, Arjan van de Ven, Ingo Molnar, gnomes@lxorguk.ukuu.org.uk, Rik van Riel, Andi Kleen, Josh Poimboeuf, thomas.lendacky@amd.com, Peter Zijlstra, Linus Torvalds, Jiri Kosina, Andy Lutomirski, Dave Hansen, Kees Cook, Tim Chen, Paul Turner
Subject: [PATCH 4.4 17/53] x86/retpoline/checksum32: Convert assembler indirect jumps
Date: Mon, 22 Jan 2018 09:40:09 +0100
Message-Id: <20180122083911.022199723@linuxfoundation.org>
In-Reply-To: <20180122083910.299610926@linuxfoundation.org>
References: <20180122083910.299610926@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Woodhouse

commit 5096732f6f695001fa2d6f1335a2680b37912c69 upstream.

Convert all indirect jumps in 32bit checksum assembler code to use
non-speculative sequences when CONFIG_RETPOLINE is enabled.

Signed-off-by: David Woodhouse
Signed-off-by: Thomas Gleixner
Acked-by: Arjan van de Ven
Acked-by: Ingo Molnar
Cc: gnomes@lxorguk.ukuu.org.uk
Cc: Rik van Riel
Cc: Andi Kleen
Cc: Josh Poimboeuf
Cc: thomas.lendacky@amd.com
Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Jiri Kosina
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Kees Cook
Cc: Tim Chen
Cc: Greg Kroah-Hartman
Cc: Paul Turner
Link: https://lkml.kernel.org/r/1515707194-20531-11-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Greg Kroah-Hartman

---
 arch/x86/lib/checksum_32.S |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -28,7 +28,8 @@
 #include <linux/linkage.h>
 #include <asm/errno.h>
 #include <asm/asm.h>
-
+#include <asm/nospec-branch.h>
+
 /*
  * computes a partial checksum, e.g. for TCP/UDP fragments
  */
@@ -155,7 +156,7 @@ ENTRY(csum_partial)
 	negl %ebx
 	lea 45f(%ebx,%ebx,2), %ebx
 	testl %esi, %esi
-	jmp *%ebx
+	JMP_NOSPEC %ebx
 
 	# Handle 2-byte-aligned regions
 20:	addw (%esi), %ax
@@ -437,7 +438,7 @@ ENTRY(csum_partial_copy_generic)
 	andl $-32,%edx
 	lea 3f(%ebx,%ebx), %ebx
 	testl %esi, %esi
-	jmp *%ebx
+	JMP_NOSPEC %ebx
 1:	addl $64,%esi
 	addl $64,%edi
 SRC(	movb -32(%edx),%bl	)
 SRC(	movb (%edx),%bl	)
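
[For readers unfamiliar with the macro: `JMP_NOSPEC` is provided by the `<asm/nospec-branch.h>` header this patch includes. With CONFIG_RETPOLINE enabled it replaces the speculatable `jmp *%reg` with a "retpoline" sequence that traps any branch-predictor speculation in a harmless loop while the architectural execution still reaches the intended target. The sketch below illustrates that core sequence; label names are illustrative, and the real macro additionally wraps this in ALTERNATIVE so that a plain `jmp *\reg` is patched back in on CPUs that do not need the mitigation.]

	/*
	 * Illustrative sketch of the retpoline sequence behind JMP_NOSPEC
	 * (32-bit flavour, so the stack pointer is %esp).
	 */
	.macro JMP_NOSPEC reg:req
		call	.Ldo_jmp_\@	/* push return address; only speculation falls through */
	.Lspec_trap_\@:
		pause			/* mispredicted return speculation spins here... */
		lfence			/* ...and is fenced, doing no useful work */
		jmp	.Lspec_trap_\@
	.Ldo_jmp_\@:
		mov	\reg, (%esp)	/* overwrite the pushed return address with the real target */
		ret			/* "return" architecturally jumps to \reg */
	.endm

The `call`/`ret` pair means the CPU's return stack buffer, not the exploitable indirect branch predictor, steers speculation, which is why this costs a few cycles but defeats Spectre variant 2 on the affected indirect jumps.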