From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Colin Ian King,
	"Peter Zijlstra (Intel)", Ian Rogers, Kim Phillips
Subject: [PATCH 5.4 107/134] perf/x86/amd: fix potential integer overflow on shift of a int
Date: Fri, 3 Feb 2023 11:13:32 +0100
Message-Id: <20230203101028.643246582@linuxfoundation.org>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230203101023.832083974@linuxfoundation.org>
References: <20230203101023.832083974@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Colin Ian King

commit 08245672cdc6505550d1a5020603b0a8d4a6dcc7 upstream.

The left shift of the 32 bit int constant 1 is evaluated using 32 bit
arithmetic and only then widened to 64 bits when it is or-ed into the
64 bit even_ctr_mask. When i reaches 31 or more the shift overflows
and the wrong bits end up in the mask. Avoid this by shifting with the
BIT_ULL() macro instead.

Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Ian Rogers
Acked-by: Kim Phillips
Link: https://lore.kernel.org/r/20221202135149.1797974-1-colin.i.king@gmail.com
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/events/amd/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -969,7 +969,7 @@ static int __init amd_core_pmu_init(void
 	 * numbered counter following it.
 	 */
 	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);
 
 	pair_constraint = (struct event_constraint)
 		__EVENT_CONSTRAINT(0, even_ctr_mask, 0,
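
The sketch below is a standalone userspace illustration of the overflow
class being fixed; it is not part of the patch. BIT_ULL() is redefined
locally to mirror the kernel's definition (effectively 1ULL << nr), and
NUM_COUNTERS is a hypothetical value chosen only so the loop reaches bit
positions at and above 31.

/*
 * Demo: how "1 << i" loses high bits that BIT_ULL(i) sets correctly
 * when building a 64 bit mask of even-numbered counters.
 */
#include <inttypes.h>
#include <stdio.h>

#define BIT_ULL(nr)	(1ULL << (nr))
#define NUM_COUNTERS	64	/* hypothetical value, for illustration only */

int main(void)
{
	uint64_t mask_int = 0, mask_ull = 0;
	int i;

	for (i = 0; i < NUM_COUNTERS - 1; i += 2) {
		/*
		 * "1 << i" is evaluated as a 32-bit int; once i reaches 31
		 * the shift overflows before the result is widened to 64
		 * bits, so the guard keeps the demo itself well defined.
		 */
		if (i < 31)
			mask_int |= 1 << i;

		/* BIT_ULL(i) shifts a 64-bit constant, so every even bit is set. */
		mask_ull |= BIT_ULL(i);
	}

	printf("32-bit shift mask: 0x%016" PRIx64 "\n", mask_int); /* 0x0000000055555555 */
	printf("BIT_ULL(i) mask:   0x%016" PRIx64 "\n", mask_ull); /* 0x5555555555555555 */
	return 0;
}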