From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Dominik Brodowski, Sebastian Andrzej Siewior, "Jason A. Donenfeld"
Subject: [PATCH 5.18 001/181] random: schedule mix_interrupt_randomness() less often
Date: Mon, 27 Jun 2022 13:19:34 +0200
Message-Id: <20220627111944.599548853@linuxfoundation.org>
In-Reply-To: <20220627111944.553492442@linuxfoundation.org>
References: <20220627111944.553492442@linuxfoundation.org>

From: Jason A. Donenfeld

commit 534d2eaf1970274150596fdd2bf552721e65d6b2 upstream.

It used to be that mix_interrupt_randomness() would credit 1 bit each time it ran, and so add_interrupt_randomness() would schedule mix() to run every 64 interrupts, a fairly arbitrary number, but nonetheless considered a decent enough conservative estimate.

Since e3e33fc2ea7f ("random: do not use input pool from hard IRQs"), mix() is able to credit multiple bits, depending on the number of calls to add(). This was done for reasons separate from this commit, but it has the nice side effect of enabling this patch to schedule mix() less often.

Currently the rules are:

a) Credit 1 bit for every 64 calls to add().
b) Schedule mix() once per second in which add() is called.
c) Schedule mix() once every 64 calls to add().

Rules (a) and (c) no longer need to be coupled. It is still important to have _some_ value in (c), so that we don't "over-saturate" the fast pool, but the once per second we get from rule (b) is already a sufficient baseline. So, by increasing the 64 in rule (c) to something larger, we avoid calling queue_work_on() as frequently during irq storms.
This commit changes the 64 in rule (c) to 1024, which means we schedule mix() 16 times less often. It does *not* need to change the 64 in rule (a).

Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Cc: Dominik Brodowski
Acked-by: Sebastian Andrzej Siewior
Signed-off-by: Jason A. Donenfeld
Signed-off-by: Greg Kroah-Hartman
---
 drivers/char/random.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1038,7 +1038,7 @@ void add_interrupt_randomness(int irq)
 	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
+	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
 	if (unlikely(!fast_pool->mix.func))
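
[ Editor's illustration, not part of the patch: to make the effect of the 64 -> 1024 change concrete, the small userspace C sketch below models rules (b) and (c) from the commit message during an irq storm. It is not the kernel implementation: the struct, the helper should_schedule_mix(), and the plain time_t comparison are simplified stand-ins for fast_pool, the MIX_INFLIGHT bit and time_is_before_jiffies(). ]

/*
 * Minimal userspace model of the scheduling policy. All names here are
 * illustrative; only the threshold and the two rules mirror the kernel code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define SCHEDULE_THRESHOLD 1024	/* rule (c); this was 64 before the patch */

struct fast_pool_model {
	unsigned int count;	/* events mixed since the last worker run */
	time_t last;		/* when the worker last ran */
	bool mix_inflight;	/* models the MIX_INFLIGHT bit */
};

/* Decide whether a mix_interrupt_randomness()-style worker would be queued
 * for this event, following rules (b) and (c). */
static bool should_schedule_mix(struct fast_pool_model *fp, time_t now)
{
	unsigned int new_count = ++fp->count;

	if (fp->mix_inflight)
		return false;

	/* Rule (c): every SCHEDULE_THRESHOLD events; rule (b): once per second. */
	if (new_count < SCHEDULE_THRESHOLD && now - fp->last < 1)
		return false;

	fp->mix_inflight = true;
	return true;
}

int main(void)
{
	struct fast_pool_model fp = { .count = 0, .last = time(NULL), .mix_inflight = false };
	unsigned int queued = 0;

	/* Simulate an irq storm: 100000 events arriving within the same second. */
	for (unsigned int i = 0; i < 100000; i++) {
		if (should_schedule_mix(&fp, fp.last)) {
			queued++;
			/* Pretend the worker ran right away and reset the
			 * per-CPU state, as mix_interrupt_randomness() does. */
			fp.count = 0;
			fp.mix_inflight = false;
		}
	}
	printf("workers queued during the storm: %u\n", queued);
	return 0;
}

[ Under this model the storm queues 97 workers with the new threshold of 1024, versus 1562 with the old threshold of 64, i.e. roughly 16x fewer queue_work_on() calls, while the crediting in rule (a) is untouched. ]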