From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1750996AbcCAFRO (ORCPT );
	Tue, 1 Mar 2016 00:17:14 -0500
Received: from mga01.intel.com ([192.55.52.88]:54382 "EHLO mga01.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750777AbcCAFRN (ORCPT );
	Tue, 1 Mar 2016 00:17:13 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.22,522,1449561600"; d="scan'208";a="57335497"
From: Andi Kleen
To: tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Andi Kleen
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu
Date: Mon, 29 Feb 2016 21:17:05 -0800
Message-Id: <1456809426-19341-2-git-send-email-andi@firstfloor.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1456809426-19341-1-git-send-email-andi@firstfloor.org>
References: <1456809426-19341-1-git-send-email-andi@firstfloor.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Andi Kleen

The load balancing from the input pool to the output pools was
essentially unlocked. Previously this didn't matter much, because
there were only two choices (blocking and non-blocking). But now,
with the distributed non-blocking pools, there are many more pools,
and unlocked access to the counters may systematically deprive some
nodes of their fair share of entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races. This code already runs with preemption
disabled.

v2: Check for non-initialized pools.
v3: Make per cpu variables global to avoid warnings in some
configurations (0day)

Signed-off-by: Andi Kleen
---
 drivers/char/random.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e7e02c0..21ae44b 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -675,6 +675,9 @@ void init_node_pools(void)
 #endif
 }
 
+static DEFINE_PER_CPU(struct entropy_store *, lastp) = &blocking_pool;
+static DEFINE_PER_CPU(int, next_pool);
+
 /*
  * Credit (or debit) the entropy store with n bits of entropy.
  * Use credit_entropy_bits_safe() if the value comes from userspace
@@ -774,15 +777,17 @@ retry:
	if (entropy_bits > random_write_wakeup_bits &&
	    r->initialized &&
	    r->entropy_total >= 2*random_read_wakeup_bits) {
-		static struct entropy_store *last = &blocking_pool;
-		static int next_pool = -1;
-		struct entropy_store *other = &blocking_pool;
+		struct entropy_store *other = &blocking_pool, *last;
+		int np;
 
		/* -1: use blocking pool, 0<=max_node: node nb pool */
-		if (next_pool > -1)
-			other = nonblocking_node_pool[next_pool];
-		if (++next_pool >= num_possible_nodes())
-			next_pool = -1;
+		np = __this_cpu_read(next_pool);
+		if (np > -1 && nonblocking_node_pool)
+			other = nonblocking_node_pool[np];
+		if (++np >= num_possible_nodes())
+			np = -1;
+		__this_cpu_write(next_pool, np);
+		last = __this_cpu_read(lastp);
		if (other->entropy_count <=
		    3 * other->poolinfo->poolfracbits / 4)
			last = other;
@@ -791,6 +796,7 @@ retry:
			schedule_work(&last->push_work);
			r->entropy_total = 0;
		}
+		__this_cpu_write(lastp, last);
	}
 }
-- 
2.5.0