From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753146AbbJFWFu (ORCPT );
	Tue, 6 Oct 2015 18:05:50 -0400
Received: from mga11.intel.com ([192.55.52.93]:57417 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752663AbbJFWFr (ORCPT );
	Tue, 6 Oct 2015 18:05:47 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.17,646,1437462000"; d="scan'208";a="820838872"
From: Andi Kleen
To: tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Andi Kleen
Subject: [PATCH 2/3] random: Make input to output pool balancing per cpu
Date: Tue, 6 Oct 2015 15:05:39 -0700
Message-Id: <1444169140-4938-2-git-send-email-andi@firstfloor.org>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1444169140-4938-1-git-send-email-andi@firstfloor.org>
References: <1444169140-4938-1-git-send-email-andi@firstfloor.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Andi Kleen

The load balancing from the input pool to the output pools was
essentially unlocked. Previously this did not matter much, because
there were only two choices (the blocking and the non-blocking pool).
But now, with the distributed non-blocking pools, there are many more
pools, and unlocked access to the round-robin counters may
systematically deprive some nodes of their share of the entropy.

Turn the round-robin state into per-CPU variables to avoid any
possibility of races. This code already runs with preemption disabled.

v2: Check for uninitialized pools.

Signed-off-by: Andi Kleen
---
 drivers/char/random.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index e7e02c0..a395f783 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -774,15 +774,20 @@ retry:
 		if (entropy_bits > random_write_wakeup_bits &&
 		    r->initialized &&
 		    r->entropy_total >= 2*random_read_wakeup_bits) {
-			static struct entropy_store *last = &blocking_pool;
-			static int next_pool = -1;
-			struct entropy_store *other = &blocking_pool;
+			static DEFINE_PER_CPU(struct entropy_store *, lastp) =
+				&blocking_pool;
+			static DEFINE_PER_CPU(int, next_pool);
+			struct entropy_store *other = &blocking_pool, *last;
+			int np;
 
 			/* -1: use blocking pool, 0<=max_node: node nb pool */
-			if (next_pool > -1)
-				other = nonblocking_node_pool[next_pool];
-			if (++next_pool >= num_possible_nodes())
-				next_pool = -1;
+			np = __this_cpu_read(next_pool);
+			if (np > -1 && nonblocking_node_pool)
+				other = nonblocking_node_pool[np];
+			if (++np >= num_possible_nodes())
+				np = -1;
+			__this_cpu_write(next_pool, np);
+			last = __this_cpu_read(lastp);
 			if (other->entropy_count <=
 			    3 * other->poolinfo->poolfracbits / 4)
 				last = other;
@@ -791,6 +796,7 @@ retry:
 				schedule_work(&last->push_work);
 				r->entropy_total = 0;
 			}
+			__this_cpu_write(lastp, last);
 		}
 	}
 }
-- 
2.4.3
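
For readers unfamiliar with the per-CPU pattern the patch relies on,
here is a minimal user-space sketch of the same race-free round-robin
idea, with threads standing in for CPUs and __thread standing in for
DEFINE_PER_CPU(). The pool count and all names are invented for
illustration; this is an analogy, not kernel code from the patch.

	/* Build with: cc -pthread sketch.c */
	#include <stdio.h>
	#include <pthread.h>

	#define NR_POOLS 4	/* stand-in for num_possible_nodes() */

	/*
	 * Per-thread cursor, analogous to the per-CPU next_pool above:
	 * -1 selects the "blocking" pool, 0..NR_POOLS-1 a node pool.
	 * Each thread owns its copy, so no locking is needed.
	 */
	static __thread int next_pool = -1;

	/* Return the current selection, then advance the private cursor. */
	static int pick_pool(void)
	{
		int chosen = next_pool;	/* like __this_cpu_read() */

		if (++next_pool >= NR_POOLS)
			next_pool = -1;	/* wrap back to the blocking pool */
		return chosen;		/* cursor update needs no lock */
	}

	static void *worker(void *arg)
	{
		for (int i = 0; i < 6; i++) {
			int p = pick_pool();

			printf("thread %ld -> %s pool %d\n", (long)arg,
			       p < 0 ? "blocking" : "node", p);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t[2];

		for (long i = 0; i < 2; i++)
			pthread_create(&t[i], NULL, worker, (void *)i);
		for (int i = 0; i < 2; i++)
			pthread_join(t[i], NULL);
		return 0;
	}

Because each thread owns its cursor, no thread can skip another
thread's turn in the rotation; the per-CPU variables give the kernel
code the same property without taking a lock, since it already runs
with preemption disabled.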