From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:45382 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S933183AbcILRJx (ORCPT );
	Mon, 12 Sep 2016 13:09:53 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vitaly Kuznetsov,
	"K. Y. Srinivasan", Sasha Levin
Subject: [PATCH 4.4 093/192] [PATCH 097/135] Drivers: hv: vmbus: avoid infinite loop in init_vp_index()
Date: Mon, 12 Sep 2016 19:00:02 +0200
Message-Id: <20160912152202.835751723@linuxfoundation.org>
In-Reply-To: <20160912152158.855601725@linuxfoundation.org>
References: <20160912152158.855601725@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID: 

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

[ Upstream commit 79fd8e706637a5c7c41f9498fe0fbfb437abfdc8 ]

When we pick a CPU to use for a new subchannel we try to find an unused
one on the appropriate NUMA node, and we keep track of them with the
primary->alloced_cpus_in_node mask. Under normal circumstances we don't
run out of available CPUs, but it is possible when not all CPUs are
initialized in Linux, e.g. when booting with an 'nr_cpus=' limitation.

Avoid the infinite loop in init_vp_index() by checking that we still
have unused CPUs in the alloced_cpus_in_node mask and resetting the
mask in case we don't.

Signed-off-by: Vitaly Kuznetsov
Signed-off-by: K. Y. Srinivasan
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/hv/channel_mgmt.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -459,6 +459,17 @@ static void init_vp_index(struct vmbus_c
 		cpumask_of_node(primary->numa_node));
 
 	cur_cpu = -1;
+
+	/*
+	 * Normally Hyper-V host doesn't create more subchannels than there
+	 * are VCPUs on the node but it is possible when not all present VCPUs
+	 * on the node are initialized by guest. Clear the alloced_cpus_in_node
+	 * to start over.
+	 */
+	if (cpumask_equal(&primary->alloced_cpus_in_node,
+			  cpumask_of_node(primary->numa_node)))
+		cpumask_clear(&primary->alloced_cpus_in_node);
+
 	while (true) {
 		cur_cpu = cpumask_next(cur_cpu, &available_mask);
 		if (cur_cpu >= nr_cpu_ids) {
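
For illustration, here is a minimal userspace sketch of the wrap-around
pattern the fix restores: pick the next unused CPU on a node, and once
every node CPU has been handed out, clear the bookkeeping mask and start
over instead of spinning forever. Plain unsigned long bitmasks stand in
for the kernel's cpumask API, and pick_next_cpu() is an illustrative
name, not a kernel symbol.

#include <stdio.h>

/*
 * Return the next CPU on the node that is not yet marked in *alloced,
 * marking it as used. If every node CPU is already marked, reset the
 * node's bits in *alloced first so allocation wraps around, mirroring
 * the cpumask_equal()/cpumask_clear() check the patch adds.
 */
static int pick_next_cpu(unsigned long node_mask, unsigned long *alloced)
{
	int cpu;

	/* All CPUs on the node already used: clear and start over. */
	if ((*alloced & node_mask) == node_mask)
		*alloced &= ~node_mask;

	for (cpu = 0; cpu < (int)(8 * sizeof(node_mask)); cpu++) {
		unsigned long bit = 1UL << cpu;

		if ((node_mask & bit) && !(*alloced & bit)) {
			*alloced |= bit;	/* mark as used */
			return cpu;
		}
	}
	return -1;	/* node mask empty */
}

int main(void)
{
	unsigned long node_mask = 0x6;	/* CPUs 1 and 2 on this node */
	unsigned long alloced = 0;
	int i;

	/* Four subchannels on a two-CPU node: wraps to 1, 2, 1, 2. */
	for (i = 0; i < 4; i++)
		printf("subchannel %d -> cpu %d\n", i,
		       pick_next_cpu(node_mask, &alloced));

	return 0;
}

Run against the two-CPU node mask above, this prints CPUs 1, 2, 1, 2 for
the four subchannels. Without the reset at the top of pick_next_cpu(),
the third call would never find a free bit, which is the analogue of the
infinite loop in init_vp_index() that the patch closes.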