From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:46356
        "EHLO mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1031610AbdEWUWG (ORCPT );
        Tue, 23 May 2017 16:22:06 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        "K. Y. Srinivasan", Bjorn Helgaas, Long Li
Subject: [PATCH 4.11 178/197] PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs
Date: Tue, 23 May 2017 22:08:59 +0200
Message-Id: <20170523200839.364450569@linuxfoundation.org>
In-Reply-To: <20170523200821.666872592@linuxfoundation.org>
References: <20170523200821.666872592@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: K. Y. Srinivasan

commit 433fcf6b7b31f1f233dd50aeb9d066a0f6ed4b9d upstream.

When we have 32 or more CPUs in the affinity mask, we should use a
special constant to specify that to the host.  Fix this issue.

Signed-off-by: K. Y. Srinivasan
Signed-off-by: Bjorn Helgaas
Reviewed-by: Long Li
Signed-off-by: Greg Kroah-Hartman

---
 drivers/pci/host/pci-hyperv.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -72,6 +72,7 @@ enum {
 	PCI_PROTOCOL_VERSION_CURRENT = PCI_PROTOCOL_VERSION_1_1
 };
 
+#define CPU_AFFINITY_ALL	-1ULL
 #define PCI_CONFIG_MMIO_LENGTH	0x2000
 #define CFG_PAGE_OFFSET 0x1000
 #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET)
@@ -897,9 +898,13 @@ static void hv_compose_msi_msg(struct ir
 	 * processors because Hyper-V only supports 64 in a guest.
 	 */
 	affinity = irq_data_get_affinity_mask(data);
-	for_each_cpu_and(cpu, affinity, cpu_online_mask) {
-		int_pkt->int_desc.cpu_mask |=
-			(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+	if (cpumask_weight(affinity) >= 32) {
+		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
+	} else {
+		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+			int_pkt->int_desc.cpu_mask |=
+				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+		}
 	}
 
 	ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt,
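
For reference, below is a minimal userspace sketch of the cpu_mask logic this
patch introduces. It is an illustration only, not kernel code: it assumes vCPU
numbers equal CPU indices, and compose_cpu_mask() is a hypothetical helper
standing in for the hunk in hv_compose_msi_msg().

/*
 * Sketch of the patched mask construction: with 32 or more CPUs in the
 * affinity set, send the host the CPU_AFFINITY_ALL constant instead of
 * a per-vCPU bitmap.
 */
#include <stdint.h>
#include <stdio.h>

#define CPU_AFFINITY_ALL (-1ULL)

/* Build the 64-bit vCPU mask for 'ncpus' CPUs (vCPU n == CPU n here). */
static uint64_t compose_cpu_mask(unsigned int ncpus)
{
	uint64_t cpu_mask = 0;
	unsigned int cpu;

	if (ncpus >= 32)
		return CPU_AFFINITY_ALL;	/* let the host pick the target */

	for (cpu = 0; cpu < ncpus; cpu++)
		cpu_mask |= 1ULL << cpu;	/* one bit per vCPU */

	return cpu_mask;
}

int main(void)
{
	printf("8 CPUs:  0x%016llx\n",
	       (unsigned long long)compose_cpu_mask(8));
	printf("32 CPUs: 0x%016llx\n",
	       (unsigned long long)compose_cpu_mask(32));
	return 0;
}

With 8 CPUs this prints 0x00000000000000ff; at 32 or more it collapses to
0xffffffffffffffff (CPU_AFFINITY_ALL), asking the host to target any vCPU
rather than enumerating them individually.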