From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:38426 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1752339AbdEWSkD (ORCPT );
        Tue, 23 May 2017 14:40:03 -0400
Subject: Patch "PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs" has been added to the 4.9-stable tree
To: kys@microsoft.com, bhelgaas@google.com, gregkh@linuxfoundation.org, longli@microsoft.com
Cc: ,
From:
Date: Tue, 23 May 2017 20:38:16 +0200
Message-ID: <1495564696122221@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

This is a note to let you know that I've just added the patch titled

    PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     pci-hv-specify-cpu_affinity_all-for-msi-affinity-when-32-cpus.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 433fcf6b7b31f1f233dd50aeb9d066a0f6ed4b9d Mon Sep 17 00:00:00 2001
From: "K. Y. Srinivasan"
Date: Fri, 24 Mar 2017 11:07:21 -0700
Subject: PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs

From: K. Y. Srinivasan

commit 433fcf6b7b31f1f233dd50aeb9d066a0f6ed4b9d upstream.

When we have 32 or more CPUs in the affinity mask, we should use a
special constant to specify that to the host. Fix this issue.

Signed-off-by: K. Y. Srinivasan
Signed-off-by: Bjorn Helgaas
Reviewed-by: Long Li
Signed-off-by: Greg Kroah-Hartman

---
 drivers/pci/host/pci-hyperv.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -72,6 +72,7 @@ enum {
 	PCI_PROTOCOL_VERSION_CURRENT = PCI_PROTOCOL_VERSION_1_1
 };
 
+#define CPU_AFFINITY_ALL	-1ULL
 #define PCI_CONFIG_MMIO_LENGTH	0x2000
 #define CFG_PAGE_OFFSET 0x1000
 #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET)
@@ -889,9 +890,13 @@ static void hv_compose_msi_msg(struct ir
 	 * processors because Hyper-V only supports 64 in a guest.
 	 */
 	affinity = irq_data_get_affinity_mask(data);
-	for_each_cpu_and(cpu, affinity, cpu_online_mask) {
-		int_pkt->int_desc.cpu_mask |=
-			(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+	if (cpumask_weight(affinity) >= 32) {
+		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
+	} else {
+		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+			int_pkt->int_desc.cpu_mask |=
+				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
+		}
 	}
 
 	ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt,


Patches currently in stable-queue which might be from kys@microsoft.com are

queue-4.9/pci-hv-allocate-interrupt-descriptors-with-gfp_atomic.patch
queue-4.9/pci-hv-specify-cpu_affinity_all-for-msi-affinity-when-32-cpus.patch
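
For quick reference, below is a minimal userspace sketch (plain C, not
kernel code) of the decision the second hunk introduces: if the affinity
mask contains 32 or more CPUs, report the CPU_AFFINITY_ALL constant to the
host instead of building a per-CPU bitmap. compose_cpu_mask() and
fake_vp_number() are hypothetical stand-ins for hv_compose_msi_msg() and
vmbus_cpu_number_to_vp_number(); they exist only for this illustration.

	/* Standalone sketch of the fixed cpu_mask logic; names are hypothetical. */
	#include <stdint.h>
	#include <stdio.h>

	#define CPU_AFFINITY_ALL (-1ULL)

	/* Stand-in for vmbus_cpu_number_to_vp_number(): identity mapping here. */
	static unsigned int fake_vp_number(unsigned int cpu)
	{
		return cpu;
	}

	static uint64_t compose_cpu_mask(const unsigned int *cpus, unsigned int ncpus)
	{
		uint64_t cpu_mask = 0;
		unsigned int i;

		/*
		 * With 32 or more CPUs in the affinity mask, tell the host to
		 * deliver the interrupt anywhere rather than enumerating CPUs.
		 */
		if (ncpus >= 32)
			return CPU_AFFINITY_ALL;

		for (i = 0; i < ncpus; i++)
			cpu_mask |= 1ULL << fake_vp_number(cpus[i]);

		return cpu_mask;
	}

	int main(void)
	{
		unsigned int few[] = { 0, 2, 5 };

		printf("small mask: 0x%llx\n",
		       (unsigned long long)compose_cpu_mask(few, 3));
		printf("large mask: 0x%llx\n",
		       (unsigned long long)compose_cpu_mask(NULL, 64));
		return 0;
	}

Compiled with any C compiler, the first call prints the explicit bitmap
(0x25) and the second prints 0xffffffffffffffff, i.e. CPU_AFFINITY_ALL.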