Date: Mon, 07 Mar 2022 14:01:02 +0000
Message-ID: <87a6e1276p.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: John Garry <john.garry@huawei.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
    chenxiang <chenxiang66@hisilicon.com>,
    Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>,
    "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
    "liuqi (BA)" <liuqi115@huawei.com>,
    wangxiongfeng2@huawei.com,
    David Decotigny <decot@google.com>
Subject: Re: PCI MSI issue for maxcpus=1
In-Reply-To: <452d97ed-459f-7936-99e4-600380608615@huawei.com>
References: <78615d08-1764-c895-f3b7-bfddfbcbdfb9@huawei.com>
    <87a6g8vp8k.wl-maz@kernel.org>
    <19d55cdf-9ef7-e4a3-5ae5-0970f0d7751b@huawei.com>
    <87v8yjyjc0.wl-maz@kernel.org>
    <87k0ey9122.wl-maz@kernel.org>
    <5f529b4e-1f6c-5a7d-236c-09ebe3a7db29@huawei.com>
    <1cbe7daa-8003-562b-06fa-5a50f7ee6ed2@huawei.com>
    <87a6e4tnkm.wl-maz@kernel.org>
    <452d97ed-459f-7936-99e4-600380608615@huawei.com>

Hi John,
On Mon, 07 Mar 2022 13:48:11 +0000, John Garry wrote:
> 
> Hi Marc,
> 
> > diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
> > index 2bdfce5edafd..97e9eb9aecc6 100644
> > --- a/kernel/irq/msi.c
> > +++ b/kernel/irq/msi.c
> > @@ -823,6 +823,19 @@ static int msi_init_virq(struct irq_domain *domain, int virq, unsigned int vflag
> >  	if (!(vflags & VIRQ_ACTIVATE))
> >  		return 0;
> >  
> > +	if (!(vflags & VIRQ_CAN_RESERVE)) {
> > +		/*
> > +		 * If the interrupt is managed but no CPU is available
> > +		 * to service it, shut it down until better times.
> > +		 */
> > +		if (irqd_affinity_is_managed(irqd) &&
> > +		    !cpumask_intersects(irq_data_get_affinity_mask(irqd),
> > +					cpu_online_mask)) {
> > +			irqd_set_managed_shutdown(irqd);
> > +			return 0;
> > +		}
> > +	}
> > +
> >  	ret = irq_domain_activate_irq(irqd, vflags & VIRQ_CAN_RESERVE);
> >  	if (ret)
> >  		return ret;
> > 
> 
> Yeah, that seems to solve the issue. I will test it a bit more. Thanks.

For the record, I have pushed a branch at [1]. The patch is extremely
similar, just moved up a tiny bit to avoid duplicating the
!VIRQ_CAN_RESERVE case.

> We need to check the isolcpus cmdline issue as well - wang xiongfeng,
> please assist here. I assume that this feature just never worked for
> arm64 since it was added.

That one is still on my list. isolcpus has certainly had as little
testing as you can imagine.

> Out of interest, is the virtio managed interrupts support just in
> your sandbox? You did mention earlier in the thread that you were
> considering adding this feature.

As it turns out, QEMU's non-legacy virtio support allows the kernel to
do the right thing (multi-queue support and affinity management). Using
kvmtool, I only get a single interrupt although the device pretends to
support some MQ extension. I haven't dug into it yet.

	M.

-- 
Without deviation from the norm, progress is not possible.
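
[Editorial sketch, not part of the original mail.] The hunk quoted above
boils down to one decision: if an interrupt's affinity is kernel-managed
and its affinity mask has no online CPU (as happens with maxcpus=1 for
queues bound to other CPUs), park it instead of activating it. The toy
user-space C model below illustrates only that decision; toy_irq and
maybe_park_managed_irq() are invented names, and plain bitmasks stand in
for the kernel's struct irq_data, cpumask_intersects() and
irqd_set_managed_shutdown().

/*
 * Toy model of the check added in msi_init_virq() above: a managed
 * interrupt whose affinity mask does not intersect the online CPUs is
 * parked (shut down) rather than activated.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_irq {
	unsigned long affinity_mask;	/* bit N set => IRQ may run on CPU N */
	bool managed;			/* affinity is kernel-managed        */
	bool shutdown;			/* parked until a suitable CPU is up */
};

/* Returns true when the IRQ was parked rather than activated. */
static bool maybe_park_managed_irq(struct toy_irq *irq, unsigned long online_mask)
{
	if (irq->managed && !(irq->affinity_mask & online_mask)) {
		irq->shutdown = true;
		return true;
	}
	return false;
}

int main(void)
{
	/* maxcpus=1: only CPU0 is online; this queue is bound to CPUs 16-23. */
	struct toy_irq q = { .affinity_mask = 0xff0000UL, .managed = true };
	unsigned long online = 0x1UL;

	if (maybe_park_managed_irq(&q, online))
		printf("queue IRQ parked: no online CPU in its affinity mask\n");
	else
		printf("queue IRQ activated normally\n");

	return 0;
}

Running it prints the "parked" message, mirroring what the real patch
does at activation time; when the missing CPUs come online, managed-irq
hotplug handling (not modelled here) brings the interrupt back up.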