Date: Wed, 21 Jul 2021 14:12:15 +0100
Message-ID: <878s1z3j68.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Robin Murphy <robin.murphy@arm.com>, Bixuan Cui <cuibixuan@huawei.com>
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	will@kernel.org, weiyongjun1@huawei.com, john.wanghui@huawei.com,
	dingtianhong@huawei.com, thunder.leizhen@huawei.com,
	guohanjun@huawei.com, joro@8bytes.org, jean-philippe@linaro.org,
	Jonathan.Cameron@huawei.com, song.bao.hua@hisilicon.com,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH -next] iommu/arm-smmu-v3: Add suspend and resume support
In-Reply-To: <4e506481-5f6c-9c5e-eda3-300861581080@arm.com>
References: <20210721013350.17664-1-cuibixuan@huawei.com>
	<4e506481-5f6c-9c5e-eda3-300861581080@arm.com>
On Wed, 21 Jul 2021 12:42:14 +0100,
Robin Murphy <robin.murphy@arm.com> wrote:
>
> [ +Marc for MSI bits ]
>
> On 2021-07-21 02:33, Bixuan Cui wrote:
> > Add suspend and resume support for arm-smmu-v3 by low-power mode.
> >
> > When the smmu is suspended, it is powered off and the registers are
> > cleared. So save the msi_msg context during msi interrupt initialization
> > of smmu. When resume happens it calls arm_smmu_device_reset() to restore
> > the registers.
> >
> > Signed-off-by: Bixuan Cui
> > Reviewed-by: Wei Yongjun
> > Reviewed-by: Zhen Lei
> > Reviewed-by: Ding Tianhong
> > Reviewed-by: Hanjun Guo
> > ---
> >
> >  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 72 ++++++++++++++++++---
> >  1 file changed, 64 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 235f9bdaeaf2..bf1163acbcb1 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -40,6 +40,7 @@ MODULE_PARM_DESC(disable_bypass,
> >  static bool disable_msipolling;
> >  module_param(disable_msipolling, bool, 0444);
> > +static bool bypass;
> >  MODULE_PARM_DESC(disable_msipolling,
> >  	"Disable MSI-based polling for CMD_SYNC completion.");
> > @@ -3129,11 +3130,37 @@ static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)
> >  	doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> >  	doorbell &= MSI_CFG0_ADDR_MASK;
> > +	/* Saves the msg context for resume if desc->msg is empty */
> > +	if (desc->msg.address_lo == 0 && desc->msg.address_hi == 0) {
> > +		desc->msg.address_lo = msg->address_lo;
> > +		desc->msg.address_hi = msg->address_hi;
> > +		desc->msg.data = msg->data;
> > +	}
>
> My gut feeling is that this is something a device driver maybe
> shouldn't be poking into, but I'm not entirely familiar with the area
> :/

Certainly not. If you rely on the message being stored into the
descriptors, then implement this in the core code, like we do for PCI.
> > > + > > writeq_relaxed(doorbell, smmu->base + cfg[0]); > > writel_relaxed(msg->data, smmu->base + cfg[1]); > > writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]); > > } > > +static void arm_smmu_resume_msis(struct arm_smmu_device *smmu) > > +{ > > + struct msi_desc *desc; > > + struct device *dev = smmu->dev; > > + > > + for_each_msi_entry(desc, dev) { > > + switch (desc->platform.msi_index) { > > + case EVTQ_MSI_INDEX: > > + case GERROR_MSI_INDEX: > > + case PRIQ_MSI_INDEX: > > + arm_smmu_write_msi_msg(desc, &(desc->msg)); Consider using get_cached_msi_msg() instead of using the internals of the descriptor. > > + break; > > + default: > > + continue; > > + > > + } > > + } > > +} > > + > > static void arm_smmu_setup_msis(struct arm_smmu_device *smmu) > > { > > struct msi_desc *desc; > > @@ -3184,11 +3211,17 @@ static void arm_smmu_setup_msis(struct arm_smmu_device *smmu) > > devm_add_action(dev, arm_smmu_free_msis, dev); > > } > > -static void arm_smmu_setup_unique_irqs(struct arm_smmu_device > > *smmu) > > +static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu, bool resume_mode) > > { > > int irq, ret; > > - arm_smmu_setup_msis(smmu); > > + if (!resume_mode) > > + arm_smmu_setup_msis(smmu); > > + else { > > + /* The irq doesn't need to be re-requested during resume */ > > + arm_smmu_resume_msis(smmu); > > + return; > > What about wired IRQs? Yeah. I assume the SMMU needs to be told which event gets signalled using MSIs or IRQs? Or is that implied by the MSI being configured or not (I used to know the answer to that, but I have long paged it out). 
> > > + } > > /* Request interrupt lines */ > > irq = smmu->evtq.q.irq; > > @@ -3230,7 +3263,7 @@ static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu) > > } > > } > > -static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) > > +static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu, bool resume_mode) > > { > > int ret, irq; > > u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN; > > @@ -3257,7 +3290,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) > > if (ret < 0) > > dev_warn(smmu->dev, "failed to enable combined irq\n"); > > } else > > - arm_smmu_setup_unique_irqs(smmu); > > + arm_smmu_setup_unique_irqs(smmu, resume_mode); > > if (smmu->features & ARM_SMMU_FEAT_PRI) > > irqen_flags |= IRQ_CTRL_PRIQ_IRQEN; > > @@ -3282,7 +3315,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu) > > return ret; > > } > > -static int arm_smmu_device_reset(struct arm_smmu_device *smmu, > > bool bypass) > > +static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool resume_mode) > > Er, what about the use of "bypass" towards the end of the > function. Have you even compiled this? The author of the patch has conveniently made it a global value (see line 3 of the patch). I'm sure it doesn't break anything... :-( > > > { > > int ret; > > u32 reg, enables; > > @@ -3392,7 +3425,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) > > } > > } > > - ret = arm_smmu_setup_irqs(smmu); > > + ret = arm_smmu_setup_irqs(smmu, resume_mode); > > if (ret) { > > dev_err(smmu->dev, "failed to setup irqs\n"); > > return ret; > > @@ -3749,6 +3782,24 @@ static void __iomem *arm_smmu_ioremap(struct device *dev, resource_size_t start, > > return devm_ioremap_resource(dev, &res); > > } > > +static int __maybe_unused arm_smmu_suspend(struct device *dev) > > +{ > > + /* > > + * The smmu is powered off and related registers are automatically > > + * cleared when suspend. No need to do anything. 
> > + */ > > Is that guaranteed? What if suspend is only implemented by external > clock-gating? +1. This seems awfully implementation/integration specific. I'd be more in favour of a controlled teardown. > > > + return 0; > > +} > > + > > +static int __maybe_unused arm_smmu_resume(struct device *dev) > > +{ > > + struct arm_smmu_device *smmu = dev_get_drvdata(dev); > > + > > + arm_smmu_device_reset(smmu, true); > > + > > + return 0; > > +} > > + > > static int arm_smmu_device_probe(struct platform_device *pdev) > > { > > int irq, ret; > > @@ -3756,7 +3807,6 @@ static int arm_smmu_device_probe(struct platform_device *pdev) > > resource_size_t ioaddr; > > struct arm_smmu_device *smmu; > > struct device *dev = &pdev->dev; > > - bool bypass; > > Once again... > > > smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL); > > if (!smmu) > > @@ -3831,7 +3881,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev) > > platform_set_drvdata(pdev, smmu); > > /* Reset the device */ > > - ret = arm_smmu_device_reset(smmu, bypass); > > ...either this is based on some out-of-tree hack which introduced its > own uninitialised-usage bug here, or it doesn't even compile. > > > + ret = arm_smmu_device_reset(smmu, false); > > if (ret) > > return ret; > > @@ -3884,6 +3934,11 @@ static const struct of_device_id > > arm_smmu_of_match[] = { > > }; > > MODULE_DEVICE_TABLE(of, arm_smmu_of_match); > > +static const struct dev_pm_ops arm_smmu_pm_ops = { > > + .suspend = arm_smmu_suspend, > > + .resume = arm_smmu_resume, > > Either use SET_SYSTEM_SLEEP_PM_OPS() here or drop the __maybe_unused > annmotations above - they're pointless if the callbacks are referenced > unconditionally. Thanks, M. -- Without deviation from the norm, progress is not possible. _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel