From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 22 May 2023 16:58:33 +0200
From: Joerg Roedel
To: Vasant Hegde
Cc: Jerry Snitselaar, Peng Zhang, Robin Murphy, will@kernel.org,
	iommu@lists.linux.dev, linux-kernel@vger.kernel.org, Li Bin,
	Xie XiuQi, Yang Yingliang, Suravee Suthikulpanit
Subject: Re: [PATCH] iommu: Avoid softlockup and rcu stall in fq_flush_timeout().
References: <20230216071148.2060-1-zhangpeng.00@bytedance.com>
	<7bede423-690c-4f6a-9c23-def4ad08344e@amd.com>
	<21f69b43-a1e7-6c84-a360-dae410bedb3f@amd.com>
In-Reply-To: <21f69b43-a1e7-6c84-a360-dae410bedb3f@amd.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On Fri, Apr 28, 2023 at 11:14:54AM +0530, Vasant Hegde wrote:
> Ping. Any suggestion on below proposal (schedule work on each CPU to free iova)?

Optimizing the flush-timeout path seems to be treating the symptoms
rather than the cause. The first question to look into is why so many
CPUs are competing for the IOVA allocator lock in the first place. That
is exactly the situation the flush-queue code exists to avoid, but it
obviously does not scale to the workloads tested here. Any chance to
check why?

My guess is that the allocations are too big to be covered by the
allocation sizes supported by the flush-queue code. Maybe that is
something that can be fixed. Or the flush-queue code could even be
changed to auto-adapt to the allocation patterns of the device driver?

Regards,

	Joerg