From: "yf.wang--- via iommu" <iommu@lists.linux-foundation.org>
To: Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
"Matthias Brugger" <matthias.bgg@gmail.com>,
"open list:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
open list <linux-kernel@vger.kernel.org>,
"moderated list:ARM/Mediatek SoC support"
<linux-arm-kernel@lists.infradead.org>,
"moderated list:ARM/Mediatek SoC support"
<linux-mediatek@lists.infradead.org>
Cc: wsd_upstream@mediatek.com, Libo Kang <Libo.Kang@mediatek.com>,
Yunfei Wang <yf.wang@mediatek.com>,
stable@vger.kernel.org, Ning Li <Ning.Li@mediatek.com>
Subject: [PATCH] iommu/iova: Free all CPU rcache for retry when iova alloc failure
Date: Fri, 4 Mar 2022 12:46:34 +0800
Message-ID: <20220304044635.4273-1-yf.wang@mediatek.com>
From: Yunfei Wang <yf.wang@mediatek.com>
In alloc_iova_fast(), when an IOVA allocation request fails, the
function frees the IOVA ranges held in the per-CPU IOVA rcaches and
in the global IOVA rcache, and then retries the allocation. However,
the per-CPU rcaches are only flushed for each online CPU, so the
cleanup is incomplete: the rcaches of CPUs that are not online are
never flushed. Since IOVAs held in rcaches also contribute to
fragmentation of the IOVA space, the retry may still fail.

Therefore, flush the rcaches of every possible CPU by using
for_each_possible_cpu() instead of for_each_online_cpu(), as
free_iova_rcaches() already does, so that all rcaches are completely
released before retrying to replenish IOVAs.
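
To make the difference concrete, here is a minimal sketch (not part
of the patch: the iterator macros and free_cpu_cached_iovas() are the
real kernel APIs, but the wrapper function below is hypothetical):

  #include <linux/cpumask.h>
  #include <linux/iova.h>

  /*
   * Hypothetical helper, for illustration only.
   *
   * for_each_online_cpu() iterates cpu_online_mask and skips CPUs
   * that are hot-unplugged at the time of the call, leaving their
   * per-CPU rcaches (and the IOVA ranges cached in them) untouched.
   * for_each_possible_cpu() iterates cpu_possible_mask, which covers
   * every CPU the system may ever bring online, so every per-CPU
   * rcache is flushed.
   */
  static void flush_all_cpu_rcaches(struct iova_domain *iovad)
  {
  	unsigned int cpu;

  	for_each_possible_cpu(cpu)	/* was: for_each_online_cpu(cpu) */
  		free_cpu_cached_iovas(cpu, iovad);
  }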
Signed-off-by: Yunfei Wang <yf.wang@mediatek.com>
Cc: <stable@vger.kernel.org> # 5.4.*
---
drivers/iommu/iova.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b28c9435b898..5a0637cd7bc2 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -460,7 +460,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 
 		/* Try replenishing IOVAs by flushing rcache. */
 		flush_rcache = false;
-		for_each_online_cpu(cpu)
+		for_each_possible_cpu(cpu)
 			free_cpu_cached_iovas(cpu, iovad);
 		free_global_cached_iovas(iovad);
 		goto retry;
--
2.18.0
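
For reference, free_iova_rcaches(), which the commit message cites as
the model, already walks every possible CPU. A simplified sketch,
paraphrased from drivers/iommu/iova.c of the same era (depot handling
and version-specific details omitted):

  static void free_iova_rcaches(struct iova_domain *iovad)
  {
  	struct iova_rcache *rcache;
  	struct iova_cpu_rcache *cpu_rcache;
  	unsigned int cpu;
  	int i;

  	for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
  		rcache = &iovad->rcaches[i];
  		/* Offline CPUs are covered too, so no magazine is leaked. */
  		for_each_possible_cpu(cpu) {
  			cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
  			iova_magazine_free(cpu_rcache->loaded);
  			iova_magazine_free(cpu_rcache->prev);
  		}
  		free_percpu(rcache->cpu_rcaches);
  	}
  }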