From: Catalin Marinas <catalin.marinas@arm.com>
To: Zhenyu Ye <yezhenyu2@huawei.com>
Cc: linux-arch@vger.kernel.org, suzuki.poulose@arm.com,
maz@kernel.org, linux-kernel@vger.kernel.org,
xiexiangyou@huawei.com, steven.price@arm.com,
zhangshaokun@hisilicon.com, linux-mm@kvack.org, arm@kernel.org,
prime.zeng@hisilicon.com, guohanjun@huawei.com, olof@lixom.net,
kuhn.chenqun@huawei.com, will@kernel.org,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
Date: Sun, 12 Jul 2020 13:03:36 +0100
Message-ID: <20200712120335.GA30896@gaia>
In-Reply-To: <b34e3d42-faaa-73ba-9b54-8e4017514ee0@huawei.com>
On Sat, Jul 11, 2020 at 02:50:46PM +0800, Zhenyu Ye wrote:
> On 2020/7/11 2:31, Catalin Marinas wrote:
> > On Fri, Jul 10, 2020 at 05:44:20PM +0800, Zhenyu Ye wrote:
> >> - if ((end - start) >= (MAX_TLBI_OPS * stride)) {
> >> + if ((!cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
> >> + (end - start) >= (MAX_TLBI_OPS * stride)) ||
> >> + pages >= MAX_TLBI_RANGE_PAGES) {
> >> flush_tlb_mm(vma->vm_mm);
> >> return;
> >> }
> >
> > I think we can use strictly greater here rather than greater or equal.
> > MAX_TLBI_RANGE_PAGES can be encoded as num 31, scale 3.
>
> Sorry, we can't.
> For a boundary value (such as 2^6), we have two ways to express it
> in TLBI RANGE operations:
> 1. scale = 0, num = 31.
> 2. scale = 1, num = 0.
>
> I used the second way in the following implementation. However, for
> MAX_TLBI_RANGE_PAGES, we can only use scale = 3, num = 31.
> So if we use strictly greater here, an error will occur when the
> number of pages equals MAX_TLBI_RANGE_PAGES.
You are right, I got confused by the __TLBI_RANGE_NUM() macro, which
doesn't return the actual 'num' for the TLBI argument (that would go
from 0 to 31). After subtracting 1, num ends up ranging from -1 to 30,
so we never get the maximum range. I think for scale 3 and num 31 this
would be 8GB with 4K pages, so the maximum we'd cover is 8GB minus 64K
pages of 4K each, i.e. 8GB - 256MB.
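
For reference, here's a quick user-space check of the encoding
arithmetic (a standalone sketch that only mirrors the kernel's
__TLBI_RANGE_PAGES() definition; the values are easy to verify by
hand):

#include <stdio.h>

/* Mirrors the kernel's definition: pages covered by a (num, scale) pair. */
#define __TLBI_RANGE_PAGES(num, scale)  (((num) + 1) << (5 * (scale) + 1))

int main(void)
{
        /* The boundary value 2^6 has two possible encodings. */
        printf("scale=0 num=31 -> %d pages\n", __TLBI_RANGE_PAGES(31, 0)); /* 64 */
        printf("scale=1 num=0  -> %d pages\n", __TLBI_RANGE_PAGES(0, 1));  /* 64 */

        /* The maximum range has a single encoding: scale = 3, num = 31. */
        printf("max            -> %d pages\n", __TLBI_RANGE_PAGES(31, 3)); /* 2097152, i.e. 8GB / 4K */

        /* With num capped at 30, we fall 64K pages (256MB with 4K pages) short. */
        printf("shortfall      -> %d pages\n",
               __TLBI_RANGE_PAGES(31, 3) - __TLBI_RANGE_PAGES(30, 3));     /* 65536 */

        return 0;
}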
> There are two ways to avoid this bug:
> 1. Just keep 'greater or equal' here. The ARM64 specification does
> not specify how we should flush TLB entries in this case, and
> flush_tlb_mm() is also a good choice for such a wide range of pages.
I'll go for this option, I don't think it would make much difference in
practice if we stop at 8GB - 256M range.
> 2. Add a check in the loop, like the following (this may make the code a bit ugly):
>
> num = __TLBI_RANGE_NUM(pages, scale) - 1;
>
> /* scale = 4, num = 0 is equal to scale = 3, num = 31. */
> if (scale == 4 && num == 0) {
> scale = 3;
> num = 31;
> }
>
> if (num >= 0) {
> ...
>
> Which one do you prefer, and how do you want to fix this error?
> Should I just send a fix patch again?
I'll fold the diff below and refresh the patch:
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 1eb0588718fb..0300e433ffe6 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -147,9 +147,13 @@ static inline unsigned long get_trans_granule(void)
 #define __TLBI_RANGE_PAGES(num, scale)  (((num) + 1) << (5 * (scale) + 1))
 #define MAX_TLBI_RANGE_PAGES            __TLBI_RANGE_PAGES(31, 3)
 
+/*
+ * Generate 'num' values from -1 to 30 with -1 rejected by the
+ * __flush_tlb_range() loop below.
+ */
 #define TLBI_RANGE_MASK                 GENMASK_ULL(4, 0)
 #define __TLBI_RANGE_NUM(range, scale)  \
-        (((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)
+        ((((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
 
 /*
  * TLB Invalidation
@@ -285,8 +289,8 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
         pages = (end - start) >> PAGE_SHIFT;
 
         if ((!cpus_have_const_cap(ARM64_HAS_TLB_RANGE) &&
-             (end - start) > (MAX_TLBI_OPS * stride)) ||
-            pages > MAX_TLBI_RANGE_PAGES) {
+             (end - start) >= (MAX_TLBI_OPS * stride)) ||
+            pages >= MAX_TLBI_RANGE_PAGES) {
                 flush_tlb_mm(vma->vm_mm);
                 return;
         }
@@ -306,6 +310,10 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
          * Start from scale = 0, flush the corresponding number of pages
          * ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
          * until no pages left.
+         *
+         * Note that certain ranges can be represented by either num = 31 and
+         * scale or num = 0 and scale + 1. The loop below favours the latter
+         * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
          */
         while (pages > 0) {
                 if (!cpus_have_const_cap(ARM64_HAS_TLB_RANGE) ||
@@ -323,7 +331,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
                         continue;
                 }
 
-                num = __TLBI_RANGE_NUM(pages, scale) - 1;
+                num = __TLBI_RANGE_NUM(pages, scale);
                 if (num >= 0) {
                         addr = __TLBI_VADDR_RANGE(start, asid, scale,
                                                   num, tlb_level);
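
To see how the loop behaves with the fixed macro, here's a rough
user-space simulation of the decomposition (a sketch only: it assumes a
PAGE_SIZE stride and skips the address bookkeeping and the actual TLBI
instructions):

#include <stdio.h>

#define TLBI_RANGE_MASK                 0x1fUL
#define __TLBI_RANGE_NUM(range, scale)  \
        ((((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
#define __TLBI_RANGE_PAGES(num, scale)  (((num) + 1) << (5 * (scale) + 1))

/* Print the sequence of TLBI operations issued for a given page count. */
static void decompose(unsigned long pages)
{
        int scale = 0;

        printf("%lu pages:", pages);
        while (pages > 0) {
                long num;

                if (pages % 2 == 1) {
                        printf(" [page]");      /* single-page TLBI */
                        pages -= 1;
                        continue;
                }

                /*
                 * The macro subtracts 1 in unsigned arithmetic; converting
                 * the result to a signed type yields -1 for a zero field,
                 * as in the kernel.
                 */
                num = (long)__TLBI_RANGE_NUM(pages, scale);
                if (num >= 0) {
                        printf(" [scale=%d num=%ld]", scale, num);
                        pages -= __TLBI_RANGE_PAGES(num, scale);
                }
                scale++;
        }
        printf("\n");
}

int main(void)
{
        decompose(7);           /* [page] [scale=0 num=2] */
        decompose(64);          /* [scale=1 num=0], not [scale=0 num=31] */
        decompose(2097151);     /* just below MAX_TLBI_RANGE_PAGES */
        return 0;
}

Note how 64 pages come out as scale = 1, num = 0 rather than scale = 0,
num = 31, matching the comment added above.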
> >> - /* Convert the stride into units of 4k */
> >> - stride >>= 12;
> >> + dsb(ishst);
> >>
> >> - start = __TLBI_VADDR(start, asid);
> >> - end = __TLBI_VADDR(end, asid);
> >> +	/*
> >> +	 * When the CPU does not support the TLBI RANGE feature, we flush
> >> +	 * the TLB entries one by one at the granularity of 'stride'.
> >> +	 * When the CPU does support the TLBI RANGE feature, then:
> >> +	 * 1. If 'pages' is odd, flush the first page through a non-RANGE
> >> +	 *    instruction;
> >> +	 * 2. For the remaining pages: the minimum range granularity is
> >> +	 *    decided by 'scale', so we cannot always flush all pages with
> >> +	 *    a single instruction.
> >> +	 *    Here, we start from scale = 0, flush the corresponding number
> >> +	 *    of pages (from 2^(5*scale + 1) to 2^(5*(scale + 1) + 1)), and
> >> +	 *    increase it until no pages are left.
> >> +	 */
> >> + while (pages > 0) {
> >
> > I did some simple checks on ((end - start) % stride) and it never
> > triggered. I had a slight worry that pages could become negative (and
> > we'd loop forever, since it's unsigned long) for some mismatched
> > stride and flush size. It doesn't seem likely, though.
>
> The start and end are round_down/up in the function:
>
> start = round_down(start, stride);
> end = round_up(end, stride);
>
> So the flush size and stride will never mismatch.
Right.
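
For example, a quick user-space check (using simplified,
power-of-two-only versions of the kernel's round_down()/round_up()
helpers):

#include <stdio.h>

/* Simplified versions of the kernel helpers; 'y' must be a power of two. */
#define round_down(x, y)        ((x) & ~((y) - 1))
#define round_up(x, y)          ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
        unsigned long stride = 1UL << 21;       /* 2MB, e.g. a PMD-level stride */
        unsigned long start = 0x40001000UL;     /* deliberately misaligned */
        unsigned long end = 0x40201000UL;

        start = round_down(start, stride);
        end = round_up(end, stride);

        /* (end - start) is now always a whole multiple of stride. */
        printf("start=%#lx end=%#lx remainder=%lu\n",
               start, end, (end - start) % stride);
        return 0;
}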
To make sure we don't miss any corner cases, I'll try to run the
algorithm above through CBMC (a model checker), hopefully next week if
I find some time.
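
In case anyone wants to try the same exercise, a minimal harness along
these lines should do (only a sketch, not necessarily the model I'll
end up using):

#include <assert.h>

#define TLBI_RANGE_MASK                 0x1fUL
#define __TLBI_RANGE_NUM(range, scale)  \
        ((((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
#define __TLBI_RANGE_PAGES(num, scale)  (((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES            __TLBI_RANGE_PAGES(31UL, 3)

/* CBMC treats an undefined nondet_*() function as a free input. */
unsigned long nondet_ulong(void);

int main(void)
{
        unsigned long pages = nondet_ulong();
        int scale = 0;
        int iters = 0;

        /* Larger ranges take the flush_tlb_mm() path instead. */
        __CPROVER_assume(pages > 0 && pages < MAX_TLBI_RANGE_PAGES);

        /* One odd-page peel plus scales 0-3 is enough, so bound at 8. */
        while (pages > 0 && iters++ < 8) {
                long num;

                if (pages % 2 == 1) {   /* single-page TLBI */
                        pages -= 1;
                        continue;
                }

                num = (long)__TLBI_RANGE_NUM(pages, scale);
                if (num >= 0)           /* range TLBI */
                        pages -= __TLBI_RANGE_PAGES(num, scale);
                scale++;
        }

        assert(pages == 0);     /* terminated with no pages left over */
        return 0;
}

This can be checked with e.g. 'cbmc harness.c --unwind 9'.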
--
Catalin