* [PATCH v3] riscv: set max_pfn to the PFN of the last page
@ 2020-04-27 6:59 Vincent Chen
2020-05-04 21:13 ` Palmer Dabbelt
0 siblings, 1 reply; 3+ messages in thread
From: Vincent Chen @ 2020-04-27 6:59 UTC (permalink / raw)
To: paul.walmsley, palmer; +Cc: Vincent Chen, linux-riscv, stable
The current max_pfn equals zero. Because of new sanity checks in the
v5.6 kernel, this prevents user space from reading some page information
through /proc, such as /proc/kpagecount. The following messages are
produced by the stress-ng test suite when running "stress-ng --verbose
--physpage 1 -t 1" on a HiFive Unleashed board.
# stress-ng --verbose --physpage 1 -t 1
stress-ng: debug: [109] 4 processors online, 4 processors configured
stress-ng: info: [109] dispatching hogs: 1 physpage
stress-ng: debug: [109] cache allocate: reducing cache level from L3 (too high) to L0
stress-ng: debug: [109] get_cpu_cache: invalid cache_level: 0
stress-ng: info: [109] cache allocate: using built-in defaults as no suitable cache found
stress-ng: debug: [109] cache allocate: default cache size: 2048K
stress-ng: debug: [109] starting stressors
stress-ng: debug: [109] 1 stressor spawned
stress-ng: debug: [110] stress-ng-physpage: started [110] (instance 0)
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd34de000 in /proc/kpagecount, errno=0 (Success)
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
...
stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
stress-ng: debug: [110] stress-ng-physpage: exited [110] (instance 0)
stress-ng: debug: [109] process [110] terminated
stress-ng: info: [109] successful run completed in 1.00s
#
After applying this patch, the kernel can pass the test.
# stress-ng --verbose --physpage 1 -t 1
stress-ng: debug: [104] 4 processors online, 4 processors configured
stress-ng: info: [104] dispatching hogs: 1 physpage
stress-ng: info: [104] cache allocate: using defaults, can't determine cache details from sysfs
stress-ng: debug: [104] cache allocate: default cache size: 2048K
stress-ng: debug: [104] starting stressors
stress-ng: debug: [104] 1 stressor spawned
stress-ng: debug: [105] stress-ng-physpage: started [105] (instance 0)
stress-ng: debug: [105] stress-ng-physpage: exited [105] (instance 0)
stress-ng: debug: [104] process [105] terminated
stress-ng: info: [104] successful run completed in 1.01s
#
Fixes: 0651c263c8e3 ("RISC-V: Move setup_bootmem() to mm/init.c")
Cc: stable@vger.kernel.org
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Yash Shah <yash.shah@sifive.com>
Tested-by: Yash Shah <yash.shah@sifive.com>
Changes since v1:
1. Add Fixes line and Cc stable kernel
Changes since v2:
1. Fix typo in Anup email address
---
arch/riscv/mm/init.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index fab855963c73..157924baa191 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -149,7 +149,8 @@ void __init setup_bootmem(void)
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
set_max_mapnr(PFN_DOWN(mem_size));
- max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ max_low_pfn = max_pfn;
#ifdef CONFIG_BLK_DEV_INITRD
setup_initrd();
--
2.7.4
* Re: [PATCH v3] riscv: set max_pfn to the PFN of the last page
2020-04-27 6:59 [PATCH v3] riscv: set max_pfn to the PFN of the last page Vincent Chen
@ 2020-05-04 21:13 ` Palmer Dabbelt
2021-01-28 1:54 ` Guo Ren
0 siblings, 1 reply; 3+ messages in thread
From: Palmer Dabbelt @ 2020-05-04 21:13 UTC (permalink / raw)
To: vincent.chen; +Cc: vincent.chen, linux-riscv, stable, Paul Walmsley
On Sun, 26 Apr 2020 23:59:24 PDT (-0700), vincent.chen@sifive.com wrote:
> The current max_pfn equals zero. Because of new sanity checks in the
> v5.6 kernel, this prevents user space from reading some page information
> through /proc, such as /proc/kpagecount. The following messages are
> produced by the stress-ng test suite when running "stress-ng --verbose
> --physpage 1 -t 1" on a HiFive Unleashed board.
>
> # stress-ng --verbose --physpage 1 -t 1
> stress-ng: debug: [109] 4 processors online, 4 processors configured
> stress-ng: info: [109] dispatching hogs: 1 physpage
> stress-ng: debug: [109] cache allocate: reducing cache level from L3 (too high) to L0
> stress-ng: debug: [109] get_cpu_cache: invalid cache_level: 0
> stress-ng: info: [109] cache allocate: using built-in defaults as no suitable cache found
> stress-ng: debug: [109] cache allocate: default cache size: 2048K
> stress-ng: debug: [109] starting stressors
> stress-ng: debug: [109] 1 stressor spawned
> stress-ng: debug: [110] stress-ng-physpage: started [110] (instance 0)
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd34de000 in /proc/kpagecount, errno=0 (Success)
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
> ...
> stress-ng: error: [110] stress-ng-physpage: cannot read page count for address 0x3fd32db078 in /proc/kpagecount, errno=0 (Success)
> stress-ng: debug: [110] stress-ng-physpage: exited [110] (instance 0)
> stress-ng: debug: [109] process [110] terminated
> stress-ng: info: [109] successful run completed in 1.00s
> #
>
> After applying this patch, the kernel can pass the test.
>
> # stress-ng --verbose --physpage 1 -t 1
> stress-ng: debug: [104] 4 processors online, 4 processors configured
> stress-ng: info: [104] dispatching hogs: 1 physpage
> stress-ng: info: [104] cache allocate: using defaults, can't determine cache details from sysfs
> stress-ng: debug: [104] cache allocate: default cache size: 2048K
> stress-ng: debug: [104] starting stressors
> stress-ng: debug: [104] 1 stressor spawned
> stress-ng: debug: [105] stress-ng-physpage: started [105] (instance 0)
> stress-ng: debug: [105] stress-ng-physpage: exited [105] (instance 0)
> stress-ng: debug: [104] process [105] terminated
> stress-ng: info: [104] successful run completed in 1.01s
> #
>
> Fixes: 0651c263c8e3 ("RISC-V: Move setup_bootmem() to mm/init.c")
> Cc: stable@vger.kernel.org
>
> Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
> Reviewed-by: Anup Patel <anup@brainfault.org>
> Reviewed-by: Yash Shah <yash.shah@sifive.com>
> Tested-by: Yash Shah <yash.shah@sifive.com>
>
> Changes since v1:
> 1. Add Fixes line and Cc stable kernel
> Changes since v2:
> 1. Fix typo in Anup email address
> ---
> arch/riscv/mm/init.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index fab855963c73..157924baa191 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -149,7 +149,8 @@ void __init setup_bootmem(void)
> memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
>
> set_max_mapnr(PFN_DOWN(mem_size));
> - max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
> + max_pfn = PFN_DOWN(memblock_end_of_DRAM());
> + max_low_pfn = max_pfn;
>
> #ifdef CONFIG_BLK_DEV_INITRD
> setup_initrd();
I'm dropping the Fixes tag, as the actual bug goes back farther than that
commit; that's just as far as the patch will auto-apply.
* Re: [PATCH v3] riscv: set max_pfn to the PFN of the last page
2020-05-04 21:13 ` Palmer Dabbelt
@ 2021-01-28 1:54 ` Guo Ren
0 siblings, 0 replies; 3+ messages in thread
From: Guo Ren @ 2021-01-28 1:54 UTC (permalink / raw)
To: Palmer Dabbelt; +Cc: Vincent Chen, linux-riscv, stable, Paul Walmsley
Hi Palmer & Vincent,
Please have a look at the patch:
https://lore.kernel.org/linux-riscv/20210121063117.3164494-1-guoren@kernel.org/T/#u
It seems our set_max_mapnr() is wrong: it makes pfn_valid() fail when the
DRAM start address is non-zero.
On Tue, May 5, 2020 at 5:14 AM Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
> On Sun, 26 Apr 2020 23:59:24 PDT (-0700), vincent.chen@sifive.com wrote:
> > [...]
>
> I'm dropping the Fixes tag, as the actual bug goes back farther than that
> commit, that's just as far as it'll auto-apply.
>
--
Best Regards
Guo Ren
ML: https://lore.kernel.org/linux-csky/
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
end of thread, other threads:[~2021-01-28 1:54 UTC | newest]
Thread overview: 3+ messages
2020-04-27 6:59 [PATCH v3] riscv: set max_pfn to the PFN of the last page Vincent Chen
2020-05-04 21:13 ` Palmer Dabbelt
2021-01-28 1:54 ` Guo Ren