* Re: Contributing to OpenRISC Linux
[not found] <2613c1c6-2a2f-4ceb-8adb-f819961ec61f@gmail.com>
@ 2025-01-12 7:28 ` Stafford Horne
2025-01-13 6:12 ` Sahil Siddiq
0 siblings, 1 reply; 11+ messages in thread
From: Stafford Horne @ 2025-01-12 7:28 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: Linux OpenRISC
Hi Sunil,
+CC List
yes, the cacheinfo task is still open. There are many things that are still not
implemented in OpenRISC; you can always just look under the kernel
Documentation/features.
For example:
< shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
...
How far have you come with OpenRISC so far? If you haven't already I suggest
working through:
- Get a simulator. I use QEMU for most development as it's faster and supports
more memory than most FPGAs. Final verification can be done on an FPGA.
- Get a working compiler toolchain.
- Compile and boot the openrisc kernel.
- Build a userspace environment, either buildroot, toybox or busybox.
I have some tools to help with this in or1k-utils [1]; there are also prebuilt
environments and docs in the linux kernel [2] and qemu [3].
At the moment, I am also thinking of what to work on next for OpenRISC, there is:
- kexec
- jump_label
- kprobes
- perf_events
- ftrace
[1] https://github.com/stffrdhrn/or1k-utils
[2] https://docs.kernel.org/arch/openrisc/openrisc_port.html
[3] https://wiki.qemu.org/Documentation/Platforms/OpenRISC
On Sat, Jan 11, 2025 at 05:51:17PM +0530, Sahil Siddiq wrote:
> Hi,
>
> While hunting for project ideas related to Linux kernel
> development, I came across the "OpenRISC Linux Feature
> Development" [1] project on The FOSSi Foundation's GSoC
> page.
>
> While I am not eligible to take part in GSoC, I am still
> interested in working on the tasks in this project. I
> noticed that progress has been made in adding rseq support
> [2]. However, I am unable to tell if progress has been
> made in the second task (reporting CPU details using the
> cacheinfo API).
>
> If the second task is still open, I would like to give it
> a shot. I believe I'll get to learn a lot while working on
> this.
>
> Thanks,
> Sahil
>
> [1] https://fossi-foundation.org/gsoc/gsoc24-ideas#openrisc-linux-feature-development
> [2] https://lore.kernel.org/openrisc/20250110102248.3295944-1-shorne@gmail.com/T/#t
>
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Contributing to OpenRISC Linux
2025-01-12 7:28 ` Contributing to OpenRISC Linux Stafford Horne
@ 2025-01-13 6:12 ` Sahil Siddiq
2025-01-13 6:31 ` Stafford Horne
0 siblings, 1 reply; 11+ messages in thread
From: Sahil Siddiq @ 2025-01-13 6:12 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
Thank you for your reply.
On 1/12/25 12:58 PM, Stafford Horne wrote:
> Hi Sunil,
>
> +CC List
>
> yes, the cacheinfo task is still open. There are many things that are still not
> implemented in OpenRISC, you can always just look under the kernel
> Documentation/features.
>
> For example:
>
> < shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
> Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
> Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
> Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
> Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
> Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
> ...
Got it. I did find this list in the online documentation [1] but I couldn't find
the cacheinfo task listed there.
> How far have you come with OpenRISC so far? If you haven't already I suggest
> working through:
>
> - Get a simulator, I use QEMU for most development as it's faster and supports
> more memory than most FPGA. Final verification can be done on an FPGA.
> - Get a working compiler toolchain.
> - Compile and boot the openrisc kernel.
> - Build a userspace environment, either buildroot, toybox or busybox.
>
> I have some tools to help with this in or1k-utils [1], also there are prebuilt
> environments and docs in the linux kernel [2] and qemu [3].
I don't have an environment set up yet. I'll start with the steps above. I'll use
QEMU for development. I don't have an FPGA with me currently.
> At the moment, I am also thinking of what to work on next for OpenRISC, there is:
>
> - kexec
> - jump_label
> - kprobes
> - perf_events
> - ftrace
Is the virtio task [2] also still a part of the roadmap? I can't find that either
in the TODO list.
> [1] https://github.com/stffrdhrn/or1k-utils
> [2] https://docs.kernel.org/arch/openrisc/openrisc_port.html
> [3] https://wiki.qemu.org/Documentation/Platforms/OpenRISC
> [...]
Thanks,
Sahil
[1] https://docs.kernel.org/arch/openrisc/features.html
[2] https://fossi-foundation.org/gsoc/gsoc24-ideas#openrisc-linux-feature-development
* Re: Contributing to OpenRISC Linux
2025-01-13 6:12 ` Sahil Siddiq
@ 2025-01-13 6:31 ` Stafford Horne
2025-01-22 18:55 ` Sahil Siddiq
0 siblings, 1 reply; 11+ messages in thread
From: Stafford Horne @ 2025-01-13 6:31 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: Linux OpenRISC
On Mon, Jan 13, 2025 at 11:42:08AM +0530, Sahil Siddiq wrote:
> Hi,
>
> Thank you for your reply.
>
> On 1/12/25 12:58 PM, Stafford Horne wrote:
> > Hi Sunil,
> >
> > +CC List
> >
> > yes, the cacheinfo task is still open. There are many things that are still not
> > implemented in OpenRISC, you can always just look under the kernel
> > Documentation/features.
> >
> > For example:
> >
> > < shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
> > Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
> > Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
> > Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
> > Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
> > Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
> > ...
>
> Got it. I did find this list in the online documentation [1] but I couldn't find
> the cacheinfo task listed there.
Right, not all features have config flags that are documented.
Cacheinfo is implemented by overriding some weak symbols.
> > How far have you come with OpenRISC so far? If you haven't already I suggest
> > working through:
> >
> > - Get a simulator, I use QEMU for most development as it's faster and supports
> > more memory than most FPGA. Final verification can be done on an FPGA.
> > - Get a working compiler toolchain.
> > - Compile and boot the openrisc kernel.
> > - Build a userspace environment, either buildroot, toybox or busybox.
> >
> > I have some tools to help with this in or1k-utils [1], also there are prebuilt
> > environments and docs in the linux kernel [2] and qemu [3].
>
> I don't have an environment set up yet. I'll start with the steps above. I'll use
> QEMU for development. I don't have an FPGA with me currently.
>
> > At the moment, I am also thinking of what to work on next for OpenRISC, there is:
> >
> > - kexec
> > - jump_label
> > - kprobes
> > - perf_events
> > - ftrace
>
> Is the virtio task [2] also still a part of the roadmap? I can't find that either
> in the TODO list.
The virtio task is still possible but will be more advanced and may need some
architecture changes to support hypervisors.
-Stafford
> > [1] https://github.com/stffrdhrn/or1k-utils
> > [2] https://docs.kernel.org/arch/openrisc/openrisc_port.html
> > [3] https://wiki.qemu.org/Documentation/Platforms/OpenRISC
> > [...]
>
> Thanks,
> Sahil
>
> [1] https://docs.kernel.org/arch/openrisc/features.html
> [2] https://fossi-foundation.org/gsoc/gsoc24-ideas#openrisc-linux-feature-development
>
* Re: Contributing to OpenRISC Linux
2025-01-13 6:31 ` Stafford Horne
@ 2025-01-22 18:55 ` Sahil Siddiq
2025-01-25 7:30 ` Stafford Horne
0 siblings, 1 reply; 11+ messages in thread
From: Sahil Siddiq @ 2025-01-22 18:55 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
On 1/13/25 12:01 PM, Stafford Horne wrote:
> On Mon, Jan 13, 2025 at 11:42:08AM +0530, Sahil Siddiq wrote:
>> Hi,
>>
>> Thank you for your reply.
>>
>> On 1/12/25 12:58 PM, Stafford Horne wrote:
>>> Hi Sunil,
>>>
>>> +CC List
>>>
>>> yes, the cacheinfo task is still open. There are many things that are still not
>>> implemented in OpenRISC, you can always just look under the kernel
>>> Documentation/features.
>>>
>>> For example:
>>>
>>> < shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
>>> Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
>>> Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
>>> Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
>>> Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
>>> Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
>>> ...
>>
>> Got it. I did find this list in the online documentation [1] but I couldn't find
>> the cacheinfo task listed there.
>
> Right, not all features have config flags that are documented.
>
Understood. Do you know how one finds these flags? I wasn't able
to find much related to cpu or caches in arch/openrisc/Kconfig [1].
I did find the usage of ARCH_HAS_CPU_CACHE_ALIASING in
include/linux/cacheinfo.h [2]. I am not sure if this is relevant.
>>> How far have you come with OpenRISC so far? If you haven't already I suggest
>>> working through:
>>>
>>> - Get a simulator, I use QEMU for most development as it's faster and supports
>>> more memory than most FPGA. Final verification can be done on an FPGA.
>>> - Get a working compiler toolchain.
>>> - Compile and boot the openrisc kernel.
>>> - Build a userspace environment, either buildroot, toybox or busybox.
>>>
>>> I have some tools to help with this in or1k-utils [1], also there are prebuilt
>>> environments and docs in the linux kernel [2] and qemu [3].
>>
>> I don't have an environment set up yet. I'll start with the steps above. I'll use
>> QEMU for development. I don't have an FPGA with me currently.
I have set up a fairly basic environment. I built both QEMU and
openrisc-linux from the master branch. I used a prebuilt compiler
toolchain to build openrisc-linux and busybox, and manually
created an initramfs image. I used the default configuration
options to build linux.
The userspace environment only has utilities that are provided
by busybox. Only the following filesystems have been mounted -
rootfs, devtmpfs, sys, proc and tmpfs.
I tried to understand the workings of some of the scripts in
or1k-utils. There were a few things that I didn't understand
and I'll need some more time to wrap my head around them. I
don't think this should hinder the cacheinfo task though.
Is there anything else that I'll need to set up in the environment
before progressing?
I don't see any cache-related info in /sys. Based on what I have
understood, it'll be possible to fetch these details once cacheinfo
is supported.
>>> At the moment, I am also thinking of what to work on next for OpenRISC, there is:
>>>
>>> - kexec
>>> - jump_label
>>> - kprobes
>>> - perf_events
>>> - ftrace
>>
>> Is the virtio task [2] also still a part of the roadmap? I can't find that either
>> in the TODO list.
>
> The virtio task is still possible but will be more advanced and may need some
> architecture changes to support hypervisors.
Got it.
Thanks,
Sahil
[1] https://github.com/torvalds/linux/blob/master/arch/openrisc/Kconfig
[2] https://github.com/torvalds/linux/blob/master/include/linux/cacheinfo.h#L156
* Re: Contributing to OpenRISC Linux
2025-01-22 18:55 ` Sahil Siddiq
@ 2025-01-25 7:30 ` Stafford Horne
2025-01-25 10:53 ` Stafford Horne
2025-01-26 19:54 ` Sahil Siddiq
0 siblings, 2 replies; 11+ messages in thread
From: Stafford Horne @ 2025-01-25 7:30 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: Linux OpenRISC
On Thu, Jan 23, 2025 at 12:25:59AM +0530, Sahil Siddiq wrote:
> Hi,
>
> On 1/13/25 12:01 PM, Stafford Horne wrote:
> > On Mon, Jan 13, 2025 at 11:42:08AM +0530, Sahil Siddiq wrote:
> > > Hi,
> > >
> > > Thank you for your reply.
> > >
> > > On 1/12/25 12:58 PM, Stafford Horne wrote:
> > > > Hi Sunil,
> > > >
> > > > +CC List
> > > >
> > > > yes, the cacheinfo task is still open. There are many things that are still not
> > > > implemented in OpenRISC, you can always just look under the kernel
> > > > Documentation/features.
> > > >
> > > > For example:
> > > >
> > > > < shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
> > > > Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
> > > > Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
> > > > Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
> > > > Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
> > > > Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
> > > > ...
> > >
> > > Got it. I did find this list in the online documentation [1] but I couldn't find
> > > the cacheinfo task listed there.
> >
> > Right, not all features have config flags that are documented.
> >
>
> Understood. Do you know how one finds these flags? I wasn't able
> to find much related to cpu or caches in arch/openrisc/Kconfig [1].
> I did find the usage of ARCH_HAS_CPU_CACHE_ALIASING in
> include/linux/cacheinfo.h [2]. I am not sure if this is relevant.
Not all features are enabled using ARCH_ flags. The cacheinfo apis are always
enabled and compiled, even for or1k now. But if we look, we see the default
implementation is defined with __weak symbols:
drivers/base/cacheinfo.c
include/linux/cacheinfo.h
int __weak init_cache_level(unsigned int cpu)
{
return -ENOENT;
}
int __weak populate_cache_leaves(unsigned int cpu)
{
return -ENOENT;
}
In order for us to add support I think all we will need to do is define these
functions under arch/openrisc.
We can look to cpuinfo_or1k for how, in openrisc, we can pull cache details from
the UPR registers (we can maybe just get the info from the cpuinfo_or1k
structure):
arch/openrisc/kernel/setup.c
You can read about the background of the cacheinfo work and motivations in the
original patch series:
https://lore.kernel.org/all/1412084912-2767-1-git-send-email-sudeep.holla@arm.com/
> > > > How far have you come with OpenRISC so far? If you haven't already I suggest
> > > > working through:
> > > >
> > > > - Get a simulator, I use QEMU for most development as it's faster and supports
> > > > more memory than most FPGA. Final verification can be done on an FPGA.
> > > > - Get a working compiler toolchain.
> > > > - Compile and boot the openrisc kernel.
> > > > - Build a userspace environment, either buildroot, toybox or busybox.
> > > >
> > > > I have some tools to help with this in or1k-utils [1], also there are prebuilt
> > > > environments and docs in the linux kernel [2] and qemu [3].
> > >
> > > I don't have an environment set up yet. I'll start with the steps above. I'll use
> > > QEMU for development. I don't have an FPGA with me currently.
>
> I have set up a fairly basic environment. I built both QEMU and
> openrisc-linux from the master branch. I used a prebuilt compiler
> toolchain to build openrisc-linux and busybox, and manually
> created an initramfs image. I used the default configuration
> options to build linux.
>
> The userspace environment only has utilities that are provided
> by busybox. Only the following filesystems have been mounted -
> rootfs, devtmpfs, sys, proc and tmpfs.
>
> I tried to understand the workings of some of the scripts in
> or1k-utils. There were a few things that I didn't understand
> and I'll need some more time to wrap my head around them. I
> don't think this should hinder the cacheinfo task though.
>
> Is there anything else that I'll need to set up in the environment
> before progressing?
I think it's a good start. As long as /sys is available in your environment
it should be enough for you to test your changes.
> I don't see any cache-related info in /sys. Based on what I have
> understood, it'll be possible to fetch these details once cacheinfo
> is supported.
On my x86 machine I see:
$ tree /sys/devices/system/cpu/cpu0/cache/
/sys/devices/system/cpu/cpu0/cache/
├── index0
│ ├── coherency_line_size
│ ├── id
│ ├── level
│ ├── number_of_sets
│ ├── physical_line_partition
│ ├── shared_cpu_list
On or1k I only see (no cache info yet):
$ tree /sys/devices/system/cpu/cpu0/
/sys/devices/system/cpu/cpu0/
|-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
|-- subsystem -> ../../../../bus/cpu
|-- topology
| |-- core_cpus
| |-- core_cpus_list
| |-- core_id
| |-- core_siblings
| |-- core_siblings_list
| |-- package_cpus
| |-- package_cpus_list
| |-- physical_package_id
| |-- thread_siblings
| `-- thread_siblings_list
`-- uevent
But we do have some cache info available via cpuinfo:
$ cat /proc/cpuinfo | head -n16
processor : 0
cpu : OpenRISC-13
revision : 8
frequency : 20000000
dcache size : 256 bytes
dcache block size : 16 bytes
dcache ways : 1
icache size : 32 bytes
icache block size : 16 bytes
icache ways : 2
immu : 128 entries, 1 ways
dmmu : 128 entries, 1 ways
bogomips : 40.00
features : orbis32 orfpx32
> > > > At the moment, I am also thinking of what to work on next for OpenRISC, there is:
> > > >
> > > > - kexec
> > > > - jump_label
> > > > - kprobes
> > > > - perf_events
> > > > - ftrace
> > >
> > > Is the virtio task [2] also still a part of the roadmap? I can't find that either
> > > in the TODO list.
> >
> > The virtio task is still possible but will be more advanced and may need some
> > architecture changes to support hypervisors.
>
> Got it.
I am slowly working on kexec support right now.
-Stafford
* Re: Contributing to OpenRISC Linux
2025-01-25 7:30 ` Stafford Horne
@ 2025-01-25 10:53 ` Stafford Horne
2025-01-26 19:54 ` Sahil Siddiq
1 sibling, 0 replies; 11+ messages in thread
From: Stafford Horne @ 2025-01-25 10:53 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: Linux OpenRISC
On Sat, Jan 25, 2025 at 07:30:43AM +0000, Stafford Horne wrote:
> On Thu, Jan 23, 2025 at 12:25:59AM +0530, Sahil Siddiq wrote:
> > Hi,
> >
> > On 1/13/25 12:01 PM, Stafford Horne wrote:
> > > On Mon, Jan 13, 2025 at 11:42:08AM +0530, Sahil Siddiq wrote:
> > > > Hi,
> > > >
> > > > Thank you for your reply.
> > > >
> > > > On 1/12/25 12:58 PM, Stafford Horne wrote:
> > > > > Hi Sunil,
> > > > >
> > > > > +CC List
> > > > >
> > > > > yes, the cacheinfo task is still open. There are many things that are still not
> > > > > implemented in OpenRISC, you can always just look under the kernel
> > > > > Documentation/features.
> > > > >
> > > > > For example:
> > > > >
> > > > > < shorne@antec ~/work/linux > grep -r -e openrisc.*TODO Documentation/features | column -t
> > > > > Documentation/features/vm/huge-vmap/arch-support.txt: | openrisc: | TODO |
> > > > > Documentation/features/vm/ELF-ASLR/arch-support.txt: | openrisc: | TODO |
> > > > > Documentation/features/vm/ioremap_prot/arch-support.txt: | openrisc: | TODO |
> > > > > Documentation/features/vm/pte_special/arch-support.txt: | openrisc: | TODO |
> > > > > Documentation/features/perf/kprobes-event/arch-support.txt: | openrisc: | TODO |
> > > > > ...
> > > >
> > > > Got it. I did find this list in the online documentation [1] but I couldn't find
> > > > the cacheinfo task listed there.
> > >
> > > Right, not all features have config flags that are documented.
> > >
> >
> > Understood. Do you know how one finds these flags? I wasn't able
> > to find much related to cpu or caches in arch/openrisc/Kconfig [1].
> > I did find the usage of ARCH_HAS_CPU_CACHE_ALIASING in
> > include/linux/cacheinfo.h [2]. I am not sure if this is relevant.
>
> Not all features are enabled using ARCH_ flags. The cacheinfo apis are always
> enabled and compiled, even for or1k now. But if we look, we see the default
> implementation is defined with __weak symbols:
>
> drivers/base/cacheinfo.c
> include/linux/cacheinfo.h
>
> int __weak init_cache_level(unsigned int cpu)
> {
> return -ENOENT;
> }
>
> int __weak populate_cache_leaves(unsigned int cpu)
> {
> return -ENOENT;
> }
>
> In order for us to add support I think all we will need to do is define these
> functions under arch/openrisc.
>
> We can look to cpuinfo_or1k for how, in openrisc, we can pull cache details from
> the UPR registers (we can maybe just get the info from the cpuinfo_or1k
> structure):
>
> arch/openrisc/kernel/setup.c
>
> You can read about the background of the cacheinfo work and motivations in the
> original patch series:
>
> https://lore.kernel.org/all/1412084912-2767-1-git-send-email-sudeep.holla@arm.com/
Hi Sahil,
Also please check these comments [0] from Sudeep Holla and Stefan Kristiansson.
>> 14/03/17 13:11, Stefan Kristiansson wrote:
>>> On Tue, Mar 14, 2017 at 12:08:33PM +0000, Sudeep Holla wrote:
>>>> On Tue, Feb 21, 2017 at 7:11 PM, Stafford Horne <shorne@gmail.com> wrote:
>>>>> From: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
>>>>>
>>>>> Motivation for this is to be able to print the way information
>>>>> properly in print_cpuinfo(), instead of hardcoding it to one.
>>>>>
>>>>
>>>> Any particular reason not to use generic cacheinfo sysfs infrastructure ?
>>>>
>>>
>>> No reason as far as I can see, the creation of this patch predates the
>>> generic cacheinfo sysfs infrastructure.
>>>
>>> The patch itself doesn't add cache information to cpuinfo though,
>>> only corrects a bug in the information that is already there.
>>>
>>> We should look into exposing the info in the generic cache info sysfs
>>> and potentially removing the information from cpuinfo.
-Stafford
[0] https://lkml.org/lkml/2017/3/14/639
> > > > > How far have you come with OpenRISC so far? If you haven't already I suggest
> > > > > working through:
> > > > >
> > > > > - Get a simulator, I use QEMU for most development as it's faster and supports
> > > > > more memory than most FPGA. Final verification can be done on an FPGA.
> > > > > - Get a working compiler toolchain.
> > > > > - Compile and boot the openrisc kernel.
> > > > > - Build a userspace environment, either buildroot, toybox or busybox.
> > > > >
> > > > > I have some tools to help with this in or1k-utils [1], also there are prebuilt
> > > > > environments and docs in the linux kernel [2] and qemu [3].
> > > >
> > > > I don't have an environment set up yet. I'll start with the steps above. I'll use
> > > > QEMU for development. I don't have an FPGA with me currently.
> >
> > I have set up a fairly basic environment. I built both QEMU and
> > openrisc-linux from the master branch. I used a prebuilt compiler
> > toolchain to build openrisc-linux and busybox, and manually
> > created an initramfs image. I used the default configuration
> > options to build linux.
> >
> > The userspace environment only has utilities that are provided
> > by busybox. Only the following filesystems have been mounted -
> > rootfs, devtmpfs, sys, proc and tmpfs.
> >
> > I tried to understand the workings of some of the scripts in
> > or1k-utils. There were a few things that I didn't understand
> > and I'll need some more time to wrap my head around them. I
> > don't think this should hinder the cacheinfo task though.
> >
> > Is there anything else that I'll need to set up in the environment
> > before progressing?
>
> I think it's a good start. As long as /sys is available in your environment
> it should be enough for you to test your changes.
>
> > I don't see any cache-related info in /sys. Based on what I have
> > understood, it'll be possible to fetch these details once cacheinfo
> > is supported.
>
> On my x86 machine I see:
>
> $ tree /sys/devices/system/cpu/cpu0/cache/
> /sys/devices/system/cpu/cpu0/cache/
> ├── index0
> │ ├── coherency_line_size
> │ ├── id
> │ ├── level
> │ ├── number_of_sets
> │ ├── physical_line_partition
> │ ├── shared_cpu_list
>
> On or1k I only see (no cache info yet):
>
> $ tree /sys/devices/system/cpu/cpu0/
> /sys/devices/system/cpu/cpu0/
> |-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
> |-- subsystem -> ../../../../bus/cpu
> |-- topology
> | |-- core_cpus
> | |-- core_cpus_list
> | |-- core_id
> | |-- core_siblings
> | |-- core_siblings_list
> | |-- package_cpus
> | |-- package_cpus_list
> | |-- physical_package_id
> | |-- thread_siblings
> | `-- thread_siblings_list
> `-- uevent
>
> But we do have some cache info available via cpuinfo:
>
> $ cat /proc/cpuinfo | head -n16
> processor : 0
> cpu : OpenRISC-13
> revision : 8
> frequency : 20000000
> dcache size : 256 bytes
> dcache block size : 16 bytes
> dcache ways : 1
> icache size : 32 bytes
> icache block size : 16 bytes
> icache ways : 2
> immu : 128 entries, 1 ways
> dmmu : 128 entries, 1 ways
> bogomips : 40.00
> features : orbis32 orfpx32
>
> > > > > At the moment, I am also thinking of what to work on next for OpenRISC, there is:
> > > > >
> > > > > - kexec
> > > > > - jump_label
> > > > > - kprobes
> > > > > - perf_events
> > > > > - ftrace
> > > >
> > > > Is the virtio task [2] also still a part of the roadmap? I can't find that either
> > > > in the TODO list.
> > >
> > > The virtio task is still possible but will be more advanced and may need some
> > > architecture changes to support hypervisors.
> >
> > Got it.
>
> I am slowly working on kexec support right now.
>
> -Stafford
* Re: Contributing to OpenRISC Linux
2025-01-25 7:30 ` Stafford Horne
2025-01-25 10:53 ` Stafford Horne
@ 2025-01-26 19:54 ` Sahil Siddiq
2025-02-12 15:29 ` Queries regarding OpenRISC CPU cache Sahil Siddiq
1 sibling, 1 reply; 11+ messages in thread
From: Sahil Siddiq @ 2025-01-26 19:54 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
On 1/25/25 1:00 PM, Stafford Horne wrote:
> On Thu, Jan 23, 2025 at 12:25:59AM +0530, Sahil Siddiq wrote:
> [...]
>>>> Got it. I did find this list in the online documentation [1] but I couldn't find
>>>> the cacheinfo task listed there.
>>>
>>> Right, not all features have config flags that are documented.
>>>
>>
>> Understood. Do you know how one finds these flags? I wasn't able
>> to find much related to cpu or caches in arch/openrisc/Kconfig [1].
>> I did find the usage of ARCH_HAS_CPU_CACHE_ALIASING in
>> include/linux/cacheinfo.h [2]. I am not sure if this is relevant.
>
> Not all features are enabled using ARCH_ flags. The cacheinfo apis are always
> enabled and compiled, even for or1k now. But if we look, we see the default
> implementation is defined with __weak symbols:
>
> drivers/base/cacheinfo.c
> include/linux/cacheinfo.h
>
> int __weak init_cache_level(unsigned int cpu)
> {
> return -ENOENT;
> }
>
> int __weak populate_cache_leaves(unsigned int cpu)
> {
> return -ENOENT;
> }
>
Got it, this makes sense now.
> In order for us to add support I think all we will need to do is define these
> functions under arch/openrisc.
>
> We can look to cpuinfo_or1k for how, in openrisc, we can pull cache details from
> the UPR registers (we can maybe just get the info from the cpuinfo_or1k
> structure):
>
> arch/openrisc/kernel/setup.c
>
> You can read about the background of the cacheinfo work and motivations in the
> original patch series:
>
> https://lore.kernel.org/all/1412084912-2767-1-git-send-email-sudeep.holla@arm.com/
Thank you for the link. I'll go through this patch series. I am beginning
to get an idea of how this has been implemented in other architectures as
well.
On 1/25/25 4:23 PM, Stafford Horne wrote:
> Hi Sahil,
>
> Also please check these comments [0] from Sudeep Holla and Stefan Kristiansson.
>
> >> 14/03/17 13:11, Stefan Kristiansson wrote:
> >>> On Tue, Mar 14, 2017 at 12:08:33PM +0000, Sudeep Holla wrote:
> >>>> On Tue, Feb 21, 2017 at 7:11 PM, Stafford Horne <shorne@gmail.com> wrote:
> >>>>> From: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
> >>>>>
> >>>>> Motivation for this is to be able to print the way information
> >>>>> properly in print_cpuinfo(), instead of hardcoding it to one.
> >>>>>
> >>>>
> >>>> Any particular reason not to use generic cacheinfo sysfs infrastructure ?
> >>>>
> >>>
> >>> No reason as far as I can see, the creation of this patch predates the
> >>> generic cacheinfo sysfs infrastructure.
> >>>
> >>> The patch itself doesn't add cache information to cpuinfo though,
> >>> only corrects a bug in the information that is already there.
> >>>
> >>> We should look into exposing the info in the generic cache info sysfs
> >>> and potentially removing the information from cpuinfo.
>
> -Stafford
>
> [0] https://lkml.org/lkml/2017/3/14/639
>
Sure. I'll go through the aforementioned patch series and comments.
>> [...]
>> I have set up a fairly basic environment. I built both QEMU and
>> openrisc-linux from the master branch. I used a prebuilt compiler
>> toolchain to build openrisc-linux and busybox, and manually
>> created an initramfs image. I used the default configuration
>> options to build linux.
>>
>> The userspace environment only has utilities that are provided
>> by busybox. Only the following filesystems have been mounted -
>> rootfs, devtmpfs, sys, proc and tmpfs.
>>
>> I tried to understand the workings of some of the scripts in
>> or1k-utils. There were a few things that I didn't understand
>> and I'll need some more time to wrap my head around them. I
>> don't think this should hinder the cacheinfo task though.
>>
>> Is there anything else that I'll need to set up in the environment
>> before progressing?
>
> I think it's a good start. As long as /sys is available in your environment
> it should be enough for you to test your changes.
Understood.
>> I don't see any cache-related info in /sys. Based on what I have
>> understood, it'll be possible to fetch these details once cacheinfo
>> is supported.
>
> On my x86 machine I see:
>
> $ tree /sys/devices/system/cpu/cpu0/cache/
> /sys/devices/system/cpu/cpu0/cache/
> ├── index0
> │ ├── coherency_line_size
> │ ├── id
> │ ├── level
> │ ├── number_of_sets
> │ ├── physical_line_partition
> │ ├── shared_cpu_list
>
> On or1k I only see (no cache info yet):
>
> $ tree /sys/devices/system/cpu/cpu0/
> /sys/devices/system/cpu/cpu0/
> |-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
> |-- subsystem -> ../../../../bus/cpu
> |-- topology
> | |-- core_cpus
> | |-- core_cpus_list
> | |-- core_id
> | |-- core_siblings
> | |-- core_siblings_list
> | |-- package_cpus
> | |-- package_cpus_list
> | |-- physical_package_id
> | |-- thread_siblings
> | `-- thread_siblings_list
> `-- uevent
>
> But we do have some cache info available via cpuinfo:
>
> $ cat /proc/cpuinfo | head -n16
> processor : 0
> cpu : OpenRISC-13
> revision : 8
> frequency : 20000000
> dcache size : 256 bytes
> dcache block size : 16 bytes
> dcache ways : 1
> icache size : 32 bytes
> icache block size : 16 bytes
> icache ways : 2
> immu : 128 entries, 1 ways
> dmmu : 128 entries, 1 ways
> bogomips : 40.00
> features : orbis32 orfpx32
Ah yes. I missed this. I see this in my environment as well.
>>>>> At the moment, I am also thinking of what to work on next for OpenRISC, there is:
>>>>>
>>>>> - kexec
>>>>> - jump_label
>>>>> - kprobes
>>>>> - perf_events
>>>>> - ftrace
>>>>
>>>> Is the virtio task [2] also still a part of the roadmap? I can't find that either
>>>> in the TODO list.
>>>
>>> The virtio task is still possible but will be more advanced and may need some
>>> architecture changes to support hypervisors.
>>
>> Got it.
>
> I am slowly working on kexec support right now.
>
Understood. Once cacheinfo is implemented, I would like to pick one of the
other tasks as well :)
Thanks,
Sahil
^ permalink raw reply [flat|nested] 11+ messages in thread
* Queries regarding OpenRISC CPU cache
2025-01-26 19:54 ` Sahil Siddiq
@ 2025-02-12 15:29 ` Sahil Siddiq
2025-02-18 19:38 ` Sahil Siddiq
0 siblings, 1 reply; 11+ messages in thread
From: Sahil Siddiq @ 2025-02-12 15:29 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
I have started working on implementing cacheinfo support for OpenRISC, and
I have got a few queries.
Q1.
Does OpenRISC support multilevel cache hierarchy? I couldn't find anything
in the architecture manual [1] mentioning the possibility of a multilevel
cache hierarchy. I wasn't able to find anything in the verilog implementation
either [2].
Q2.
In /sys/devices/system/cpu/cpuX/cache, each indexY directory corresponds
to a CPU cache. According to the OpenRISC 1000 Architecture specification [1],
it is possible to have a unified cache. In such a case, registers controlling
the instruction and data cache represent the same thing (section 9.1 of the
manual).
Consequently, should there be only one index (i.e. index0), or should there
be 2 index directories (index0 and index1) with identical content? If a unified
cache is used, will the DCP and ICP bits both be set in the UPR register?
Q3.
Cache-related arithmetic performed in arch/openrisc/kernel/setup.c:setup_cpuinfo() [3]
cannot be avoided. But we can expose this info in sysfs instead of cpuinfo as
specified in this mail [4]. So, I was thinking of moving these calculations to
arch/openrisc/kernel/cacheinfo.c in init_cache_level().
What are your thoughts on this?
Another thing I noticed in the current implementation is that these calculations
are performed for icache and dcache before using the UPR register to check if
they even exist. Should these calculations instead be performed after determining
that they exist?
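To illustrate the gating I have in mind, here is a standalone userspace sketch. The bit positions follow my reading of the arch manual (NCW in DCCFGR bits 2:0, NCS in bits 6:3, CBS in bit 7); the mask names are made up for this snippet rather than taken from the kernel's SPR headers:

```c
/* Sketch: derive data-cache geometry from a DCCFGR-style value, but only
 * after the UPR presence bit confirms the cache exists.  Constants here
 * are stand-ins modelled on the arch manual, not the kernel's SPR_* set. */
#include <stdint.h>

#define UPR_DCP         0x002u   /* data cache present */
#define DCCFGR_NCW_MASK 0x07u    /* ways = 2^NCW, bits 2:0 */
#define DCCFGR_NCS_MASK 0x78u    /* sets = 2^NCS, bits 6:3 */
#define DCCFGR_CBS_BIT  0x80u    /* block size: 0 -> 16 B, 1 -> 32 B */

struct cache_geom {
	uint32_t ways, sets, block_size, size;
};

/* Returns 0 and skips all arithmetic when UPR says there is no dcache. */
static int dcache_geom(uint32_t upr, uint32_t dccfgr, struct cache_geom *g)
{
	if (!(upr & UPR_DCP))
		return 0;        /* e.g. upr == 0x719: DCP clear, no dcache */
	g->ways = 1u << (dccfgr & DCCFGR_NCW_MASK);
	g->sets = 1u << ((dccfgr & DCCFGR_NCS_MASK) >> 3);
	g->block_size = (dccfgr & DCCFGR_CBS_BIT) ? 32 : 16;
	g->size = g->ways * g->sets * g->block_size;
	return 1;
}
```

With DCP set and dccfgr = 0x20 this gives 1 way, 16 sets, 16-byte blocks, 256 bytes total, matching the dcache line in the cpuinfo output quoted earlier.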
Thanks,
Sahil
[1] https://openrisc.io/or1k.html
[2] https://github.com/openrisc/mor1kx
[3] https://github.com/openrisc/linux/blob/for-next/arch/openrisc/kernel/setup.c#L155
[4] https://lkml.org/lkml/2017/3/14/620
* Re: Queries regarding OpenRISC CPU cache
2025-02-12 15:29 ` Queries regarding OpenRISC CPU cache Sahil Siddiq
@ 2025-02-18 19:38 ` Sahil Siddiq
2025-02-20 6:12 ` Stafford Horne
0 siblings, 1 reply; 11+ messages in thread
From: Sahil Siddiq @ 2025-02-18 19:38 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
On 2/12/25 8:59 PM, Sahil Siddiq wrote:
> Hi,
>
> I have started working on implementing cacheinfo support for OpenRISC, and
> I have got a few queries.
>
> Q1.
> Does OpenRISC support multilevel cache hierarchy? I couldn't find anything
> in the architecture manual [1] mentioning the possibility of a multilevel
> cache hierarchy. I wasn't able to find anything in the verilog implementation
> either [2].
>
> Q2.
> In /sys/devices/system/cpu/cpuX/cache, each indexY directory corresponds
> to a CPU cache. According to the OpenRISC 1000 Architecture specification [1],
> it is possible to have a unified cache. In such a case, registers controlling
> the instruction and data cache represent the same thing (section 9.1 of the
> manual).
>
> Consequently, should there be only one index (i.e. index0), or should there
> be 2 index directories (index0 and index1) with identical content? If a unified
> cache is used, will the DCP and ICP bits both be set in the UPR register?
>
> Q3.
> Cache-related arithmetic performed in arch/openrisc/kernel/setup.c:setup_cpuinfo() [3]
> cannot be avoided. But we can expose this info in sysfs instead of cpuinfo as
> specified in this mail [4]. So, I was thinking of moving these calculations to
> arch/openrisc/kernel/cacheinfo.c in init_cache_level().
>
> What are your thoughts on this?
>
> Another thing I noticed in the current implementation is that these calculations
> are performed for icache and dcache before using the UPR register to check if
> they even exist. Should these calculations instead be performed after determining
> that they exist?
> [...]
Building on my previous mail, I noticed another thing:
On 1/25/25 1:00 PM, Stafford Horne wrote:
> On my x86 machine I see:
>
> $ tree /sys/devices/system/cpu/cpu0/cache/
> /sys/devices/system/cpu/cpu0/cache/
> ├── index0
> │ ├── coherency_line_size
> │ ├── id
> │ ├── level
> │ ├── number_of_sets
> │ ├── physical_line_partition
> │ ├── shared_cpu_list
>
> On or1k I only see (no cache info yet):
>
> $ tree /sys/devices/system/cpu/cpu0/
> /sys/devices/system/cpu/cpu0/
> |-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
> |-- subsystem -> ../../../../bus/cpu
> |-- topology
> | |-- core_cpus
> | |-- core_cpus_list
> | |-- core_id
> | |-- core_siblings
> | |-- core_siblings_list
> | |-- package_cpus
> | |-- package_cpus_list
> | |-- physical_package_id
> | |-- thread_siblings
> | `-- thread_siblings_list
> `-- uevent
>
> But we do have some cache info available via cpuinfo:
>
> $ cat /proc/cpuinfo | head -n16
> processor : 0
> cpu : OpenRISC-13
> revision : 8
> frequency : 20000000
> dcache size : 256 bytes
> dcache block size : 16 bytes
> dcache ways : 1
> icache size : 32 bytes
> icache block size : 16 bytes
> icache ways : 2
> immu : 128 entries, 1 ways
> dmmu : 128 entries, 1 ways
> bogomips : 40.00
> features : orbis32 orfpx32
>
While I can see icache and dcache info in /proc/cpuinfo, dmesg reports
that the caches have been disabled.
OF: reserved mem: Reserved memory: No reserved-memory node in the DT
CPU: OpenRISC-13 (revision 8) @20 MHz
-- dcache disabled
-- icache disabled
-- dmmu: 128 entries, 1 way(s)
-- immu: 128 entries, 1 way(s)
-- additional features:
-- power management
-- PIC
-- timer
This is because the value of upr is 0x719 and the value of SPR_UPR_DCP is
0x2. ANDing both values gives 0 which, according to the arch manual [1], means
there's no dcache. The same thing applies to icache.
Thanks,
Sahil
[1] https://raw.githubusercontent.com/openrisc/doc/master/openrisc-arch-1.4-rev0.pdf (Section 16.3)
* Re: Queries regarding OpenRISC CPU cache
2025-02-18 19:38 ` Sahil Siddiq
@ 2025-02-20 6:12 ` Stafford Horne
2025-02-22 11:35 ` Sahil Siddiq
0 siblings, 1 reply; 11+ messages in thread
From: Stafford Horne @ 2025-02-20 6:12 UTC (permalink / raw)
To: Sahil Siddiq; +Cc: Linux OpenRISC
Hi Sahil,
Sorry I missed replying to your last mail. I hope the below answers help.
On Wed, Feb 19, 2025 at 01:08:18AM +0530, Sahil Siddiq wrote:
> Hi,
>
> On 2/12/25 8:59 PM, Sahil Siddiq wrote:
> > Hi,
> >
> > I have started working on implementing cacheinfo support for OpenRISC, and
> > I have got a few queries.
> >
> > Q1.
> > Does OpenRISC support multilevel cache hierarchy? I couldn't find anything
> > in the architecture manual [1] mentioning the possibility of a multilevel
> > cache hierarchy. I wasn't able to find anything in the verilog implementation
> > either [2].
Currently all implementations support only a single level of cache. I think if
we ever support multiple levels, we could use the device tree to define L2
caches, etc.
See for example:
Documentation/devicetree/bindings/cache/freescale-l2cache.txt
These are the device tree bindings for a freescale l2cache.
> > Q2.
> > In /sys/devices/system/cpu/cpuX/cache, each indexY directory corresponds
> > to a CPU cache. According to the OpenRISC 1000 Architecture specification [1],
> > it is possible to have a unified cache. In such a case, registers controlling
> > the instruction and data cache represent the same thing (section 9.1 of the
> > manual).
> >
> > Consequently, should there be only one index (i.e. index0), or should there
> > be 2 index directories (index0 and index1) with identical content? If a unified
> > cache is used, will the DCP and ICP bits both be set in the UPR register?
If we have a unified cache, I think there is actually no way to tell it's unified
from the configuration registers. So I think we have to treat it as if we have
both iCache and dCache.
So we should have index0 and index1.
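As a rough model of what I mean (the enum and struct are simplified stand-ins for the kernel's <linux/cacheinfo.h> types, and which sysfs index ends up as which type depends on enumeration order):

```c
/* Sketch: since the OR1K config registers cannot distinguish a unified
 * cache, a cacheinfo backend would always report two level-1 leaves,
 * one instruction and one data, giving index0 and index1 in sysfs. */
enum cache_type { CACHE_TYPE_INST, CACHE_TYPE_DATA };

struct leaf {
	enum cache_type type;
	unsigned int level;
};

/* Fills leaves[0..1]; returns the leaf count (always 2 on or1k). */
static int or1k_populate_leaves(struct leaf leaves[2])
{
	leaves[0] = (struct leaf){ CACHE_TYPE_INST, 1 };  /* -> one index dir */
	leaves[1] = (struct leaf){ CACHE_TYPE_DATA, 1 };  /* -> the other */
	return 2;
}
```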
> > Q3.
> > Cache-related arithmetic performed in arch/openrisc/kernel/setup.c:setup_cpuinfo() [3]
> > cannot be avoided. But we can expose this info in sysfs instead of cpuinfo as
> > specified in this mail [4]. So, I was thinking of moving these calculations to
> > arch/openrisc/kernel/cacheinfo.c in init_cache_level().
> >
> > What are your thoughts on this?
Yes, I think so too.
> > Another thing I noticed in the current implementation is that these calculations
> > are performed for icache and dcache before using the UPR register to check if
> > they even exist. Should these calculations instead be performed after determining
> > that they exist?
> > [...]
Yes, that would be better.
> Building on my previous mail, I noticed another thing:
>
> On 1/25/25 1:00 PM, Stafford Horne wrote:
> > On my x86 machine I see:
> >
> > $ tree /sys/devices/system/cpu/cpu0/cache/
> > /sys/devices/system/cpu/cpu0/cache/
> > ├── index0
> > │ ├── coherency_line_size
> > │ ├── id
> > │ ├── level
> > │ ├── number_of_sets
> > │ ├── physical_line_partition
> > │ ├── shared_cpu_list
> >
> > On or1k I only see (no cache info yet):
> >
> > $ tree /sys/devices/system/cpu/cpu0/
> > /sys/devices/system/cpu/cpu0/
> > |-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
> > |-- subsystem -> ../../../../bus/cpu
> > |-- topology
> > | |-- core_cpus
> > | |-- core_cpus_list
> > | |-- core_id
> > | |-- core_siblings
> > | |-- core_siblings_list
> > | |-- package_cpus
> > | |-- package_cpus_list
> > | |-- physical_package_id
> > | |-- thread_siblings
> > | `-- thread_siblings_list
> > `-- uevent
> >
> > But we do have some cache info available via cpuinfo:
> >
> > $ cat /proc/cpuinfo | head -n16
> > processor : 0
> > cpu : OpenRISC-13
> > revision : 8
> > frequency : 20000000
> > dcache size : 256 bytes
> > dcache block size : 16 bytes
> > dcache ways : 1
> > icache size : 32 bytes
> > icache block size : 16 bytes
> > icache ways : 2
> > immu : 128 entries, 1 ways
> > dmmu : 128 entries, 1 ways
> > bogomips : 40.00
> > features : orbis32 orfpx32
> >
>
> While I can see icache and dcache info in /proc/cpuinfo, dmesg reports
> that the caches have been disabled.
>
> OF: reserved mem: Reserved memory: No reserved-memory node in the DT
> CPU: OpenRISC-13 (revision 8) @20 MHz
> -- dcache disabled
> -- icache disabled
> -- dmmu: 128 entries, 1 way(s)
> -- immu: 128 entries, 1 way(s)
> -- additional features:
> -- power management
> -- PIC
> -- timer
>
> This is because the value of upr is 0x719 and the value of SPR_UPR_DCP is
> 0x2. ANDing both values gives 0 which, according to the arch manual [1], means
> there's no dcache. The same thing applies to icache.
If you are using qemu this is correct.
target/openrisc/cpu.c: cpu->env.upr = UPR_UP | UPR_DMP | UPR_IMP | UPR_PICP | UPR_TTP | UPR_PMP;
There is no cache in qemu as we use the actual host cpu cache to speed up
memory operations. You could modify qemu to help with testing your cacheinfo
changes though.
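For example, one could OR the presence bits into env.upr at reset and set the config registers to describe a plausible geometry. The helper below builds a DCCFGR-style value from a desired geometry; the field layout is from my reading of the arch manual, and the exact QEMU variable and macro names would need checking against target/openrisc:

```c
/* Sketch: encode a DCCFGR-style value (NCW bits 2:0, NCS bits 6:3,
 * CBS bit 7) for faking cache config registers in QEMU while testing
 * cacheinfo.  ways and sets must be powers of two; block 16 or 32. */
#include <stdint.h>

static uint32_t encode_dccfgr(uint32_t ways, uint32_t sets, uint32_t block)
{
	uint32_t ncw = 0, ncs = 0;

	while ((1u << ncw) < ways)
		ncw++;
	while ((1u << ncs) < sets)
		ncs++;
	return ncw | (ncs << 3) | ((block == 32) ? 0x80u : 0);
}
```

E.g. the 256-byte, 1-way, 16-byte-block dcache from the cpuinfo output above has 256 / (16 * 1) = 16 sets, so encode_dccfgr(1, 16, 16); in QEMU one would additionally OR something like UPR_DCP | UPR_ICP into the env.upr line quoted above.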
-Stafford
> Thanks,
> Sahil
>
> [1] https://raw.githubusercontent.com/openrisc/doc/master/openrisc-arch-1.4-rev0.pdf (Section 16.3)
* Re: Queries regarding OpenRISC CPU cache
2025-02-20 6:12 ` Stafford Horne
@ 2025-02-22 11:35 ` Sahil Siddiq
0 siblings, 0 replies; 11+ messages in thread
From: Sahil Siddiq @ 2025-02-22 11:35 UTC (permalink / raw)
To: Stafford Horne; +Cc: Linux OpenRISC
Hi,
On 2/20/25 11:42 AM, Stafford Horne wrote:
> Hi Sahil,
>
> Sorry I missed replying to your last mail.
No worries :)
> On Wed, Feb 19, 2025 at 01:08:18AM +0530, Sahil Siddiq wrote:
>> Hi,
>>
>> On 2/12/25 8:59 PM, Sahil Siddiq wrote:
>>> Hi,
>>>
>>> I have started working on implementing cacheinfo support for OpenRISC, and
>>> I have got a few queries.
>>>
>>> Q1.
>>> Does OpenRISC support multilevel cache hierarchy? I couldn't find anything
>>> in the architecture manual [1] mentioning the possibility of a multilevel
>>> cache hierarchy. I wasn't able to find anything in the verilog implementation
>>> either [2].
>
> Currently all implementations support only a single level of cache. I think if
> we ever support multiple levels, we could use the device tree to define L2
> caches, etc.
>
> See for example:
>
> Documentation/devicetree/bindings/cache/freescale-l2cache.txt
>
> These are the device tree bindings for a freescale l2cache.
Got it, this makes sense.
>>> Q2.
>>> In /sys/devices/system/cpu/cpuX/cache, each indexY directory corresponds
>>> to a CPU cache. According to the OpenRISC 1000 Architecture specification [1],
>>> it is possible to have a unified cache. In such a case, registers controlling
>>> the instruction and data cache represent the same thing (section 9.1 of the
>>> manual).
>>>
>>> Consequently, should there be only one index (i.e. index0), or should there
>>> be 2 index directories (index0 and index1) with identical content? If a unified
>>> cache is used, will the DCP and ICP bits both be set in the UPR register?
>
> If we have a unified cache, I think there is actually no way to tell it's unified
> from the configuration registers. So I think we have to treat it as if we have
> both iCache and dCache.
>
> So we should have index0 and index1.
>
Understood. I guess another configuration register can be added in the future,
if ever there's a requirement to distinguish between a unified and split cache.
>>> Q3.
>>> Cache-related arithmetic performed in arch/openrisc/kernel/setup.c:setup_cpuinfo() [3]
>>> cannot be avoided. But we can expose this info in sysfs instead of cpuinfo as
>>> specified in this mail [4]. So, I was thinking of moving these calculations to
>>> arch/openrisc/kernel/cacheinfo.c in init_cache_level().
>>>
>>> What are your thoughts on this?
>
> Yes, I think so too.
>
>>> Another thing I noticed in the current implementation is that these calculations
>>> are performed for icache and dcache before using the UPR register to check if
>>> they even exist. Should these calculations instead be performed after determining
>>> that they exist?
>>> [...]
>
> Yes, that would be better.
Thank you for the clarification.
>> Building on my previous mail, I noticed another thing:
>>
>> On 1/25/25 1:00 PM, Stafford Horne wrote:
>>> On my x86 machine I see:
>>>
>>> $ tree /sys/devices/system/cpu/cpu0/cache/
>>> /sys/devices/system/cpu/cpu0/cache/
>>> ├── index0
>>> │ ├── coherency_line_size
>>> │ ├── id
>>> │ ├── level
>>> │ ├── number_of_sets
>>> │ ├── physical_line_partition
>>> │ ├── shared_cpu_list
>>>
>>> On or1k I only see (no cache info yet):
>>>
>>> $ tree /sys/devices/system/cpu/cpu0/
>>> /sys/devices/system/cpu/cpu0/
>>> |-- of_node -> ../../../../firmware/devicetree/base/cpus/cpu@0
>>> |-- subsystem -> ../../../../bus/cpu
>>> |-- topology
>>> | |-- core_cpus
>>> | |-- core_cpus_list
>>> | |-- core_id
>>> | |-- core_siblings
>>> | |-- core_siblings_list
>>> | |-- package_cpus
>>> | |-- package_cpus_list
>>> | |-- physical_package_id
>>> | |-- thread_siblings
>>> | `-- thread_siblings_list
>>> `-- uevent
>>>
>>> But we do have some cache info available via cpuinfo:
>>>
>>> $ cat /proc/cpuinfo | head -n16
>>> processor : 0
>>> cpu : OpenRISC-13
>>> revision : 8
>>> frequency : 20000000
>>> dcache size : 256 bytes
>>> dcache block size : 16 bytes
>>> dcache ways : 1
>>> icache size : 32 bytes
>>> icache block size : 16 bytes
>>> icache ways : 2
>>> immu : 128 entries, 1 ways
>>> dmmu : 128 entries, 1 ways
>>> bogomips : 40.00
>>> features : orbis32 orfpx32
>>>
>>
>> While I can see icache and dcache info in /proc/cpuinfo, dmesg reports
>> that the caches have been disabled.
>>
>> OF: reserved mem: Reserved memory: No reserved-memory node in the DT
>> CPU: OpenRISC-13 (revision 8) @20 MHz
>> -- dcache disabled
>> -- icache disabled
>> -- dmmu: 128 entries, 1 way(s)
>> -- immu: 128 entries, 1 way(s)
>> -- additional features:
>> -- power management
>> -- PIC
>> -- timer
>>
>> This is because the value of upr is 0x719 and the value of SPR_UPR_DCP is
>> 0x2. ANDing both values gives 0 which, according to the arch manual [1], means
>> there's no dcache. The same thing applies to icache.
>
> If you are using qemu this is correct.
>
> target/openrisc/cpu.c: cpu->env.upr = UPR_UP | UPR_DMP | UPR_IMP | UPR_PICP | UPR_TTP | UPR_PMP;
Yes, I am using QEMU.
> There is no cache in qemu as we use the actual host cpu cache to speed up
> memory operations. You could modify qemu to help with testing your cacheinfo
> changes though.
I'll modify QEMU in that case. It'll help verify that cache-related calculations
are performed correctly only after detecting they exist.
Thanks,
Sahil
end of thread, other threads:[~2025-02-22 11:35 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
[not found] <2613c1c6-2a2f-4ceb-8adb-f819961ec61f@gmail.com>
2025-01-12 7:28 ` Contributing to OpenRISC Linux Stafford Horne
2025-01-13 6:12 ` Sahil Siddiq
2025-01-13 6:31 ` Stafford Horne
2025-01-22 18:55 ` Sahil Siddiq
2025-01-25 7:30 ` Stafford Horne
2025-01-25 10:53 ` Stafford Horne
2025-01-26 19:54 ` Sahil Siddiq
2025-02-12 15:29 ` Queries regarding OpenRISC CPU cache Sahil Siddiq
2025-02-18 19:38 ` Sahil Siddiq
2025-02-20 6:12 ` Stafford Horne
2025-02-22 11:35 ` Sahil Siddiq