Linux CXL
* Re: A confusion about cxl.mem in CXL drivers
       [not found] ` <2023092018244461102314@phytium.com.cn>
@ 2023-09-20 12:19   ` Jonathan Cameron
  2023-09-22  8:49     ` Yuquan Wang
  0 siblings, 1 reply; 4+ messages in thread
From: Jonathan Cameron @ 2023-09-20 12:19 UTC (permalink / raw)
  To: Yuquan Wang; +Cc: linux-cxl

On Wed, 20 Sep 2023 18:24:46 +0800
Yuquan Wang <wangyuquan1236@phytium.com.cn> wrote:

> Hi, Jonathan
> 
> A silly question came up while I was analyzing the CXL drivers:
> since the host uses load/store instructions both to access CXL device
> memory and to access its MMIO registers, how does system software
> distinguish between the cxl.io and cxl.mem protocols?
> 
> Many thanks
> Yuquan

Hi Yuquan

I'm afraid I don't really understand the question.  So I'm guessing a bit whilst
trying to answer.

Is it about the kernel side of things, or more on what we are doing in QEMU
where we use iomem regions for both CXL.io and CXL.mem? For CXL.mem that
is done in QEMU to give us the ability to do fine grained address decoding
(below page level) but it doesn't in practice matter to the OS on top. It's
just an implementation detail in QEMU. There are knock-on effects if you run with
KVM though, as instructions can end up being read from that memory
(so that's not advised!).
However for both CXL.io and CXL.mem it is a host physical address range
and how the OS deals with it is dependent on how it is mapped.

So from driver side of things, the CXL.IO stuff is either in ECAM (for config
space) or mapped as PCIe BARs.  The CXL.mem stuff is mapped via the Host Physical
Addresses described in a CXL Fixed Memory Window.

So the right type of access is used based on the underlying hardware performing
the routing for the appropriate Host Physical Address range.  Same applies
on top of QEMU.

Jonathan

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: A confusion about cxl.mem in CXL drivers
  2023-09-20 12:19   ` Jonathan Cameron
@ 2023-09-22  8:49     ` Yuquan Wang
  0 siblings, 0 replies; 4+ messages in thread
From: Yuquan Wang @ 2023-09-22  8:49 UTC (permalink / raw)
  To: linux-cxl

On 2023-09-20 20:19,  jonathan.cameron wrote:

Thanks for your patient explanation.

>
> So from driver side of things, the CXL.IO stuff is either in ECAM (for config
> space) or mapped as PCIe BARs.  The CXL.mem stuff is mapped via the Host Physical
> Addresses described in a CXL Fixed Memory Window.
> 
> So the right type of access is used based on the underlying hardware performing
> the routing for the appropriate Host Physical Address range.  Same applies
> on top of QEMU.
> 

Therefore, from the kernel's point of view, the kernel does not need to distinguish
between the physical CXL sub-protocols (cxl.io, cxl.cache, cxl.mem). In fact, the
underlying hardware handles this directly, so system software does not need to care
about it.

Taking the QEMU virt machine as an example, my understanding is as follows:
1) CXL.io: discovering, configuring, and enumerating CXL ECAM/BARs (programmed in the cxl_acpi and cxl_pci drivers)
    Underlying hardware performing the routing: PCIe RC

2) CXL.mem: the host accesses the memory mapped in CFMWs
    Underlying hardware performing the routing: HDM decoders

Many thanks
Yuquan


* Re: A confusion about cxl.mem in CXL drivers
@ 2023-10-08  6:50 Yuquan Wang
  2023-10-09 15:37 ` Jonathan Cameron
  0 siblings, 1 reply; 4+ messages in thread
From: Yuquan Wang @ 2023-10-08  6:50 UTC (permalink / raw)
  To: Jonathan.Cameron, dan.j.williams; +Cc: linux-cxl

Sorry for resending this mail, as I am still confused about it.

On 2023-09-20 20:19,  jonathan.cameron wrote:

Thanks for your patient explanation.

>
> So from driver side of things, the CXL.IO stuff is either in ECAM (for config
> space) or mapped as PCIe BARs.  The CXL.mem stuff is mapped via the Host Physical
> Addresses described in a CXL Fixed Memory Window.
> 
> So the right type of access is used based on the underlying hardware performing
> the routing for the appropriate Host Physical Address range.  Same applies
> on top of QEMU.
> 

Therefore, from the kernel's point of view, the kernel does not need to distinguish
between the physical CXL sub-protocols (cxl.io, cxl.cache, cxl.mem). In fact, the
underlying hardware handles this directly, so system software does not need to care
about it.

Taking the QEMU virt machine as an example, my understanding is as follows:
1) CXL.io: discovering, configuring, and enumerating CXL ECAM/BARs (programmed in the cxl_acpi and cxl_pci drivers)
    Underlying hardware performing the routing: PCIe RC

2) CXL.mem: the host accesses the memory mapped in CFMWs
    Underlying hardware performing the routing: HDM decoders

Many thanks
Yuquan



* Re: A confusion about cxl.mem in CXL drivers
  2023-10-08  6:50 A confusion about cxl.mem in CXL drivers Yuquan Wang
@ 2023-10-09 15:37 ` Jonathan Cameron
  0 siblings, 0 replies; 4+ messages in thread
From: Jonathan Cameron @ 2023-10-09 15:37 UTC (permalink / raw)
  To: Yuquan Wang; +Cc: dan.j.williams, linux-cxl

On Sun, 8 Oct 2023 14:50:36 +0800
Yuquan Wang <wangyuquan1236@phytium.com.cn> wrote:

> Sorry for resending this mail as I still confused about it.
> 
> On 2023-09-20 20:19,  jonathan.cameron wrote:
> 
> Thanks for your patient explanation.
> 
> >
> > So from driver side of things, the CXL.IO stuff is either in ECAM (for config
> > space) or mapped as PCIe BARs.  The CXL.mem stuff is mapped via the Host Physical
> > Addresses described in a CXL Fixed Memory Window.
> > 
> > So the right type of access is used based on the underlying hardware performing
> > the routing for the appropriate Host Physical Address range.  Same applies
> > on top of QEMU.
> >   
> 
> Therefore, from the view of kernel side, the kernel do not need to distinguish the physical cxl subprotocol
> (cxl.io,cxl.cache,cxl.mem). In fact, the underlying hardware would directly finish this work so system software
> don't care about it.

The kernel does distinguish between these and may need to do different things to ensure
ordering of reads and writes etc.  On a simple system they all work by reading and writing
host physical addresses (after mapping them into a virtual address in the kernel).

CXL.io is two things:
1. PCI config space accesses. These will typically be done via ECAM, which is
a special region of the host physical address space where reads and writes resolve into
PCI config space transactions.
2. PCI BAR accesses. Just like any normal PCI device, these end up mapped to specific
host addresses programmed into the EP, either by firmware or by the kernel during bus
enumeration. As with PCI config reads and writes, accessing this memory address range
results in the right protocol messages being sent over the PCI bus.

CXL.cache is only relevant to accelerators - today we only support memory devices.
There is a Type 2 prototype around from Ira, but we haven't upstreamed it.

CXL.mem is accessed by the kernel via a host physical address range.  We program various
decoders via CXL.io space and make use of fixed system address decoders (described by the
ACPI CEDT table) to work out how to route a particular host physical address.  The CXL
root complex (or some protocol translation agent near it) will route to the device in
question.

So the kernel basically needs to know what region to send reads and writes to.  Note that
not all systems have simple memory-based access to some of these protocols, but most
systems likely to run CXL probably do.

> 
> According to the instance of qemu virt machine, my understanding is below:
> 1) CXL.IO: finding, setting and enumerating CXL ECAM/BARs (programmed in cxl_acpi, cxl_pci drivers)
>     Underlying hardware performing the routing : PCIe RC

Yes.

>     
> 2) CXL.MEM: host is going to access the memory mapped in CFMWs 
>     Underlying hardware performing the routing : HDM decoders

Plus the host decoders, which are described by the CXL Fixed Memory Windows.

> 
> Many thanks
> Yuquan
> 
> 



end of thread, other threads:[~2023-10-09 15:37 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-08  6:50 A confusion about cxl.mem in CXL drivers Yuquan Wang
2023-10-09 15:37 ` Jonathan Cameron
     [not found] <2023082215220191352877@phytium.com.cn>
     [not found] ` <2023092018244461102314@phytium.com.cn>
2023-09-20 12:19   ` Jonathan Cameron
2023-09-22  8:49     ` Yuquan Wang
