Linux CXL
From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Cc: <dan.j.williams@intel.com>, <linux-cxl@vger.kernel.org>
Subject: Re: A confusion about cxl.mem in CXL drivers
Date: Mon, 9 Oct 2023 16:37:16 +0100	[thread overview]
Message-ID: <20231009163716.00003325@Huawei.com> (raw)
In-Reply-To: <20231008065036.468324-1-wangyuquan1236@phytium.com.cn>

On Sun, 8 Oct 2023 14:50:36 +0800
Yuquan Wang <wangyuquan1236@phytium.com.cn> wrote:

> Sorry for resending this mail, as I am still confused about it.
> 
> On 2023-09-20 20:19,  jonathan.cameron wrote:
> 
> Thanks for your patient explanation.
> 
> >
> > So from driver side of things, the CXL.IO stuff is either in ECAM (for config
> > space) or mapped as PCIe BARs.  The CXL.mem stuff is mapped via the Host Physical
> > Addresses described in a CXL Fixed Memory Window.
> > 
> > So the right type of access is used based on the underlying hardware performing
> > the routing for the appropriate Host Physical Address range.  Same applies
> > on top of QEMU.
> >   
> 
> Therefore, from the kernel's point of view, the kernel does not need to distinguish the physical CXL
> subprotocols (CXL.io, CXL.cache, CXL.mem). The underlying hardware handles this directly, so system
> software doesn't care about it.

The kernel does distinguish between these and may need to do different things to ensure
ordering of reads and writes, etc.  On a simple system they all work by reading and writing
host physical addresses (after mapping them into a virtual address in the kernel).

CXL.io is two things:
1. PCI config space accesses. These will typically be done via ECAM, which is a special
region of the host physical address space where reads and writes resolve into PCI
config space transactions.
2. PCI BAR accesses. Just like any normal PCI device, these end up mapped to specific
host addresses programmed into the EP either by firmware or by the kernel during bus
enumeration.  As with PCI config reads and writes, accessing this memory address range
results in the right protocol messages being sent over the PCI bus.

CXL.cache is only relevant to accelerators - today we only support memory devices.
There is a Type 2 prototype around from Ira, but we haven't upstreamed it.

CXL.mem is accessed by the kernel via a host physical address range.  We program various
decoders via CXL.io space and make use of fixed system address decoders (described by the
ACPI CEDT table) to work out how to route a particular host physical address.  The CXL
root complex (or some protocol translation agent near it) will route to the device in
question.

So the kernel basically needs to know which region to send reads and writes to.  Note that
not all systems have simple memory-based access to some of these protocols, but most
systems likely to run CXL probably do.

> 
> According to the instance of qemu virt machine, my understanding is below:
> 1) CXL.IO: finding, setting and enumerating CXL ECAM/BARs (programmed in cxl_acpi, cxl_pci drivers)
>     Underlying hardware performing the routing : PCIe RC

Yes.

>     
> 2) CXL.MEM: host is going to access the memory mapped in CFMWs 
>     Underlying hardware performing the routing : HDM decoders

Plus the host decoders, which are described by the CXL Fixed Memory Windows.

> 
> Many thanks
> Yuquan
> 
> 


Thread overview: 4+ messages
2023-10-08  6:50 A confusion about cxl.mem in CXL drivers Yuquan Wang
2023-10-09 15:37 ` Jonathan Cameron [this message]
     [not found] <2023082215220191352877@phytium.com.cn>
     [not found] ` <2023092018244461102314@phytium.com.cn>
2023-09-20 12:19   ` Jonathan Cameron
2023-09-22  8:49     ` Yuquan Wang
