Linux CXL
From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Ira Weiny <ira.weiny@intel.com>
Cc: Dongsheng Yang <dongsheng.yang@easystack.cn>, <dave@stgolabs.net>,
	<dave.jiang@intel.com>, <alison.schofield@intel.com>,
	<vishal.l.verma@intel.com>, <dan.j.williams@intel.com>,
	<linux-cxl@vger.kernel.org>
Subject: Re: [RFC PATCH 0/4] cxl: introduce CXL Virtualization module
Date: Mon, 8 Jan 2024 12:28:01 +0000	[thread overview]
Message-ID: <20240108122801.00000e47@Huawei.com> (raw)
In-Reply-To: <659597dc1fb49_16f746294a1@iweiny-mobl.notmuch>

On Wed, 3 Jan 2024 09:22:36 -0800
Ira Weiny <ira.weiny@intel.com> wrote:

> Dongsheng Yang wrote:
> > Hi all:
> > 	This patchset introduces the cxlv module, which allows users to
> > create virtual CXL devices. It is based on linux-6.7-rc5; you can
> > get the code from https://github.com/DataTravelGuide/linux
> > 
> > 	As real CXL devices are not widely available yet, we need
> > virtual CXL devices for upper-layer software development and
> > testing. QEMU is good for functional testing, but not for
> > performance testing.  
> 
> Do you have more details on what performance is missing from Qemu and why
> this solution is better than a solution to fix Qemu?
> 
> Long term it seems better to fix Qemu for this type of work.

I plan to look at this sometime soon, but note that the fix will cover special
cases only (no interleave!) in the short term.  Emulating interleave
is always going to be costly - it can probably be made better than it is today
for large granularities (pages), but I'm not sure we will ever care enough
to implement that.
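To make the cost concrete, here is a minimal sketch (illustrative only, not
code from QEMU or this patchset) of the modulo address decode an emulator
has to apply on every access to an interleaved range, assuming power-of-two
interleave ways:

```python
def decode_interleave(hpa_offset: int, ways: int, granularity: int):
    """Map an offset within an interleaved HPA range to
    (target index, DPA offset).

    Consecutive granularity-sized chunks rotate across the
    interleave targets (power-of-two ways assumed).
    """
    chunk = hpa_offset // granularity               # which chunk of the range
    target = chunk % ways                           # which endpoint owns it
    dpa = (chunk // ways) * granularity + hpa_offset % granularity
    return target, dpa
```

With 2 ways at 256-byte granularity, offset 0 lands on target 0, offset 256
on target 1, offset 512 back on target 0 at DPA 256 - an extra divide/modulo
chain on every emulated access.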

For virtualization use cases, if we go with CXL emulation as the path for
DCD then we'll just emulate direct connected devices and patch up the
perf characteristics to cover interleave, switches etc.

With such limitations we can get QEMU to perform well. I'm not keen on
separating QEMU for functional testing from QEMU for workload testing,
but meh, we can at least make it automatic to use a higher-perf root
if the interleave config allows it.

> 
> Are there other advantages to having this additional test infrastructure
> in the kernel?  We already have cxl_test.
> 
> Ira
> 
> > 
> > 	The new CXLV module allows the user to use reserved RAM [1] to
> > create virtual CXL devices. When the cxlv module loads, it creates
> > a directory named "cxl_virt" under /sys/devices/virtual:
> > 
> > 	"/sys/devices/virtual/cxl_virt/"
> > 
> > That is the top-level device for all cxlv devices.
> > At the same time, the cxlv module creates a debugfs directory:
> > 
> > /sys/kernel/debug/cxl/cxlv
> > ├── create
> > └── remove
> > 
> > The create and remove debugfs files are the entry points for
> > creating and removing cxlv devices.
> > 
> > 	Each cxlv device has its own virtual PCI bridge and bus. cxlv
> > creates a new root_port for the new cxlv device, and sets up CXL ports
> > for the dport and the nvdimm-bridge. After that, we add the virtual
> > PCI device, which goes through cxl_pci_probe to set up a new memdev.
> > 
> > 	Then we can see the CXL device with "cxl list" and use it like
> > a real CXL device.
> > 
> >  $ echo "memstart=$((8*1024*1024*1024)),cxltype=3,pmem=1,memsize=$((2*1024*1024*1024))" > /sys/kernel/debug/cxl/cxlv/create
> >  $ cxl list
> > [
> >   {
> >     "memdev":"mem0",
> >     "pmem_size":1879048192,
> >     "serial":0,
> >     "numa_node":0,
> >     "host":"0010:01:00.0"
> >   }
> > ]
> >  $ cxl create-region -m mem0 -d decoder0.0 -t pmem
> > {
> >   "region":"region0",
> >   "resource":"0x210000000",
> >   "size":"1792.00 MiB (1879.05 MB)",
> >   "type":"pmem",
> >   "interleave_ways":1,
> >   "interleave_granularity":256,
> >   "decode_state":"commit",
> >   "mappings":[
> >     {
> >       "position":0,
> >       "memdev":"mem0",
> >       "decoder":"decoder2.0"
> >     }
> >   ]
> > }
> > cxl region: cmd_create_region: created 1 region
> > 
> >  $ ndctl create-namespace -r region0 -m fsdax --map dev -t pmem -b 0
> > {
> >   "dev":"namespace0.0",
> >   "mode":"fsdax",
> >   "map":"dev",
> >   "size":"1762.00 MiB (1847.59 MB)",
> >   "uuid":"686fd289-a252-42cf-a3a5-95a39ed5c9d5",
> >   "sector_size":512,
> >   "align":2097152,
> >   "blockdev":"pmem0"
> > }
> > 
> >  $ mkfs.xfs -f /dev/pmem0 
> > meta-data=/dev/pmem0             isize=512    agcount=4, agsize=112768 blks
> >          =                       sectsz=4096  attr=2, projid32bit=1
> >          =                       crc=1        finobt=1, sparse=1, rmapbt=0
> >          =                       reflink=1    bigtime=0 inobtcount=0
> > data     =                       bsize=4096   blocks=451072, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
> > log      =internal log           bsize=4096   blocks=2560, version=2
> >          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > 
> > Any comments are welcome!
> > 
> > TODO: implement a cxlv command in ndctl for cxlv device management.
> > 
> > [1]: Add an argument to the kernel command line: "memmap=nn[KMG]$ss[KMG]";
> > details in Documentation/driver-api/cxl/memory-devices.rst
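> > 
> > As a sketch of how the reserved-RAM footnote lines up with the create
> > parameters used above (the 2G-at-8G layout matches the demo; the exact
> > reservation is up to you - note that '$' must be escaped as '\$' in
> > GRUB config files):
> > 
```shell
# Kernel command line to reserve 2G of RAM starting at 8G:
#   memmap=2G$8G
# The matching cxlv create parameters are plain byte values:
memstart=$((8 * 1024 * 1024 * 1024))   # 8G in bytes
memsize=$((2 * 1024 * 1024 * 1024))    # 2G in bytes
echo "memstart=$memstart,cxltype=3,pmem=1,memsize=$memsize"
```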
> > 
> > Thanx
> > 
> > Dongsheng Yang (4):
> >   cxl: move some function from acpi module to core module
> >   cxl/port: allow dport host to be driver-less device
> >   cxl/port: introduce cxl_disable_port() function
> >   cxl: introduce CXL Virtualization module
> > 
> >  MAINTAINERS                         |   6 +
> >  drivers/cxl/Kconfig                 |  11 +
> >  drivers/cxl/Makefile                |   1 +
> >  drivers/cxl/acpi.c                  | 143 +-----
> >  drivers/cxl/core/port.c             | 231 ++++++++-
> >  drivers/cxl/cxl.h                   |   6 +
> >  drivers/cxl/cxl_virt/Makefile       |   5 +
> >  drivers/cxl/cxl_virt/cxlv.h         |  87 ++++
> >  drivers/cxl/cxl_virt/cxlv_debugfs.c | 260 ++++++++++
> >  drivers/cxl/cxl_virt/cxlv_device.c  | 311 ++++++++++++
> >  drivers/cxl/cxl_virt/cxlv_main.c    |  67 +++
> >  drivers/cxl/cxl_virt/cxlv_pci.c     | 710 ++++++++++++++++++++++++++++
> >  drivers/cxl/cxl_virt/cxlv_pci.h     | 549 +++++++++++++++++++++
> >  drivers/cxl/cxl_virt/cxlv_port.c    | 149 ++++++
> >  14 files changed, 2388 insertions(+), 148 deletions(-)
> >  create mode 100644 drivers/cxl/cxl_virt/Makefile
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv.h
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_debugfs.c
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_device.c
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_main.c
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_pci.c
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_pci.h
> >  create mode 100644 drivers/cxl/cxl_virt/cxlv_port.c
> > 
> > -- 
> > 2.34.1
> >   
> 
> 
> 


Thread overview: 13+ messages
2023-12-28  6:05 [RFC PATCH 0/4] cxl: introduce CXL Virtualization module Dongsheng Yang
2023-12-28  6:05 ` [RFC PATCH 1/4] cxl: move some function from acpi module to core module Dongsheng Yang
2023-12-28  6:43   ` Dongsheng Yang
2023-12-28  6:05 ` [RFC PATCH 3/4] cxl/port: introduce cxl_disable_port() function Dongsheng Yang
2023-12-28  6:05 ` [RFC PATCH 4/4] cxl: introduce CXL Virtualization module Dongsheng Yang
2024-01-03 17:22 ` [RFC PATCH 0/4] " Ira Weiny
2024-01-08 12:28   ` Jonathan Cameron [this message]
2024-01-10  2:07   ` Dongsheng Yang
2024-01-03 20:48 ` Dan Williams
     [not found]   ` <a32d859f-054f-11ca-e8a3-dff7a5234d0a@easystack.cn>
2024-01-25  3:49     ` Dan Williams
2024-01-25  6:49       ` Dongsheng Yang
2024-01-25  7:46         ` Dan Williams
2024-05-03  5:12 ` Hyeongtak Ji
