From: Gregory Price <gregory.price@memverge.com>
To: Fan Ni <fan.ni@samsung.com>
Cc: "Verma, Vishal L" <vishal.l.verma@intel.com>,
"Williams, Dan J" <dan.j.williams@intel.com>,
"Jonathan.Cameron@huawei.com" <Jonathan.Cameron@huawei.com>,
"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
Adam Manzanares <a.manzanares@samsung.com>,
"dave@stgolabs.net" <dave@stgolabs.net>
Subject: Re: [GIT preview] for-6.3/cxl-ram-region
Date: Wed, 1 Feb 2023 20:06:59 -0500 [thread overview]
Message-ID: <Y9sMs0FGulQSIe9t@memverge.com> (raw)
In-Reply-To: <Y9rWskfaz9dLoW+l@memverge.com>
On Wed, Feb 01, 2023 at 04:16:34PM -0500, Gregory Price wrote:
> Looks like I can bypass this with CONFIG_CXL_REGION_INVALIDATION_TEST,
> but just wanted to report back in case this is not intended.
>
> On x86, this invalidate_memregion() call maps to not having the
> hypervisor bit set:
>
> bool cpu_cache_has_invalidate_memregion(void)
> {
> 	return !cpu_feature_enabled(X86_FEATURE_HYPERVISOR);
> }
> EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, DEVMEM);
>
>
>
> I presume that enabling the invalidate_test bit in my config will make
> this work, but if anyone can confirm that this is expected behavior
> without it, that would be great.
>
> Thanks!
> ~Gregory
For the sake of completeness and lurking readers, here is my complete
setup. I was able to successfully test cxl-cli onlining a region, and
to confirm that the memory is accessible to the guest via /dev/mem.
I was also able to verify the routing through the decoders via gdb.
Looks like the last step is to set up a dax device, then wire up the
memory blocks and attach them to a NUMA node :].
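For anyone following along, I'd expect that last step to look roughly
like the existing pmem kmem flow once the dax pieces land. A hedged
sketch only; the region/device names and the exact daxctl invocations
here are my assumptions, untested on this branch:

```shell
# Assumed flow: create a dax device on the region, then hand its
# memory to the kernel as system RAM via the kmem driver.
daxctl create-device -r region0
daxctl reconfigure-device --mode=system-ram dax0.0
# The new blocks should then surface under a CPU-less NUMA node:
numactl -H
```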
Linux kernel configurations required:
CONFIG_CXL_REGION_INVALIDATION_TEST=y
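If you drive your config from the tree, something like the following
should flip it on (a sketch; assumes an existing .config at the top of
the kernel source tree):

```shell
# Run from the top of the kernel source tree.
./scripts/config --enable CXL_REGION_INVALIDATION_TEST
make olddefconfig   # resolve any newly exposed dependencies
```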
linux branch:
https://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl.git/log/?h=for-6.3/cxl-ram-region
merged into
https://github.com/l1k/linux/commits/doe
plus this additional commit
https://lore.kernel.org/linux-cxl/20221215170909.2650271-1-fan.ni@samsung.com/
ndctl branch
https://github.com/pmem/ndctl/commits/vv/volatile-regions
devmem2
https://github.com/hackndev/tools/blob/master/devmem2.c
You'll need to change the strtoul call to strtoull to write to the
correct addresses; as written, address parsing is 32-bit bound, and the
CXL physical addresses are always above 4GB.
qemu config:
sudo /opt/qemu-cxl/bin/qemu-system-x86_64 \
-drive file=/var/lib/libvirt/images/cxl.qcow2,format=qcow2,index=0,media=disk,id=hd \
-m 2G,slots=4,maxmem=4G \
-smp 4 \
-machine type=q35,accel=kvm,cxl=on \
-enable-kvm \
-nographic \
-device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
-device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
-object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
-device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
-M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
[root@fedora cxl]# ./cxl create-region -m -t ram -d decoder0.0 -w 1 -g 4096 mem0
[ 128.790228] cxl_region region0: Bypassing cpu_cache_invalidate_memregion() for testing!
{
"region":"region0",
"resource":"0x290000000",
"size":"1024.00 MiB (1073.74 MB)",
"type":"ram",
"interleave_ways":1,
"interleave_granularity":4096,
"decode_state":"commit",
"mappings":[
{
"position":0,
"memdev":"mem0",
"decoder":"decoder2.0"
}
]
}
cxl region: cmd_create_region: created 1 region
[root@fedora cxl]# ./cxl list
[
{
"memdevs":[
{
"memdev":"mem0",
"ram_size":1073741824,
"serial":0,
"host":"0000:35:00.0"
}
]
},
{
"regions":[
{
"region":"region0",
"resource":11005853696,
"size":1073741824,
"type":"ram",
"interleave_ways":1,
"interleave_granularity":4096,
"decode_state":"commit"
}
]
}
]
[root@fedora ~]# ./devmem2 0x290000000 w 0x11111111
/dev/mem opened.
Memory mapped at address 0x7fa9997db000.
Value at address 0x290000000 (0x7fa9997db000): 0xAAAAAAAA
Written 0x11111111; readback 0x11111111
on host:
[gourry@fedora linux]$ xxd /tmp/mem0 | head
00000000: 1111 1111 0000 0000 0000 0000 0000 0000 ................
Thread overview: 34+ messages
2023-01-26 6:25 [GIT preview] for-6.3/cxl-ram-region Dan Williams
2023-01-26 6:29 ` Dan Williams
2023-01-26 18:50 ` Jonathan Cameron
2023-01-26 19:34 ` Jonathan Cameron
2023-01-30 14:16 ` Gregory Price
2023-01-30 20:10 ` Dan Williams
2023-01-30 20:58 ` Gregory Price
2023-01-30 23:18 ` Dan Williams
2023-01-30 22:00 ` Gregory Price
2023-01-31 2:00 ` Gregory Price
2023-01-31 16:56 ` Dan Williams
2023-01-31 17:59 ` Verma, Vishal L
2023-01-31 19:03 ` Gregory Price
2023-01-31 19:46 ` Verma, Vishal L
2023-01-31 20:24 ` Verma, Vishal L
2023-01-31 23:03 ` Gregory Price
2023-01-31 23:17 ` Gregory Price
2023-01-31 23:50 ` Fan Ni
2023-02-01 5:29 ` Gregory Price
2023-02-01 21:16 ` Gregory Price
2023-02-02 1:06 ` Gregory Price [this message]
2023-02-02 16:03 ` Jonathan Cameron
2023-02-01 22:05 ` Gregory Price
2023-02-02 18:13 ` Jonathan Cameron
2023-02-02 0:43 ` Gregory Price
2023-02-02 18:18 ` Dan Williams
2023-02-02 0:44 ` Gregory Price
2023-02-07 16:31 ` Jonathan Cameron
2023-01-30 14:23 ` Gregory Price
2023-01-31 14:56 ` Jonathan Cameron
2023-01-31 17:34 ` Gregory Price
2023-01-26 22:05 ` Gregory Price
2023-01-26 22:20 ` Dan Williams
2023-02-04 2:36 ` Dan Williams