From: Jonathan Cameron via <qemu-devel@nongnu.org>
To: Maverickk 78 <maverickk1778@gmail.com>
Cc: Jonathan Cameron via <qemu-devel@nongnu.org>,
<linux-cxl@vger.kernel.org>
Subject: Re: CXL volatile memory is not listed
Date: Wed, 23 Aug 2023 18:02:34 +0100 [thread overview]
Message-ID: <20230823180234.00005fa2@Huawei.com> (raw)
In-Reply-To: <CALfBBTsMLP8_eTfmFt5mB+ywF1D0WTR7m=PBqUVzhhvcwC+zYA@mail.gmail.com>
On Fri, 18 Aug 2023 10:48:35 +0530
Maverickk 78 <maverickk1778@gmail.com> wrote:
> Hi Jonathan,
>
> The use case of a CXL switch will always need some sort of management
> agent + FM configuring the available connected CXL memory.
>
> In most cases it would be a BMC controller configuring MLDs/MHDs to
> hosts, and in rare scenarios it may be one of the hosts interacting
> with the FM firmware inside the switch that does the trick.
>
> Another use case is static hardcoding between CXL memory & host,
> built into the CXL switch.
>
> There is no scenario where one of the hosts' BIOS can assign select
> CXL memory to itself.
>
>
> Is my understanding correct?
It's possible that a particular set of systems works on the basis
of the FM (BMC) in a memory appliance etc. having already set up the
memory before the hosts using it are booted. This would be done for
legacy systems / unaware operating systems, for instance.
In those cases the BIOS could enumerate and configure everything present
when it starts and provide that info to the OS running on the host via
EFI (and/or e820), SRAT etc.
If doing more dynamic memory pooling, I'd expect the OS to do all the
hard work. Note that a common case in real systems is likely to be Multi Head
Devices for memory pooling, but they also require configuration before the
memory is available to the host, so the points above apply equally.
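For reference, the OS-side manual flow looks roughly like the sketch below: create a volatile region through the ndctl/cxl tooling, then flip the resulting dax device to system-ram so the capacity shows up in lsmem/free. This is a hedged sketch, not a definitive recipe - the device names (decoder0.0, mem0, dax0.0) are examples, and the exact flags depend on your ndctl version; take the real names from `cxl list -M -D` on your system.

```shell
#!/bin/sh
# Sketch of the manual (no-BIOS-assist) flow: bind a volatile CXL
# region and online it as system RAM from the OS side.
# Assumed example names: decoder0.0, mem0, dax0.0 - substitute the
# ones reported by `cxl list -M -D` on your machine.

if command -v cxl >/dev/null 2>&1; then
    # Create a 1-way (non-interleaved) volatile region on the
    # chosen root decoder, backed by memdev mem0.
    cxl create-region -d decoder0.0 -w 1 -m mem0 -t ram
    # The region surfaces as a dax device; reconfigure it to
    # system-ram so the kernel hot-adds it as normal memory.
    daxctl reconfigure-device --mode=system-ram dax0.0
else
    # Tools not present; commands above shown for reference only.
    echo "ndctl/daxctl not installed; see the ndctl project for the cxl and daxctl tools"
fi
```

After the daxctl step the memory should appear as "System RAM (kmem)" in lsmem, which is the behaviour the original poster was expecting.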
Jonathan
>
>
>
> On Fri, 11 Aug 2023 at 19:25, Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Fri, 11 Aug 2023 08:04:26 +0530
> > Maverickk 78 <maverickk1778@gmail.com> wrote:
> >
> > > Jonathan,
> > >
> > > > More generally for the flow that would bring the memory up as system ram
> > > > you would typically need the bios to have done the CXL enumeration or
> > > > a bunch of scripts in the kernel to have done it. In general it can't
> > > > be fully automated, because there are policy decisions to make on things like
> > > > interleaving.
> > >
> > > BIOS CXL enumeration? Is the CEDT not enough, or does the BIOS
> > > further need to create an entry in the e820 table?
> > On Intel platforms, 'maybe' :) I know how it works on those that just
> > use the nice standard EFI tables - less familiar with the e820 stuff :)
> >
> > CEDT says where to find the various bits of system related CXL stuff.
> > Nothing in there on the configuration that should be used such as interleaving
> > as that depends on what the administrator wants. Or on what the BIOS has
> > decided the users should have.
> >
> > >
> > > >
> > > > I'm not aware of any open source BIOSs that do it yet. So you have
> > > > to rely on the same kernel paths as for persistent memory - manual configuration
> > > > etc in the kernel.
> > > >
> > > Manual works with "cxl create region" :)
> > Great.
> >
> > Jonathan
> >
> > >
> > > On Thu, 10 Aug 2023 at 16:05, Jonathan Cameron
> > > <Jonathan.Cameron@huawei.com> wrote:
> > > >
> > > > On Wed, 9 Aug 2023 04:21:47 +0530
> > > > Maverickk 78 <maverickk1778@gmail.com> wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > I am running qemu-system-x86_64
> > > > >
> > > > > qemu-system-x86_64 --version
> > > > > QEMU emulator version 8.0.92 (v8.1.0-rc2-80-g0450cf0897)
> > > > >
> > > > +Cc linux-cxl as the answer is more to do with Linux than QEMU.
> > > >
> > > > > qemu-system-x86_64 \
> > > > > -m 2G,slots=4,maxmem=4G \
> > > > > -smp 4 \
> > > > > -machine type=q35,accel=kvm,cxl=on \
> > > > > -enable-kvm \
> > > > > -nographic \
> > > > > -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> > > > > -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> > > > > -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=1G,share=true \
> > > > > -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0 \
> > > > > -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=1G
> > > >
> > > > There are some problems upstream at the moment (probably not cxl related but
> > > > I'm digging). So today I can't boot an x86 machine. (goody)
> > > >
> > > >
> > > > More generally for the flow that would bring the memory up as system ram
> > > > you would typically need the bios to have done the CXL enumeration or
> > > > a bunch of scripts in the kernel to have done it. In general it can't
> > > > be fully automated, because there are policy decisions to make on things like
> > > > interleaving.
> > > >
> > > > I'm not aware of any open source BIOSs that do it yet. So you have
> > > > to rely on the same kernel paths as for persistent memory - manual configuration
> > > > etc in the kernel.
> > > >
> > > > There is support in ndctl for those enabling flows, so I'd look there
> > > > for more information
> > > >
> > > > Jonathan
> > > >
> > > >
> > > > >
> > > > >
> > > > > I was expecting the CXL memory to be listed in "System Ram", the lsmem
> > > > > shows only 2G memory which is System RAM, it's not listing the CXL
> > > > > memory.
> > > > >
> > > > > Do I need to pass any particular parameter in the kernel command line?
> > > > >
> > > > > Is there any documentation available? I followed the inputs provided in
> > > > >
> > > > > https://lore.kernel.org/linux-mm/Y+CSOeHVLKudN0A6@kroah.com/T/
> > > > >
> > > > > Is there any documentation/blog listed?
> > > >
> >
Thread overview: 15+ messages [~2023-08-23 17:02 UTC | newest first]
2023-08-08 22:51 CXL volatile memory is not listed Maverickk 78
2023-08-10 10:35 ` Jonathan Cameron via
2023-08-11 2:34 ` Maverickk 78
2023-08-11 13:54 ` Jonathan Cameron via
2023-08-18 5:18 ` Maverickk 78
2023-08-18 11:30 ` Shreyas Shah via
2023-08-23 16:59 ` Jonathan Cameron via
2023-08-23 17:02 ` Jonathan Cameron via [this message]
2023-08-10 10:59 ` Philippe Mathieu-Daudé
2023-08-10 11:18 ` David Hildenbrand
2023-08-11 2:42 ` Maverickk 78
2023-08-10 16:32 ` Fan Ni
2023-08-11 2:22 ` Maverickk 78
2023-08-11 16:48 ` Fan Ni
2023-08-18 5:05 ` Maverickk 78