From: Fan Ni <nifan.cxl@gmail.com>
To: Gregory Price <gourry@gourry.net>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Junjie Fu <fujunjie1@qq.com>,
qemu-devel@nongnu.org, linux-cxl@vger.kernel.org,
viacheslav.dubeyko@bytedance.com, zhitingz@cs.utexas.edu,
svetly.todorov@memverge.com, a.manzanares@samsung.com,
fan.ni@samsung.com, anisa.su887@gmail.com, dave@stgolabs.net
Subject: Re: CXL memory pooling emulation inqury
Date: Thu, 13 Mar 2025 09:03:37 -0700 [thread overview]
Message-ID: <Z9MB2bl2mUXjgetJ@debian> (raw)
In-Reply-To: <Z9HheFFgOdGE9BcW@gourry-fedora-PF4VCD3F>
On Wed, Mar 12, 2025 at 03:33:12PM -0400, Gregory Price wrote:
> On Wed, Mar 12, 2025 at 06:05:43PM +0000, Jonathan Cameron wrote:
> >
> > Longer term I remain a little unconvinced by whether this is the best approach
> > because I also want a single management path (so fake CCI etc) and that may
> > need to be exposed to one of the hosts for tests purposes. In the current
> > approach commands are issued to each host directly to surface memory.
> >
>
> Let's say we implement this:
>
> ----------- -----------
> | Host 1 | | Host 2 |
> | | | | |
> | v | Add | |
> | CCI | ------> | Evt Log |
> ----------- -----------
> ^
> What mechanism
> do you use here?
>
> And how does it not just replicate QMP logic?
>
> Not arguing against it; I just see what amounts to more code than
> required to test the functionality. QMP fits the bill, so split the CCI
> interface (for single-host management testing) from the MHSLD interface.
We have recently discussed this approach internally. Our idea is to do
something similar to what you have done with the MHSLD emulation: use a
shmem device to share information (a mailbox?) between the two devices.
>
> Why not leave the 1-node DCD with inbound CCI interface for testing and
> leave QMP interface for development of a reference fabric manager
> outside the scope of another host?
For this two-host setup, I can already see benefits: the two hosts can
run different kernels. That is to say, the host serving as the FM only
needs to support, for example, out-of-band communication with the
hardware (MCTP over I2C), and does not need to evolve with whatever we
want to test on the target host (booting a kernel with the features we
care about). That is very important, at least for test purposes: since
MCTP-over-I2C support for x86 is not upstreamed yet, we do not want to
rebase whenever the kernel is updated.
More specifically, let's say we deploy the libcxlmi test framework on
the FM host; we can then exercise whatever features we need to test on
the target host (DCD, etc.). Again, the FM host does not need DCD kernel
support. Compared to the QMP interface, libcxlmi already supports many
commands, with more being added, so it should be much more convenient
than implementing them over QMP.
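As a rough sketch, the flow on the FM host would look something like the following (pseudocode; the function names here are assumptions modeled on libcxlmi's examples, so check the library headers for the real API):

```
/* Pseudocode: FM host driving the target device out-of-band via libcxlmi.
 * Names approximate -- see libcxlmi's docs/examples for the real calls. */
ctx = cxlmi_new_ctx(stderr, log_level);
ep  = cxlmi_open_mctp(ctx, mctp_net_id, mctp_eid);  /* MCTP-over-I2C endpoint */

cxlmi_cmd_identify(ep, ...);       /* basic sanity check of the endpoint */
/* then issue DCD management commands (get DC config, initiate add /
 * release of dynamic capacity) -- none of which require the FM host's
 * kernel to have DCD support */

cxlmi_close(ep);
cxlmi_free_ctx(ctx);
```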
Fan
>
> TL;DR: :[ distributed systems are hard to test
>
> > >
> > > 2.If not fully supported yet, are there any available development branches
> > > or patches that implement this functionality?
> > >
> > > 3.Are there any guidelines or considerations for configuring and testing CXL memory pooling in QEMU?
> >
> > There is some information in that patch series cover letter.
> >
>
> The attached series implements an MHSLD, but implementing the pooling
> mechanism (i.e. fabric manager logic) is left to the imagination of the
> reader. You will want to look at Fan Ni's DCD patch set to understand
> the QMP Add/Remove logic for DCD capacity. This patch set just enables
> you to manage 2+ QEMU Guests sharing a DCD State in shared memory.
>
> So you'll have to send DCD commands to each individual guest's QEMU via
> QMP, but the underlying logic manages the shared state via locks to
> emulate real MHSLD behavior.
> QMP|---> Host 1 --------v
> [FM]-----| [Shared State]
> QMP|---> Host 2 --------^
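In this setup, the FM script's per-guest step amounts to a QMP command like the one from Fan Ni's DCD series; the sketch below is an assumption based on those patches, and the exact command and argument names may differ across revisions:

```json
{ "execute": "cxl-add-dynamic-capacity",
  "arguments": {
    "path": "/machine/peripheral/cxl-dcd0",
    "host-id": 0,
    "selection-policy": "prescriptive",
    "region": 0,
    "tag": "",
    "extents": [ { "offset": 0, "len": 134217728 } ]
  }
}
```

The FM sends the same add/release commands to each guest's QMP socket, while the shared-memory state keeps the two devices' views of the extents consistent.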
>
> This differs from a real DCD in that a real DCD is a single endpoint for
> management, rather than N endpoints (1 per vm).
>
> |---> Host 1
> [FM] ---> [DCD] --|
> |---> Host 2
>
> However, this is an implementation detail on the FM side, so I chose to
> do it this way to simplify the QEMU MHSLD implementation. There are far
> fewer interactions this way - with the downside that having one of the
> hosts manage the shared state isn't possible via the current emulation.
>
> It could probably be done, but I'm not sure what value it has, since the
> FM implementation difference is a matter of a small amount of Python.
>
> It's been a while since I played with this patch set and I do not have a
> reference pooling manager available to me any longer unfortunately. But
> I'm happy to provide some guidance where I can.
>
> ~Gregory
Thread overview: 12+ messages (latest: 2025-03-13 16:04 UTC)
2023-02-08 22:28 CXL 2.0 memory pooling emulation zhiting zhu
2023-02-15 15:18 ` Jonathan Cameron via
2023-02-15 9:10 ` Gregory Price
2023-02-16 18:00 ` Jonathan Cameron via
2023-02-16 20:52 ` Gregory Price
2023-02-17 11:14 ` Jonathan Cameron via
2023-02-17 11:02 ` Gregory Price
2025-03-10 8:02 ` CXL memory pooling emulation inqury Junjie Fu
2025-03-12 18:05 ` Jonathan Cameron via
2025-03-12 19:33 ` Gregory Price
2025-03-13 16:03 ` Fan Ni [this message]
2025-04-08 4:47 ` Fan Ni