Linux CXL
From: Gregory Price <gourry@gourry.net>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Junjie Fu <fujunjie1@qq.com>,
	qemu-devel@nongnu.org, linux-cxl@vger.kernel.org,
	viacheslav.dubeyko@bytedance.com, zhitingz@cs.utexas.edu,
	svetly.todorov@memverge.com
Subject: Re: CXL memory pooling emulation inquiry
Date: Wed, 12 Mar 2025 15:33:12 -0400
Message-ID: <Z9HheFFgOdGE9BcW@gourry-fedora-PF4VCD3F>
In-Reply-To: <20250312180543.00002132@huawei.com>

On Wed, Mar 12, 2025 at 06:05:43PM +0000, Jonathan Cameron wrote:
> 
> Longer term I remain a little unconvinced by whether this is the best approach
> because I also want a single management path (so fake CCI etc) and that may
> need to be exposed to one of the hosts for tests purposes.  In the current
> approach commands are issued to each host directly to surface memory.
>

Let's say we implement this:

  -----------         -----------
  |  Host 1 |         | Host 2  |
  |    |    |         |         |
  |    v    |   Add   |         |
  |   CCI   | ------> | Evt Log |
  -----------         -----------
                 ^
          What mechanism
         do you use here?

And how does it not just replicate QMP logic?

I'm not arguing against it; I just see what amounts to more code than
required to test the functionality.  QMP fits the bill, so keep the CCI
interface (for single-host management testing) separate from the MHSLD
interface.

Why not keep the 1-node DCD with an inbound CCI interface for testing,
and leave the QMP interface for developing a reference fabric manager
outside the scope of another host?

TL;DR:  :[ distributed systems are hard to test

> > 
> > 2.If not fully supported yet, are there any available development branches 
> > or patches that implement this functionality?
> > 
> > 3.Are there any guidelines or considerations for configuring and testing CXL memory pooling in QEMU?
> 
> There is some information in that patch series cover letter.
>

The attached series implements an MHSLD, but implementing the pooling
mechanism (i.e., the fabric manager logic) is left to the imagination of
the reader.  You will want to look at Fan Ni's DCD patch set to
understand the QMP Add/Remove logic for DCD capacity.  This patch set
just enables you to manage 2+ QEMU guests sharing a DCD state in shared
memory.

So you'll have to send DCD commands to each individual guest's QEMU
instance via QMP, but the underlying logic manages the shared state via
locks to emulate real MHSLD behavior:
        QMP |---> Host 1 ------v
  [FM] -----|            [Shared State]
        QMP |---> Host 2 ------^
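
To make that concrete, the "FM" in the picture above can be little more
than a script that opens each guest's QMP socket and issues the DCD
add/release commands from Fan Ni's series.  A rough sketch - the socket
paths, device path, and extent values are made up, and the exact command
and argument names should be checked against the QAPI schema in that
tree rather than trusted from my memory:

    import json
    import socket

    # One QMP socket per guest, e.g. each QEMU launched with something
    # like: -qmp unix:/tmp/host1.qmp,server,wait=off  (paths illustrative)
    QMP_SOCKETS = ["/tmp/host1.qmp", "/tmp/host2.qmp"]

    def qmp_command(sock_path, cmd):
        """Open a QMP session, negotiate capabilities, run one command."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            chan = sock.makefile("rw")
            json.loads(chan.readline())                  # server greeting
            chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
            chan.flush()
            json.loads(chan.readline())                  # capabilities ack
            chan.write(json.dumps(cmd) + "\n")
            chan.flush()
            while True:                                  # skip async events
                reply = json.loads(chan.readline())
                if "return" in reply or "error" in reply:
                    return reply

    # Approximate shape of the add-capacity command from the DCD series;
    # the device path, region, and extent are placeholder values.
    add_extent = {
        "execute": "cxl-add-dynamic-capacity",
        "arguments": {
            "path": "/machine/peripheral/cxl-dcd0",
            "host-id": 0,
            "region": 0,
            "extents": [{"offset": 0, "len": 0x10000000}],   # 256MiB
        },
    }

    # Surface the extent to guest 1; the MHSLD emulation arbitrates the
    # shared-memory state so the same blocks can't also be handed out
    # through guest 2's instance.
    print(qmp_command(QMP_SOCKETS[0], add_extent))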

This differs from a real DCD in that a real DCD is a single endpoint for
management, rather than N endpoints (one per VM):

                          |---> Host 1
        [FM] ---> [DCD] --|
                          |---> Host 2

However, this is an implementation detail on the FM side, so I chose to
do it this way to simplify the QEMU MHSLD implementation.  There are far
fewer interactions this way - with the downside that having one of the
hosts manage the shared state isn't possible with the current emulation.

It could probably be done, but I'm not sure what value it would have,
since the FM implementation difference amounts to a small amount of
Python.
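
To illustrate what I mean by "a small amount of Python": the pool
manager's policy code can stay the same and only the transport layer
underneath changes.  Purely illustrative pseudocode - qmp_command()
refers to the sketch above, and the cci handle and its send() method
don't correspond to any real tooling:

    class QmpPerHostTransport:
        """Current emulation: one QMP socket per guest, state in shm."""
        def __init__(self, sockets):
            self.sockets = sockets          # host name -> QMP socket path

        def add_capacity(self, host, cmd):
            return qmp_command(self.sockets[host], cmd)

    class SingleEndpointTransport:
        """Real MHSLD: one management endpoint; the device itself routes
        capacity to the right head."""
        def __init__(self, cci):
            self.cci = cci                  # hypothetical CCI/MCTP handle

        def add_capacity(self, host, cmd):
            return self.cci.send(cmd, host_id=host)   # made-up API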

It's been a while since I played with this patch set, and unfortunately
I no longer have a reference pooling manager available to me.  But I'm
happy to provide some guidance where I can.

~Gregory
