Linux CXL
From: Dave Jiang <dave.jiang@intel.com>
To: "Cao, Quanquan/曹 全全" <caoqq@fujitsu.com>,
	linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev
Cc: vishal.l.verma@intel.com
Subject: Re: Question about forcing 'disable-memdev'
Date: Tue, 27 Feb 2024 09:40:28 -0700	[thread overview]
Message-ID: <dd61a8f2-ef80-46cc-8033-b3a4b987b3f4@intel.com> (raw)
In-Reply-To: <3788c116-50aa-ae97-adca-af6559f5c59a@fujitsu.com>



On 2/26/24 10:32 PM, Cao, Quanquan/曹 全全 wrote:
> Hi, Dave
> 
> While testing on top of this patch, I ran into an unexpected error, and I would like to ask whether the behavior here is by design. My test steps are below:
> 
> Link: https://lore.kernel.org/linux-cxl/170138109724.2882696.123294980050048623.stgit@djiang5-mobl3/
> 
> 
> Problem description: after creating a region, force-disabling the memdev ('disable-memdev -f') and then consuming the memory leads to a kernel panic.

If you force the memory disable when the memory cannot be offlined, then this behavior is expected: you are ripping the memory out from underneath the kernel mm. That check was added precisely to prevent users from doing exactly this.
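
For reference, a teardown sequence that avoids the force path might look like the sketch below. It uses cxl-cli commands; the region and memdev names ("region0", "mem0") are assumptions based on the test steps and will differ per system, and this obviously depends on the kernel being able to offline the memory:

```shell
# Tear down in dependency order, so nothing is mapped when the
# memdev is disabled and no --force is needed.
cxl disable-region region0    # offlines the region's memory and unbinds it
cxl destroy-region region0    # frees the region's decoder resources
cxl disable-memdev mem0       # now succeeds without -f
```

If disable-region fails because pages in the (movable) zone cannot be migrated, that is the same condition the -f check is guarding against, and forcing past it is what produces the panic described below.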


> 
> 
> Test environment:
> KERNEL    6.8.0-rc1
> QEMU    8.2.0-rc4
> 
> Test steps:
>       step1: set memory auto_online to movable zones.
>            echo online_movable > /sys/devices/system/memory/auto_online_blocks
>       step2: create region
>            cxl create-region -t ram -d decoder0.0 -m mem0
>       step3: disable memdev
>            cxl disable-memdev mem0 -f
>       step4: consume CXL memory
>            ./consumemem   <------ kernel panic
> 
> numactl node status:
>       step1: numactl -H
> 
>     available: 2 nodes (0-1)
>     node 0 cpus: 0 1
>     node 0 size: 968 MB
>     node 0 free: 664 MB
>     node 1 cpus: 2 3
>     node 1 size: 683 MB
>     node 1 free: 333 MB
>     node distances:
>     node   0   1
>       0:  10  20
>       1:  20  10
> 
>     step2: numactl -H
> 
>     available: 3 nodes (0-2)
>     node 0 cpus: 0 1
>     node 0 size: 968 MB
>     node 0 free: 677 MB
>     node 1 cpus: 2 3
>     node 1 size: 683 MB
>     node 1 free: 333 MB
>     node 2 cpus:
>     node 2 size: 256 MB
>     node 2 free: 256 MB
>     node distances:
>     node   0   1   2
>       0:  10  20  20
>       1:  20  10  20
>       2:  20  20  10
> 
>     step3: numactl -H
> 
>     available: 3 nodes (0-2)
>     node 0 cpus: 0 1
>     node 0 size: 968 MB
>     node 0 free: 686 MB
>     node 1 cpus: 2 3
>     node 1 size: 683 MB
>     node 1 free: 336 MB
>     node 2 cpus:
>     node 2 size: 256 MB
>     node 2 free: 256 MB
>     node distances:
>     node   0   1   2
>       0:  10  20  20
>       1:  20  10  20
>       2:  20  20  10

  reply	other threads:[~2024-02-27 16:40 UTC|newest]

Thread overview: 6+ messages
2024-02-27  5:32 Question about forcing 'disable-memdev' Cao, Quanquan/曹 全全
2024-02-27 16:40 ` Dave Jiang [this message]
2024-02-27 20:24   ` Jane Chu
2024-02-27 20:28     ` Dave Jiang
2024-02-28 20:17       ` Jane Chu
2024-03-07 20:55         ` Dan Williams
