Date: Mon, 27 Apr 2026 23:28:04 +0100
From: Gregory Price
To: Arun George
Cc: lsf-pc@lists.linux-foundation.org, linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, damon@lists.linux.dev, kernel-team@meta.com, gregkh@linuxfoundation.org, rafael@kernel.org, dakr@kernel.org, dave@stgolabs.net, jonathan.cameron@huawei.com, dave.jiang@intel.com, alison.schofield@intel.com, vishal.l.verma@intel.com, ira.weiny@intel.com, dan.j.williams@intel.com, longman@redhat.com, akpm@linux-foundation.org, david@kernel.org, lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org, surenb@google.com, mhocko@suse.com, osalvador@suse.de, ziy@nvidia.com, matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com, byungchul@sk.com, ying.huang@linux.alibaba.com, apopple@nvidia.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, yury.norov@gmail.com, linux@rasmusvillemoes.dk, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com, jackmanb@google.com, sj@kernel.org, baolin.wang@linux.alibaba.com, npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev, muchun.song@linux.dev, xu.xin16@zte.com.cn, chengming.zhou@linux.dev, jannh@google.com, linmiaohe@huawei.com, nao.horiguchi@gmail.com, pfalcato@suse.de, rientjes@google.com, shakeel.butt@linux.dev, riel@surriel.com, harry.yoo@oracle.com, cl@gentwo.org, roman.gushchin@linux.dev, chrisl@kernel.org,
kasong@tencent.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, zhengqi.arch@bytedance.com, terry.bowman@amd.com, gost.dev@samsung.com, arungeorge05@gmail.com, cpgs@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
References: <20260222084842.1824063-1-gourry@gourry.net> <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>
In-Reply-To: <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>

On Mon, Apr 27, 2026 at 06:02:57PM +0530, Arun George wrote:
>
> Appreciate the work as we also chase the same problem statement.
> A few queries please.
>
> I see the current support relies on read-only mappings which might
> limit the performance. Any particular workload you are targeting with
> this (which can tolerate this latency)?
>
> Any deployments you think of where the goal is a capacity expansion
> with a compromise in performance?
>

Primary use cases for us are any workload that benefits from zswap -
which is many, many (many, many [many, many]) workloads.

That said, performance is quite irrelevant if you cannot guarantee
correctness. When a multi-threaded CPU can write many GB/s to a
compressed device, I can't see how completely uncontended writes to
such a device can provide reliability. I suppose you could increase the
latency of a writable cacheline from Xns to N*Xns - but you've only
slowed the bear down.

Meanwhile, running away from said bear means trying to migrate stuff
off the device... presumably to swap - so your migration process has to
have higher throughput than whatever writes are coming in from the CPU.
Meanwhile, the system is clearly already under pressure, and is likely
to continue demoting new data to the compressed tier.
So you end up, at best, in a footrace hoping the bear loses interest,
or at worst in a fight hoping to dodge its claws (generating poison on
some write that fails).

> On the device side, are you targeting beyond compressed RAM like
> devices such as memory with NAND etc.?
>

For private nodes, I have been collecting use cases, but I haven't seen
a NAND proposal. Unless someone is willing to demonstrate such a device
actually working without causing bus-lockup issues, most believe the
error-recovery overhead for NAND is too expensive to service cacheline
fetches.

> The TL;DR talked about mmap/mbind way of user space allocation from
> the private node. But the allocation is controlled by GFP flag
> N_MEMORY_PRIVATE. Does the user space path of allocation set this
> flag along the way?
>

No. Userspace does mbind() and it just works - provided the device's
driver (or service) has opted that node into allowing mempolicy
syscalls. The kernel injects __GFP_PRIVATE for the relevant allocation
in the VMA fault path if that VMA has a nodemask containing a valid
private node.

> And I believe the bear-proof cage might work in the normal scenarios,
> but may not work for all.

If it can't work for all workloads, then it's likely not
general-purpose enough to find core kernel support, and it should
instead use the existing interfaces (DAX and friends).

> We might not be able to rely on the control
> path (backpressure) fully. The control path could go slow, slower and
> even die as well. Should the device respond with something like
> 'bus error' if the host tries to write when it is not capable of
> taking any more writes?
>

You need two controls over compressed RAM for it to be reliable:

- Allocation control (acquiring a new struct page to write to)
- Write control (preventing new writes to compressed pages)

Private nodes provide the allocation control.
A read-only mapping - plus the guarantee that the only memory able to
reach the device is userland memory - is the only way to control CPU
writes from the OS perspective. (Bonus: the page cache can't live here,
because buffered I/O would bypass the protection with direct writes
from the kernel.)

Slowing the bus down just puts you in competition with swap, and a bus
error is basically equivalent to poison being reported at write time.

That's basically the whole story. Loosening the write protection can be
seen as trading optimization for risk - where the risk is hitting
poison in userland-only memory.

In the next version of the RFC, I'll demonstrate cram.c as a new swap
backend that allows read-only mappings to be soft-faulted in, migration
on write, isolation to anon memory, and some optional settings that
give a device or administrator a "writable budget" - allowing some
number of pages to be made writable without migration.

~Gregory