Date: Mon, 27 Apr 2026 23:28:04 +0100
From: Gregory Price
To: Arun George
Cc: lsf-pc@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
 linux-cxl@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, damon@lists.linux.dev,
 kernel-team@meta.com, gregkh@linuxfoundation.org, rafael@kernel.org,
 dakr@kernel.org, dave@stgolabs.net, jonathan.cameron@huawei.com,
 dave.jiang@intel.com, alison.schofield@intel.com, vishal.l.verma@intel.com,
 ira.weiny@intel.com, dan.j.williams@intel.com, longman@redhat.com,
 akpm@linux-foundation.org, david@kernel.org, lorenzo.stoakes@oracle.com,
 Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
 surenb@google.com, mhocko@suse.com, osalvador@suse.de, ziy@nvidia.com,
 matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com,
 byungchul@sk.com, ying.huang@linux.alibaba.com, apopple@nvidia.com,
 axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
 yury.norov@gmail.com, linux@rasmusvillemoes.dk, mhiramat@kernel.org,
 mathieu.desnoyers@efficios.com, tj@kernel.org, hannes@cmpxchg.org,
 mkoutny@suse.com, jackmanb@google.com, sj@kernel.org,
 baolin.wang@linux.alibaba.com, npache@redhat.com, ryan.roberts@arm.com,
 dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
 muchun.song@linux.dev, xu.xin16@zte.com.cn, chengming.zhou@linux.dev,
 jannh@google.com, linmiaohe@huawei.com, nao.horiguchi@gmail.com,
 pfalcato@suse.de, rientjes@google.com, shakeel.butt@linux.dev,
 riel@surriel.com, harry.yoo@oracle.com, cl@gentwo.org,
 roman.gushchin@linux.dev, chrisl@kernel.org, kasong@tencent.com,
 shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
 zhengqi.arch@bytedance.com, terry.bowman@amd.com, gost.dev@samsung.com,
 arungeorge05@gmail.com, cpgs@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
Message-ID: 
References: <20260222084842.1824063-1-gourry@gourry.net>
 <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>
In-Reply-To: <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>

On Mon, Apr 27, 2026 at 06:02:57PM +0530, Arun George wrote:
> 
> Appreciate the work as we also chase the same problem statement.
> A few queries please.
> 
> I see the current support relies on read-only mappings which might
> limit the performance. Any particular workload you are targeting with
> this (which can tolerate this latency)?
> 
> Any deployments you think of where the goal is a capacity expansion
> with a compromise in performance?
> 

Primary use cases for us are any workload that benefits from zswap -
which is many, many (many, many [many, many]) workloads.

That said, performance is quite irrelevant if you cannot guarantee
correctness. When a multi-threaded CPU can write many GB/s to a
compressed device, I can't see how completely uncontended writes to
such a device can be made reliable. I suppose you could increase the
latency of a writable cacheline from Xns to NXns - but you've only
slowed the bear down.

Meanwhile, running away from said bear involves trying to migrate data
off the device... presumably to swap - so your migration process has
to have higher throughput than whatever writes are coming in from the
CPU.
Meanwhile - the system is clearly already pressured, and is likely to
continue demoting new data to the compressed tier. So you end up, at
best, in a footrace hoping the bear loses interest, or at worst in a
fight hoping to dodge its claws (generating poison on some write that
fails).

> On the device side, are you targeting beyond compressed RAM like
> devices such as memory with NAND etc.?
> 

For private nodes - I have been collecting use cases, but I haven't
seen a NAND proposal. Unless someone is willing to demonstrate such a
device actually working without causing bus-lockup issues, most
believe the error-recovery overhead for NAND is too expensive to
service cacheline fetches.

> The TL;DR talked about mmap/mbind way of user space allocation from
> the private node. But the allocation is controlled by GFP flag
> N_MEMORY_PRIVATE. Does the user space path of allocation set this
> flag along the way?
> 

No. Userspace does mbind() and it just works - provided the device's
driver (or service) has opted that node into allowing mempolicy
syscalls. The kernel injects __GFP_PRIVATE for the relevant VMA in the
fault path if that VMA has a nodemask containing a valid private node.

> And I believe the bear-proof cage might work in the normal scenarios,
> but may not work for all.

If it can't work for all workloads, then it's likely not
general-purpose enough to find core kernel support, and should use the
existing interfaces (DAX and friends) instead.

> We might not be able to rely on the control
> path (backpressure) fully. The control path could go slow, slower and
> even die as well. Should the device respond with something like
> 'bus error' if the host tries to write when it is not capable of
> taking any more writes?
> 

You need two controls over compressed RAM for it to be reliable:

- Allocation control (acquiring a new struct page to write to)
- Write control (preventing new writes to compressed pages)

Private nodes provide the allocation control.
A read-only mapping - plus the guarantee that the only memory that can
reach the device is userland memory - is the only way to control CPU
writes from the OS perspective. (Bonus: the page cache can't live
here, because buffered I/O would bypass the protection with direct
writes from the kernel.)

Slowing the bus down just puts you in competition with swap, and a bus
error is basically equivalent to poison being reported at write time.

That's basically the whole story. Loosening the write protection can
be seen as trading optimization for risk - where the risk is hitting
poison in userland-only memory.

In the next version of the RFC I'll demonstrate cram.c as a new swap
backend that allows read-only mappings to be soft-faulted in,
migration on write, isolation to anonymous memory, and some optional
settings that give a device or administrator a "writable budget" -
some number of pages that can be made writable without migration.

~Gregory