Date: Mon, 27 Apr 2026 23:28:04 +0100
From: Gregory Price
To: Arun George
Cc: lsf-pc@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    linux-cxl@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-trace-kernel@vger.kernel.org, damon@lists.linux.dev,
    kernel-team@meta.com, gregkh@linuxfoundation.org, rafael@kernel.org,
    dakr@kernel.org, dave@stgolabs.net, jonathan.cameron@huawei.com,
    dave.jiang@intel.com, alison.schofield@intel.com, vishal.l.verma@intel.com,
    ira.weiny@intel.com, dan.j.williams@intel.com, longman@redhat.com,
    akpm@linux-foundation.org, david@kernel.org, lorenzo.stoakes@oracle.com,
    Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
    surenb@google.com, mhocko@suse.com, osalvador@suse.de, ziy@nvidia.com,
    matthew.brost@intel.com, joshua.hahnjy@gmail.com, rakie.kim@sk.com,
    byungchul@sk.com, ying.huang@linux.alibaba.com, apopple@nvidia.com,
    axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
    yury.norov@gmail.com, linux@rasmusvillemoes.dk, mhiramat@kernel.org,
    mathieu.desnoyers@efficios.com, tj@kernel.org, hannes@cmpxchg.org,
    mkoutny@suse.com, jackmanb@google.com, sj@kernel.org,
    baolin.wang@linux.alibaba.com, npache@redhat.com, ryan.roberts@arm.com,
    dev.jain@arm.com, baohua@kernel.org, lance.yang@linux.dev,
    muchun.song@linux.dev, xu.xin16@zte.com.cn, chengming.zhou@linux.dev,
    jannh@google.com, linmiaohe@huawei.com, nao.horiguchi@gmail.com,
    pfalcato@suse.de, rientjes@google.com, shakeel.butt@linux.dev,
    riel@surriel.com, harry.yoo@oracle.com, cl@gentwo.org,
    roman.gushchin@linux.dev, chrisl@kernel.org, kasong@tencent.com,
    shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
    zhengqi.arch@bytedance.com, terry.bowman@amd.com, gost.dev@samsung.com,
    arungeorge05@gmail.com, cpgs@samsung.com
Subject: Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
References: <20260222084842.1824063-1-gourry@gourry.net>
    <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>
X-Mailing-List: linux-trace-kernel@vger.kernel.org
In-Reply-To: <1983025922.01777297382206.JavaMail.epsvc@epcpadp2new>

On Mon, Apr 27, 2026 at 06:02:57PM +0530, Arun George wrote:
> 
> Appreciate the work as we also chase the same problem statement.
> A few queries please.
> 
> I see the current support relies on read-only mappings which might
> limit the performance. Any particular workload you are targeting with
> this (which can tolerate this latency)?
> 
> Any deployments you think of where the goal is a capacity expansion
> with a compromise in performance?
> 

Primary use cases for us are any workloads that benefit from zswap -
which is many, many workloads.

That said, performance is irrelevant if you cannot guarantee
correctness. When a multi-threaded CPU can push many GB/s of writes at
a compressed device, I can't see how completely uncontended writes to
such a device can ever be reliable.

I suppose you could increase the latency of a writable cacheline from
Xns to N*Xns - but you've only slowed the bear down. Meanwhile, running
away from said bear means trying to migrate data off the device...
presumably to swap - so your migration process has to sustain higher
throughput than whatever writes are coming in from the CPU.
Meanwhile, the system is clearly already pressured, and is likely to
continue demoting new data to the compressed tier. So you end up, at
best, in a footrace hoping the bear loses interest - or, at worst, in a
fight hoping to dodge its claws (generating poison on some write that
fails).

> On the device side, are you targeting beyond compressed RAM like
> devices such as memory with NAND etc.?
> 

For private nodes, I have been collecting use cases, but I haven't seen
a NAND proposal. Unless someone is willing to demonstrate such a device
actually working without causing bus-lockup issues, most believe the
error-recovery overhead of NAND is too expensive to service cacheline
fetches.

> The TL;DR talked about mmap/mbind way of user space allocation from
> the private node. But the allocation is controlled by GFP flag
> N_MEMORY_PRIVATE. Does the user space path of allocation set this
> flag along the way?
> 

No. Userspace calls mbind() and it just works - provided the device's
driver (or a service) has opted that node into allowing mempolicy
syscalls. The kernel injects __GFP_PRIVATE in the VMA fault path if
that VMA's mempolicy nodemask contains a valid private node.

> And I believe the bear-proof cage might work in the normal scenarios,
> but may not work for all.

If it can't work for all workloads, then it's likely not general-purpose
enough to find core kernel support, and should instead use the existing
interfaces (DAX and friends).

> We might not be able to rely on the control
> path (backpressure) fully. The control path could go slow, slower and
> even die as well. Should the device respond with something like
> 'bus error' if the host tries to write when it is not capable of
> taking any more writes?
> 

You need two controls over compressed RAM for it to be reliable:
- Allocation control (acquiring new struct pages to write to)
- Write control (preventing new writes to already-compressed pages)

Private nodes provide the allocation control.
A read-only mapping - plus the guarantee that the only memory that can
reach the device is userland memory - is the only way to control CPU
writes from the OS perspective. (Bonus: page cache can't live here,
because buffered I/O would bypass the protection with direct writes
from the kernel.)

Slowing the bus down just puts you in competition with swap, and a bus
error is basically equivalent to poison being reported at write time.

That's basically the whole story. Loosening the write protection can be
seen as trading optimization for risk - where the risk is hitting
poison in userland-only memory.

In the next version of the RFC I'll demonstrate cram.c as a new swap
backend that allows read-only mappings to be soft-faulted in, migration
on write, isolation to anon memory only, and some optional settings
that give a device or administrator a "writable budget" - a number of
pages that may be made writable without migration.

~Gregory