On 22/02/26 03:48AM, Gregory Price wrote:
>Topic type: MM
>
>Presenter: Gregory Price
>
>This series introduces N_MEMORY_PRIVATE, a NUMA node state for memory
>managed by the buddy allocator but excluded from normal allocations.
>
>I present it with an end-to-end Compressed RAM service (mm/cram.c)
>that would otherwise not be possible (or would be considerably more
>difficult, be device-specific, and add to the ZONE_DEVICE boondoggle).
>
>
>TL;DR
>===

Appreciate the work, as we are chasing the same problem statement. A few
queries, please.

I see the current support relies on read-only mappings, which might limit
performance. Is there a particular workload you are targeting that can
tolerate this latency? Do you have deployments in mind where the goal is
capacity expansion at the cost of some performance?

On the device side, are you targeting devices beyond compressed RAM, such
as NAND-backed memory?

The TL;DR mentioned mmap/mbind as the user-space path for allocating from
the private node, but allocation appears to be gated by the
N_MEMORY_PRIVATE state. Does the user-space allocation path set this
along the way?

I believe the bear-proof cage approach might work in normal scenarios,
but perhaps not in all of them. We might not be able to rely fully on the
control path (backpressure): it could slow down, get slower still, or
even die. Should the device respond with something like a 'bus error' if
the host tries to write when the device cannot accept any more writes?
Are there any workloads (VMs?) where this 'bus error' or a similar error
could be an acceptable / recoverable scenario? This assumes that checking
with the device on every operation (whether it is safe to write or not)
could be slow.

---
Arun George