Linux-mm Archive on lore.kernel.org
 messages from 2026-05-13 15:00:44 to 2026-05-13 18:27:12 UTC

[PATCH v3 00/12] mm, swap: swap table phase IV: unify allocation and reduce static metadata
 2026-05-13 18:27 UTC  (7+ messages)
` [PATCH v3 04/12] mm, swap: add support for stable large allocation in swap cache directly
` [PATCH v3 12/12] mm, swap: merge zeromap into swap table

[PATCH v2 00/69] mm: Generalize HVO for HugeTLB and device DAX
 2026-05-13 18:26 UTC  (3+ messages)

[RFC PATCH v2 00/28] mm/damon: introduce data attributes monitoring
 2026-05-13 18:07 UTC  (3+ messages)
` [RFC PATCH v2 18/28] mm/damon: trace probe_hits

[PATCH v1] landlock: Account all audit data allocations to user space
 2026-05-13 18:03 UTC 

[PATCH v2 00/22] mm: Add __GFP_UNMAPPED
 2026-05-13 17:59 UTC  (10+ messages)
` [PATCH v2 19/22] mm/page_alloc: implement __GFP_UNMAPPED allocations
` [PATCH v2 20/22] mm/page_alloc: implement __GFP_UNMAPPED|__GFP_ZERO allocations

[akpm-mm:mm-new 301/315] mm/oom_kill.c:922:12: warning: 'kill_all_shared_mm' defined but not used
 2026-05-13 17:27 UTC  (2+ messages)

[PATCH 0/4] mm: misc cleanups from __GFP_UNMAPPED series
 2026-05-13 17:19 UTC  (5+ messages)
` [PATCH 1/4] mm: introduce for_each_free_list()
` [PATCH 4/4] mm/page_alloc: remove ifdefs from pindex helpers

[PATCH] mm/mmu_notifier: Fix a begin vs. start typo in the invalidate range comment
 2026-05-13 17:12 UTC  (2+ messages)

[PATCH RFC 0/5] memcg: dma-buf per-cgroup accounting via pid_fd
 2026-05-13 16:39 UTC  (8+ messages)
` [PATCH RFC 2/5] dma-heap: charge dma-buf memory via explicit memcg

[PATCH v7 00/31] mm/virtio: skip redundant zeroing of host-zeroed pages
 2026-05-13 16:34 UTC  (3+ messages)

[PATCH] drivers/base/memory: set mem->altmap after successful device registration
 2026-05-13 16:07 UTC  (5+ messages)

[PATCH v4 0/9] mm: thp: always enable mTHP support
 2026-05-13 15:58 UTC  (10+ messages)
` [PATCH v4 2/9] mm: introduce pgtable_has_pmd_leaves()
` [PATCH v4 3/9] drivers: dax: use pgtable_has_pmd_leaves()
` [PATCH v4 8/9] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves()
` [PATCH v4 9/9] mm: thp: always enable mTHP support

[PATCH v6 0/3] mm/swap: use swap_ops to register swap device's methods
 2026-05-13 15:44 UTC  (9+ messages)
` [PATCH v6 2/3] mm/swap: use swap_ops to register swap device's methods
` [PATCH v6 3/3] mm/swap_io.c: rename swap_writepage_* to swap_write_folio_*

[PATCH v6 0/7] locking: contended_release tracepoint instrumentation
 2026-05-13 15:43 UTC  (9+ messages)
` [PATCH v6 4/7] locking: Factor out queued_spin_release()
` [PATCH v6 5/7] locking: Add contended_release tracepoint to qspinlock
` [PATCH v6 6/7] locking: Factor out __queued_read_unlock()/__queued_write_unlock()
` [PATCH v6 7/7] locking: Add contended_release tracepoint to qrwlock

[PATCH v7 0/6] mm/memory-failure: add panic option for unrecoverable pages
 2026-05-13 15:39 UTC  (7+ messages)
` [PATCH v7 1/6] mm/memory-failure: drop dead error_states[] entry for reserved pages
` [PATCH v7 2/6] mm/memory-failure: surface unhandlable kernel pages as -ENOTRECOVERABLE
` [PATCH v7 3/6] mm/memory-failure: report MF_MSG_KERNEL for unrecoverable kernel pages
` [PATCH v7 4/6] mm/memory-failure: short-circuit PG_reserved before get_hwpoison_page()
` [PATCH v7 5/6] mm/memory-failure: add panic option for unrecoverable pages
` [PATCH v7 6/6] Documentation: document panic_on_unrecoverable_memory_failure sysctl

[PATCH 1/2] mm/page_alloc: add tracepoints for zone->lock acquisitions
 2026-05-13 15:32 UTC  (9+ messages)
` [PATCH 2/2] selftests/mm: add zone->lock tracepoint verification test

[GIT PULL] liveupdate updates for v7.1-rc4
 2026-05-13 15:28 UTC  (2+ messages)

[PATCH v6 0/4] mm/memory-failure: add panic option for unrecoverable pages
 2026-05-13 15:07 UTC  (6+ messages)
` [PATCH v6 2/4] mm/memory-failure: classify get_any_page() failures by reason

[PATCH 1/1] kho: fix KHO_TREE_MAX_DEPTH for non-4KB page sizes
 2026-05-13 15:07 UTC  (2+ messages)
