messages from 2013-03-19 16:44:24 to 2013-03-22 09:20:07 UTC
[PATCHv2, RFC 00/30] Transparent huge page cache
2013-03-22 9:21 UTC (27+ messages)
` [PATCHv2, RFC 01/30] block: implement add_bdi_stat()
` [PATCHv2, RFC 02/30] mm: implement zero_huge_user_segment and friends
` [PATCHv2, RFC 03/30] mm: drop actor argument of do_generic_file_read()
` [PATCHv2, RFC 04/30] radix-tree: implement preload for multiple contiguous elements
` [PATCHv2, RFC 05/30] thp, mm: avoid PageUnevictable on active/inactive lru lists
` [PATCHv2, RFC 07/30] thp, mm: introduce mapping_can_have_hugepages() predicate
` [PATCHv2, RFC 08/30] thp, mm: rewrite add_to_page_cache_locked() to support huge pages
` [PATCHv2, RFC 10/30] thp, mm: locking tail page is a bug
` [PATCHv2, RFC 12/30] thp, mm: add event counters for huge page alloc on write to a file
` [PATCHv2, RFC 13/30] thp, mm: implement grab_cache_huge_page_write_begin()
[RFC PATCH 0/8] Reduce system disruption due to kswapd
2013-03-22 8:37 UTC (62+ messages)
` [PATCH 01/10] mm: vmscan: Limit the number of pages kswapd reclaims at each priority
` [PATCH 02/10] mm: vmscan: Obey proportional scanning requirements for kswapd
` [PATCH 03/10] mm: vmscan: Flatten kswapd priority loop
` [PATCH 04/10] mm: vmscan: Decide whether to compact the pgdat based on reclaim progress
` [PATCH 05/10] mm: vmscan: Do not allow kswapd to scan at maximum priority
` [PATCH 06/10] mm: vmscan: Have kswapd writeback pages based on dirty pages encountered, not priority
` [PATCH 07/10] mm: vmscan: Block kswapd if it is encountering pages under writeback
` [PATCH 07/10 -v2r1] "
` [PATCH 08/10] mm: vmscan: Have kswapd shrink slab only once per priority
` [PATCH 09/10] mm: vmscan: Check if kswapd should writepage "
` [PATCH 10/10] mm: vmscan: Move logic from balance_pgdat() to kswapd_shrink_zone()
[PATCH] memcg: fix memcg_cache_name() to use cgroup_name()
2013-03-22 8:22 UTC (7+ messages)
[patch] mm: speedup in __early_pfn_to_nid
2013-03-22 7:25 UTC (7+ messages)
[PATCH v3] memcg: Add memory.pressure_level events
2013-03-22 7:13 UTC
[RFC v7 00/11] Support vrange for anonymous page
2013-03-22 6:01 UTC (3+ messages)
BUG at kmem_cache_alloc
2013-03-22 4:18 UTC
[PATCH] mm/hotplug: use -EPERM instead of -1 for return value in online_pages()
2013-03-22 3:56 UTC
[RFC][PATCH 0/9] extend hugepage migration
2013-03-21 23:46 UTC (26+ messages)
` [PATCH 1/9] migrate: add migrate_entry_wait_huge()
` [PATCH 5/9] migrate: enable migrate_pages() to migrate hugepage
` [PATCH 8/9] memory-hotplug: enable memory hotplug to handle hugepage
[PATCH] USB: EHCI: fix for leaking isochronous data
2013-03-21 22:16 UTC (11+ messages)
[bugfix] mm: zone_end_pfn is too small
2013-03-21 11:00 UTC (2+ messages)
[RFC PATCH part2 0/4] Allow allocating pagetable on local node in movablemem_map
2013-03-21 9:21 UTC (5+ messages)
` [PATCH part2 1/4] x86, mm, numa, acpi: Introduce numa_meminfo_all to store all the numa meminfo
` [PATCH part2 2/4] x86, mm, numa, acpi: Introduce hotplug info into struct numa_meminfo
` [PATCH part2 3/4] x86, mm, numa, acpi: Consider hotplug info when cleanup numa_meminfo
` [PATCH part2 4/4] x86, mm, numa, acpi: Sanitize movablemem_map after memory mapping initialized
[RESEND PATCH part1 0/9] Introduce movablemem_map boot option
2013-03-21 9:20 UTC (10+ messages)
` [RESEND PATCH part1 1/9] x86: get pg_data_t's memory from other node
` [RESEND PATCH part1 2/9] acpi: Print hotplug info in SRAT
` [RESEND PATCH part1 3/9] x86, mm, numa, acpi: Add movable_memmap boot option
` [RESEND PATCH part1 4/9] x86, mm, numa, acpi: Introduce zone_movable_limit[] to store start pfn of ZONE_MOVABLE
` [RESEND PATCH part1 5/9] x86, mm, numa, acpi: Extend movablemem_map to the end of each node
` [RESEND PATCH part1 6/9] x86, mm, numa, acpi: Support getting hotplug info from SRAT
` [RESEND PATCH part1 7/9] x86, mm, numa, acpi: Sanitize zone_movable_limit[]
` [RESEND PATCH part1 8/9] x86, mm, numa, acpi: make movablemem_map have higher priority
` [RESEND PATCH part1 9/9] x86, mm, numa, acpi: Memblock limit with movablemem_map
[PATCH] mm: page_alloc: Avoid marking zones full prematurely after zone_reclaim()
2013-03-21 8:59 UTC (10+ messages)
[RFC PATCH -V2 00/21] THP support for PPC64
2013-03-21 8:17 UTC (2+ messages)
[PATCH, RFC 00/16] Transparent huge page cache
2013-03-21 8:00 UTC (2+ messages)
OOM triggered with plenty of memory free
2013-03-21 7:07 UTC (4+ messages)
[PATCH v2 0/5] bypass root memcg charges if no memcgs are possible
2013-03-21 6:08 UTC (20+ messages)
` [PATCH v2 2/5] memcg: provide root figures from system totals
` [PATCH v2 3/5] memcg: make it suck faster
kernel BUG at mm/huge_memory.c:1802!
2013-03-21 6:04 UTC (2+ messages)
[PATCH] mm/migrate: fix comment typo syncronous->synchronous
2013-03-21 4:16 UTC
[patch 1/4 v3]swap: change block allocation algorithm for SSD
2013-03-21 2:02 UTC (5+ messages)
[PATCH} mm: Merging memory blocks resets mempolicy
2013-03-20 22:02 UTC (2+ messages)
[patch 4/4 v3]swap: make cluster allocation per-cpu
2013-03-20 21:58 UTC (4+ messages)
[patch 3/4 v3]swap: make swap discard async
2013-03-20 20:51 UTC (4+ messages)
[PATCH 1/3] mm, nobootmem: fix wrong usage of max_low_pfn
2013-03-20 20:18 UTC (7+ messages)
[patch] mm, hugetlb: include hugepages in meminfo
2013-03-20 19:52 UTC (8+ messages)
` [patch v2] "
[patch 0/5] sparse-vmemmap: hotplug fixes & cleanups
2013-03-20 18:43 UTC (7+ messages)
` [patch 1/5] mm: Try harder to allocate vmemmap blocks
` [patch 2/5] sparse-vmemmap: specify vmemmap population range in bytes
` [patch 3/5] x86-64: remove dead debugging code for !pse setups
` [patch 4/5] x86-64: use vmemmap_populate_basepages() "
` [patch 5/5] x86-64: fall back to regular page vmemmap on allocation failure
trinity fuzz-tester mailing list
2013-03-20 16:37 UTC
[PATCH v4 0/8] staging: zcache: Support zero-filled pages more efficiently
2013-03-20 10:43 UTC (6+ messages)
` [PATCH v4 2/8] staging: zcache: zero-filled pages awareness
[PATCH v2 0/4] zcache: Support zero-filled pages more efficiently
2013-03-20 10:22 UTC (12+ messages)
` [PATCH v2 1/4] introduce zero filled pages handler
` [PATCH v2 3/4] introduce zero-filled page stat count
[RFC]about commit "[PATCH] Align the node_mem_map endpoints to a MAX_ORDER boundary"
2013-03-20 9:35 UTC
kswapd craziness round 2
2013-03-20 8:39 UTC (5+ messages)
[PATCH 0/6] memcg: bypass root memcg page stat accounting
2013-03-20 7:09 UTC (5+ messages)
` [PATCH 2/6] memcg: Don't account root memcg CACHE/RSS stats
` [PATCH 6/6] memcg: disable memcg page stat accounting
mmotm 2013-03-19-16-36 uploaded
2013-03-19 23:37 UTC
[PATCH V2 0/3] Drivers: hv: balloon
2013-03-19 21:39 UTC (4+ messages)
` [PATCH V2 1/3] mm: Export split_page()
[patch 2/4 v3]swap: __swap_duplicate check bad swap entry
2013-03-19 21:34 UTC