messages from 2011-05-13 08:51:58 to 2011-05-17 01:54:15 UTC
[PATCH 0/3] v4 Improve task->comm locking situation
2011-05-17 1:47 UTC (15+ messages)
` [PATCH 1/3] comm: Introduce comm_lock seqlock to protect task->comm access
` [PATCH 2/3] printk: Add %ptc to safely print a task's comm
` [PATCH 3/3] checkpatch.pl: Add check for task comm references
[PATCHSET v3.1 0/7] data integrity: Stabilize pages during writeback for various fses
2011-05-17 1:23 UTC (13+ messages)
[rfc patch 0/6] mm: memcg naturalization
2011-05-17 0:53 UTC (24+ messages)
` [rfc patch 1/6] memcg: remove unused retry signal from reclaim
` [rfc patch 2/6] vmscan: make distinction between memcg reclaim and LRU list selection
` [rfc patch 3/6] mm: memcg-aware global reclaim
` [rfc patch 4/6] memcg: reclaim statistics
` [rfc patch 5/6] memcg: remove global LRU list
[PATCH 0/4] Reduce impact to overall system of SLUB using high-order allocations V2
2011-05-17 0:48 UTC (27+ messages)
` [PATCH 1/4] mm: vmscan: Correct use of pgdat_balanced in sleeping_prematurely
` [PATCH 2/4] mm: slub: Do not wake kswapd for SLUBs speculative high-order allocations
` [PATCH 3/4] mm: slub: Do not take expensive steps "
` [PATCH 4/4] mm: vmscan: If kswapd has been running too long, allow it to sleep
[PATCH] memcg: fix typo in the soft_limit stats
2011-05-17 0:18 UTC (6+ messages)
[PATCH 0/2] Eliminate hangs when using frequent high-order allocations V3
2011-05-16 23:05 UTC (7+ messages)
` [PATCH 1/2] mm: vmscan: Correct use of pgdat_balanced in sleeping_prematurely
` [PATCH 2/2] mm: vmscan: If kswapd has been running too long, allow it to sleep
[PATCH 0/3] v3 Improve task->comm locking situation
2011-05-16 21:23 UTC (12+ messages)
` [PATCH 1/3] comm: Introduce comm_lock seqlock to protect task->comm access
` [PATCH 3/3] checkpatch.pl: Add check for current->comm references
[PATCH 0/3] Reduce impact to overall system of SLUB using high-order allocations
2011-05-16 21:03 UTC (44+ messages)
` [PATCH 3/3] mm: slub: Default slub_max_order to 0
[PATCHSET v3.1 0/7] data integrity: Stabilize pages during writeback for various fses
2011-05-16 20:55 UTC (10+ messages)
OOM Killer doesn't work at all if the system has >gigabytes memory (was Re: [PATCH] mm: check zone->all_unreclaimable in all_unreclaimable())
2011-05-16 20:46 UTC (2+ messages)
OOM Killer doesn't work at all if the system has >gigabytes memory (was Re: [PATCH] mm: check zone->all_unreclaimable in all_unreclaimable())
2011-05-16 20:42 UTC (7+ messages)
[PATCH 1/4] mm: Remove dependency on CONFIG_FLATMEM from online_page()
2011-05-16 20:32 UTC (6+ messages)
[slubllv5 00/25] SLUB: Lockless freelists for objects V5
2011-05-16 20:26 UTC (26+ messages)
` [slubllv5 01/25] slub: Avoid warning for !CONFIG_SLUB_DEBUG
` [slubllv5 02/25] slub: Fix control flow in slab_alloc
` [slubllv5 03/25] slub: Make CONFIG_PAGE_ALLOC work with new fastpath
` [slubllv5 04/25] slub: Push irq disable into allocate_slab()
` [slubllv5 05/25] slub: Do not use frozen page flag but a bit in the page counters
` [slubllv5 06/25] slub: Move page->frozen handling near where the page->freelist handling occurs
` [slubllv5 07/25] x86: Add support for cmpxchg_double
` [slubllv5 08/25] mm: Rearrange struct page
` [slubllv5 09/25] slub: Add cmpxchg_double_slab()
` [slubllv5 10/25] slub: explicit list_lock taking
` [slubllv5 11/25] slub: Pass kmem_cache struct to lock and freeze slab
` [slubllv5 12/25] slub: Rework allocator fastpaths
` [slubllv5 13/25] slub: Invert locking and avoid slab lock
` [slubllv5 14/25] slub: Disable interrupts in free_debug processing
` [slubllv5 15/25] slub: Avoid disabling interrupts in free slowpath
` [slubllv5 16/25] slub: Get rid of the another_slab label
` [slubllv5 17/25] slub: Add statistics for the case that the current slab does not match the node
` [slubllv5 18/25] slub: fast release on full slab
` [slubllv5 19/25] slub: Not necessary to check for empty slab on load_freelist
` [slubllv5 20/25] slub: slabinfo update for cmpxchg handling
` [slubllv5 21/25] slub: Prepare inuse field in new_slab()
` [slubllv5 22/25] slub: pass kmem_cache_cpu pointer to get_partial()
` [slubllv5 23/25] slub: return object pointer from get_partial() / new_slab()
` [slubllv5 24/25] slub: Remove gotos from __slab_free()
` [slubllv5 25/25] slub: Remove gotos from __slab_alloc()
Kernel falls apart under light memory pressure (i.e. linking vmlinux)
2011-05-16 8:51 UTC (10+ messages)
[PATCH 0/3] swap token revisit
2011-05-16 8:22 UTC (10+ messages)
` [PATCH 1/3] vmscan,memcg: memcg aware swap token
` [PATCH 2/3] vmscan: implement swap token trace
` [PATCH 3/3] vmscan: implement swap token priority decay
Possible sandybridge livelock issue
2011-05-16 6:52 UTC (7+ messages)
[RFC][PATCH v7 00/14] memcg: per cgroup dirty page accounting
2011-05-16 5:58 UTC (24+ messages)
` [RFC][PATCH v7 03/14] memcg: add mem_cgroup_mark_inode_dirty()
` [RFC][PATCH v7 04/14] memcg: add dirty page accounting infrastructure
` [RFC][PATCH v7 08/14] writeback: add memcg fields to writeback_control
` [RFC][PATCH v7 09/14] cgroup: move CSS_ID_MAX to cgroup.h
` [RFC][PATCH v7 10/14] memcg: dirty page accounting support routines
` [RFC][PATCH v7 11/14] memcg: create support routines for writeback
` [RFC][PATCH v7 12/14] memcg: create support routines for page-writeback
` [RFC][PATCH v7 13/14] writeback: make background writeback cgroup aware
` [RFC][PATCH v7 14/14] memcg: check memcg dirty limits in page writeback
[PATCH v2 0/9] avoid allocation in show_numa_map()
2011-05-15 22:20 UTC (10+ messages)
` [PATCH v2 1/9] mm: export get_vma_policy()
` [PATCH v2 2/9] mm: use walk_page_range() instead of custom page table walking code
` [PATCH v2 3/9] mm: remove MPOL_MF_STATS
` [PATCH v2 4/9] mm: make gather_stats() type-safe and remove forward declaration
` [PATCH v2 5/9] mm: remove check_huge_range()
` [PATCH v2 6/9] mm: declare mpol_to_str() when CONFIG_TMPFS=n
` [PATCH v2 7/9] mm: proc: move show_numa_map() to fs/proc/task_mmu.c
` [PATCH v2 8/9] proc: make struct proc_maps_private truly private
` [PATCH v2 9/9] proc: allocate storage for numa_maps statistics once
[PATCH] tmpfs: fix race between swapoff and writepage
2011-05-14 19:06 UTC
[RFC][PATCH 0/7] memcg async reclaim
2011-05-14 0:29 UTC (11+ messages)
Batch locking for rmap fork/exit processing v2
2011-05-13 23:46 UTC (5+ messages)
` [PATCH 1/4] VM/RMAP: Add infrastructure for batching the rmap chain locking v2
` [PATCH 2/4] VM/RMAP: Batch anon vma chain root locking in fork
` [PATCH 3/4] VM/RMAP: Batch anon_vma_unlink in exit
` [PATCH 4/4] VM/RMAP: Move avc freeing outside the lock
[RFC][PATCH 0/3] v2 Improve task->comm locking situation
2011-05-13 21:56 UTC (8+ messages)
` [PATCH 2/3] printk: Add %ptc to safely print a task's comm
Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch
2011-05-13 21:31 UTC
[Slub cleanup6 0/5] SLUB: Cleanups V6
2011-05-13 14:49 UTC (6+ messages)
` [Slub cleanup6 4/5] slub: Move node determination out of hotpath
` [patch] slub: avoid label inside conditional
slub: Add statistics for this_cmpxchg_double failures
2011-05-13 14:48 UTC (5+ messages)
[PATCH] mm: check zone->all_unreclaimable in all_unreclaimable()
2011-05-13 10:30 UTC (13+ messages)
` OOM Killer doesn't work at all if the system has >gigabytes memory (was Re: [PATCH] mm: check zone->all_unreclaimable in all_unreclaimable())
` [PATCH 1/4] oom: improve dump_tasks() show items
` [PATCH 2/4] oom: kill younger process first
` [PATCH 3/4] oom: oom-killer don't use permillage of system-ram internally