messages from 2011-04-15 01:47:10 to 2011-04-18 14:08:10 UTC
[PATCH 00/12] Swap-over-NBD without deadlocking v1
2011-04-18 14:08 UTC (9+ messages)
` [PATCH 08/12] netvm: Allow skb allocation to use PFMEMALLOC reserves
` [PATCH 12/12] mm: Throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage
[PATCH 0/3] convert mm->cpu_vm_mask into cpumask_var_t
2011-04-18 13:08 UTC (5+ messages)
` [PATCH 1/3] mn10300: replace mm->cpu_vm_mask with mm_cpumask
` [PATCH 2/3] tile: replace mm->cpu_vm_mask with mm_cpumask()
` [PATCH 3/3] mm: convert mm->cpu_vm_mask into cpumask_var_t
[PATCH 1/1] Add check for dirty_writeback_interval in bdi_wakeup_thread_delayed
2011-04-18 12:26 UTC (6+ messages)
` [TOME] "
[PATCH v3 2.6.39-rc1-tip 0/26] 0: Uprobes patchset with perf probe support
2011-04-18 12:20 UTC (7+ messages)
` [PATCH v3 2.6.39-rc1-tip 4/26] 4: uprobes: Background page replacement
` [PATCH v3 2.6.39-rc1-tip 5/26] 5: uprobes: Adding and removing a uprobe in an rb tree
` [PATCH v3 2.6.39-rc1-tip 6/26] 6: Uprobes: register/unregister probes
[PATCH] mm: Check if PTE is already allocated during page fault
2011-04-18 10:23 UTC (6+ messages)
[PATCH 00/12] IO-less dirty throttling v7
2011-04-18 10:22 UTC (27+ messages)
` [PATCH 01/12] writeback: account per-bdi accumulated written pages
` [PATCH 02/12] writeback: account per-bdi accumulated dirtied pages
` [PATCH 03/12] writeback: bdi write bandwidth estimation
` [PATCH 04/12] writeback: smoothed global/bdi dirty pages
` [PATCH 05/12] writeback: smoothed dirty threshold and limit
` [PATCH 06/12] writeback: enforce 1/4 gap between the dirty/background thresholds
` [PATCH 07/12] writeback: base throttle bandwidth and position ratio
` [PATCH 08/12] writeback: IO-less balance_dirty_pages()
` [PATCH 09/12] writeback: show bdi write bandwidth in debugfs
` [PATCH 10/12] writeback: trace dirty_ratelimit
` [PATCH 11/12] writeback: trace balance_dirty_pages
` [PATCH 12/12] writeback: trace global_dirty_state
[PATCH] mm: make expand_downwards symmetrical to expand_upwards
2011-04-18 10:01 UTC (3+ messages)
` [PATCH v2] "
[PATCH V4 00/10] memcg: per cgroup background reclaim
2011-04-18 9:13 UTC (30+ messages)
` [PATCH V4 01/10] Add kswapd descriptor
` [PATCH V4 02/10] Add per memcg reclaim watermarks
` [PATCH V4 03/10] New APIs to adjust per-memcg wmarks
` [PATCH V4 04/10] Infrastructure to support per-memcg reclaim
` [PATCH V4 05/10] Implement the select_victim_node within memcg
` [PATCH V4 06/10] Per-memcg background reclaim
` [PATCH V4 09/10] Add API to export per-memcg kswapd pid
[PATCH] xen: cleancache shim to Xen Transcendent Memory
2011-04-18 8:47 UTC (4+ messages)
[PATCH v2] cpusets: randomize node rotor used in cpuset_mem_spread_node()
2011-04-18 8:42 UTC (5+ messages)
` [PATCH incremental] cpusets: initialize spread rotor lazily
[PATCH V8 4/8] mm/fs: add hooks to support cleancache
2011-04-18 5:32 UTC (7+ messages)
[PATCH V5 00/10] memcg: per cgroup background reclaim
2011-04-18 5:01 UTC (17+ messages)
` [PATCH V5 01/10] Add kswapd descriptor
` [PATCH V5 02/10] Add per memcg reclaim watermarks
` [PATCH V5 03/10] New APIs to adjust per-memcg wmarks
` [PATCH V5 04/10] Infrastructure to support per-memcg reclaim
` [PATCH V5 05/10] Implement the select_victim_node within memcg
` [PATCH V5 06/10] Per-memcg background reclaim
` [PATCH V5 07/10] Add per-memcg zone "unreclaimable"
` [PATCH V5 08/10] Enable per-memcg background reclaim
` [PATCH V5 09/10] Add API to export per-memcg kswapd pid
` [PATCH V5 10/10] Add some per-memcg stats
mm: convert vma->vm_flags to 64bit
2011-04-18 3:34 UTC (5+ messages)
[PATCH] mmap: avoid unnecessary anon_vma lock
2011-04-18 3:05 UTC (2+ messages)
[Slub cleanup6 0/5] SLUB: Cleanups V6
2011-04-17 11:05 UTC (7+ messages)
` [Slub cleanup6 1/5] slub: Use NUMA_NO_NODE in get_partial
` [Slub cleanup6 2/5] slub: get_map() function to establish map of free objects in a slab
` [Slub cleanup6 3/5] slub: Eliminate repeated use of c->page through a new page variable
` [Slub cleanup6 4/5] slub: Move node determination out of hotpath
` [Slub cleanup6 5/5] slub: Move debug handling in __slab_free
[PATCH 0/4] trivial writeback fixes
2011-04-17 2:11 UTC (15+ messages)
` [PATCH 4/4] writeback: reduce per-bdi dirty threshold ramp up time
[PATCH 1/2] break out page allocation warning code
2011-04-17 0:03 UTC (6+ messages)
` [PATCH 2/2] print vmalloc() state after allocation failures
[PATCH 0/1] mm: make read-only accessors take const pointer parameters
2011-04-16 23:48 UTC (10+ messages)
` [PATCH] mm: make read-only accessors take const parameters
[RFC][PATCH 0/3] track pte pages and use in OOM score
2011-04-16 9:44 UTC (5+ messages)
` [RFC][PATCH 1/3] pass mm in to pgtable ctor/dtor
` [RFC][PATCH 2/3] track numbers of pagetable pages
` [RFC][PATCH 3/3] use pte pages in OOM score
[0/7,v10] NUMA Hotplug Emulator (v10)
2011-04-16 2:32 UTC (4+ messages)
[patch] oom: replace PF_OOM_ORIGIN with toggling oom_score_adj
2011-04-16 1:48 UTC (7+ messages)
` [patch v2] "
` [patch v3] "
percpu: preemptless __per_cpu_counter_add
2011-04-15 23:52 UTC (11+ messages)
` [PATCH] "
[PATCH] mempolicy: reduce references to the current
2011-04-15 23:35 UTC (5+ messages)
` [PATCH v2] "
[slubllv3 00/21] SLUB: Lockless freelists for objects V3
2011-04-15 20:13 UTC (22+ messages)
` [slubllv3 01/21] slub: Use NUMA_NO_NODE in get_partial
` [slubllv3 02/21] slub: get_map() function to establish map of free objects in a slab
` [slubllv3 03/21] slub: Eliminate repeated use of c->page through a new page variable
` [slubllv3 04/21] slub: Move node determination out of hotpath
` [slubllv3 05/21] slub: Move debug handling in __slab_free
` [slubllv3 06/21] slub: Per object NUMA support
` [slubllv3 07/21] slub: Do not use frozen page flag but a bit in the page counters
` [slubllv3 08/21] slub: Move page->frozen handling near where the page->freelist handling occurs
` [slubllv3 09/21] x86: Add support for cmpxchg_double
` [slubllv3 10/21] mm: Rearrange struct page
` [slubllv3 11/21] slub: Add cmpxchg_double_slab()
` [slubllv3 12/21] slub: explicit list_lock taking
` [slubllv3 13/21] slub: Pass kmem_cache struct to lock and freeze slab
` [slubllv3 14/21] slub: Rework allocator fastpaths
` [slubllv3 15/21] slub: Invert locking and avoid slab lock
` [slubllv3 16/21] slub: Disable interrupts in free_debug processing
` [slubllv3 17/21] slub: Avoid disabling interrupts in free slowpath
` [slubllv3 18/21] slub: Get rid of the another_slab label
` [slubllv3 19/21] slub: fast release on full slab
` [slubllv3 20/21] slub: Not necessary to check for empty slab on load_freelist
` [slubllv3 21/21] slub: update statistics for cmpxchg handling
[PATCH V8 1/8] mm/fs: cleancache documentation
2011-04-15 20:06 UTC (4+ messages)
BUILD_BUG_ON() breaks sparse gfp_t checks
2011-04-15 19:16 UTC (5+ messages)
` [PATCH] make new gfp.h BUG_ON() in to VM_BUG_ON()
[PATCH] make sparse happy with gfp.h
2011-04-15 14:27 UTC (8+ messages)
` [PATCH] fix sparse happy borkage when including gfp.h
` [PATCH] define dummy BUILD_BUG_ON definition for sparse
` [PATCH] define __must_be_array() for __CHECKER__
` [PATCH] Undef __compiletime_{warning,error} if __CHECKER__ is defined
Regression from 2.6.36
2011-04-15 14:15 UTC (19+ messages)
[PATCH] shmem: factor out remove_indirect_page()
2011-04-15 4:01 UTC (3+ messages)
[PATCH V8 4/8] mm/fs: add hooks to support cleancache
2011-04-14 21:17 UTC