messages from 2022-03-02 00:51:00 to 2022-03-28 03:59:46 UTC
[RFC v4 0/8] Proposal for a GPU cgroup controller
2022-03-28 3:59 UTC (8+ messages)
` [RFC v4 1/8] gpu: rfc: "
` [RFC v4 2/8] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory
` [RFC v4 3/8] dmabuf: Use the GPU cgroup charge/uncharge APIs
` [RFC v4 4/8] dmabuf: heaps: export system_heap buffers with GPU cgroup charging
` [RFC v4 5/8] dmabuf: Add gpu cgroup charge transfer function
` [RFC v4 6/8] binder: Add a buffer flag to relinquish ownership of fds
` [RFC v4 7/8] binder: use __kernel_pid_t and __kernel_uid_t for userspace
[RFC PATCH] cgroup: introduce proportional protection on memcg
2022-03-25 12:49 UTC (7+ messages)
[RFC PATCH] mm: memcg: Do not count memory.low reclaim if it does not happen
2022-03-25 10:31 UTC (5+ messages)
[PATCH -next 00/11] support concurrent sync io for bfq on a special occasion
2022-03-25 7:30 UTC (17+ messages)
` [PATCH -next 01/11] block, bfq: add new apis to iterate bfq entities
` [PATCH -next 02/11] block, bfq: apply new apis where root group is not expected
` [PATCH -next 03/11] block, bfq: cleanup for __bfq_activate_requeue_entity()
` [PATCH -next 04/11] block, bfq: move the increment of 'num_groups_with_pending_reqs' to its caller
` [PATCH -next 05/11] block, bfq: count root group into 'num_groups_with_pending_reqs'
` [PATCH -next 06/11] block, bfq: do not idle if only one cgroup is activated
` [PATCH -next 07/11] block, bfq: only count parent bfqg when bfqq is activated
` [PATCH -next 08/11] block, bfq: record how many queues have pending requests in bfq_group
` [PATCH -next 09/11] block, bfq: move forward __bfq_weights_tree_remove()
` [PATCH -next 10/11] block, bfq: decrease 'num_groups_with_pending_reqs' earlier
` [PATCH -next 11/11] block, bfq: cleanup bfqq_group()
[RFC PATCH v2 0/4] Introduce group balancer
2022-03-24 6:50 UTC (11+ messages)
` [RFC PATCH v2 1/4] sched, cpuset: Introduce infrastructure of "
` [RFC PATCH v2 2/4] cpuset: Handle input of partition info for "
` [RFC PATCH v2 3/4] sched: Introduce "
` [RFC PATCH v2 4/4] cpuset, gb: Add stat for "
[RFC PATCH 0/5] Split a huge page to any lower order pages
2022-03-24 2:02 UTC (24+ messages)
` [RFC PATCH 1/5] mm: memcg: make memcg huge page split support any order split
` [RFC PATCH 2/5] mm: page_owner: add support for splitting to any order in split page_owner
` [RFC PATCH 3/5] mm: thp: split huge page to any lower order pages
` [RFC PATCH 4/5] mm: truncate: split huge page cache page to a non-zero order if possible
` [mm] 2757cee2d6: UBSAN:shift-out-of-bounds_in_include/linux/log2.h
` [RFC PATCH 5/5] mm: huge_memory: enable debugfs to split huge pages to any order
[RFC v3 5/8] dmabuf: Add gpu cgroup charge transfer function
2022-03-23 23:37 UTC (2+ messages)
[GIT PULL] cgroup changes for v5.18-rc1
2022-03-23 19:50 UTC (2+ messages)
[RFC v3 0/8] Proposal for a GPU cgroup controller
2022-03-23 10:40 UTC (24+ messages)
` [RFC v3 1/8] gpu: rfc: "
` [RFC v3 2/8] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory
` [RFC v3 3/8] dmabuf: Use the GPU cgroup charge/uncharge APIs
` [RFC v3 4/8] dmabuf: heaps: export system_heap buffers with GPU cgroup charging
` [RFC v3 5/8] dmabuf: Add gpu cgroup charge transfer function
` [RFC v3 6/8] binder: Add a buffer flag to relinquish ownership of fds
` [RFC v3 7/8] binder: use __kernel_pid_t and __kernel_uid_t for userspace
` [RFC v3 8/8] selftests: Add binder cgroup gpu memory transfer test
[RFC bpf-next] Hierarchical Cgroup Stats Collection Using BPF
2022-03-22 22:06 UTC (10+ messages)
[RFC] memcg: Convert mc_target.page to mc_target.folio
2022-03-18 9:12 UTC (4+ messages)
[PATCH v9] block: cancel all throttled bios in del_gendisk()
2022-03-18 7:04 UTC (9+ messages)
Split process across multiple schedulers?
2022-03-17 9:30 UTC (11+ messages)
` [EXTERNAL] "
[PATCH] memcg: sync flush only if periodic flush is delayed
2022-03-16 16:26 UTC (9+ messages)
[PATCH -next] block: Add parameter description in kernel-doc comment
2022-03-16 12:53 UTC
[Patch v2 1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node()
2022-03-15 23:54 UTC (12+ messages)
` [Patch v2 2/3] mm/memcg: __mem_cgroup_remove_exceeded could handle a !on-tree mz properly
` [Patch v2 3/3] mm/memcg: add next_mz back to soft limit tree if not reclaimed yet
[Patch v3] mm/memcg: mz already removed from rb_tree if not NULL
2022-03-14 23:30 UTC
[tj-cgroup:for-5.18] BUILD SUCCESS f9da322e864e5cd3dc217480e73f78f47cf40c5b
2022-03-14 18:07 UTC
[tj-cgroup:for-next] BUILD SUCCESS 1be9b7206b7dbff54b223eee7ef3bc91b80433aa
2022-03-14 18:07 UTC
[PATCH] cgroup: cleanup comments
2022-03-14 5:20 UTC (2+ messages)
[syzbot] memory leak in blk_iolatency_init (2)
2022-03-13 10:10 UTC (2+ messages)
[PATCH linux-next] cgroup: fix suspicious rcu_dereference_check() usage warning
2022-03-12 15:54 UTC (6+ messages)
` [External] "
[PATCH 1/3] mm/memcg: mz already removed from rb_tree in mem_cgroup_largest_soft_limit_node()
2022-03-10 23:57 UTC (11+ messages)
` [PATCH 2/3] mm/memcg: __mem_cgroup_remove_exceeded could handle a !on-tree mz properly
` [PATCH 3/3] mm/memcg: add next_mz back if not reclaimed yet
[PATCH 0/3] mm: vmalloc: introduce array allocation functions
2022-03-10 4:18 UTC (13+ messages)
` [PATCH 1/3] "
` [PATCH 2/3] mm: use vmalloc_array and vcalloc for array allocations
` [PATCH 3/3] KVM: use vcalloc/__vcalloc for very large allocations
WARNING: suspicious RCU usage since next-20220304
2022-03-08 2:30 UTC (2+ messages)
` [External] "
[syzbot] linux-next boot error: WARNING: suspicious RCU usage in cpuacct_charge
2022-03-04 10:41 UTC
[PATCH 1/2] mm/memcontrol: return 1 from cgroup.memory __setup() handler
2022-03-03 22:53 UTC (7+ messages)
[PATCH 1/2] cgroup: Use irqsave in cgroup_rstat_flush_locked()
2022-03-02 15:47 UTC (4+ messages)
` [PATCH] cgroup: Add a comment to cgroup_rstat_flush_locked()
[PATCH -next v2] blk-throttle: Set BIO_THROTTLED when bio has been throttled
2022-03-02 13:51 UTC (2+ messages)