messages from 2023-03-14 10:00:50 to 2023-03-28 01:15:11 UTC
move bio cgroup punting into btrfs
2023-03-28 1:15 UTC (12+ messages)
` [PATCH 1/7] btrfs: move kthread_associate_blkcg out of btrfs_submit_compressed_write
` [PATCH 2/7] btrfs: don't free the async_extent in submit_uncompressed_range
` [PATCH 5/7] btrfs, block: move REQ_CGROUP_PUNT to btrfs
` [PATCH 7/7] block: make blkcg_punt_bio_submit optional
` [PATCH 3/7] btrfs: also use kthread_associate_blkcg for uncompressible ranges
` [PATCH 4/7] btrfs, mm: remove the punt_to_cgroup field in struct writeback_control
` [PATCH 6/7] block: async_bio_lock does not need to be bh-safe
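  Background for the series above: REQ_CGROUP_PUNT let the block layer bounce
  bio submission to a per-blkcg helper thread, and btrfs was its only user;
  the series moves that punting into btrfs itself, built around
  kthread_associate_blkcg(). A schematic kernel-side fragment of the pattern
  (in-kernel code, not a standalone program; the helper name is made up for
  illustration):

      #include <linux/bio.h>
      #include <linux/kthread.h>

      /*
       * Instead of tagging the bio with REQ_CGROUP_PUNT and letting the
       * block layer re-dispatch it, the submitting worker temporarily
       * adopts the originator's blkcg so the I/O is charged to the right
       * cgroup for writeback throttling.
       */
      static void submit_bio_for_blkcg(struct bio *bio,
                                       struct cgroup_subsys_state *blkcg_css)
      {
              if (blkcg_css)
                      kthread_associate_blkcg(blkcg_css);
              submit_bio(bio);
              if (blkcg_css)
                      kthread_associate_blkcg(NULL);
      }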
[RFC PATCH 0/7] Make rstat flushing IRQ and sleep friendly
2023-03-27 23:23 UTC (50+ messages)
` [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the percpu lock
` [RFC PATCH 6/7] workingset: memcg: sleep when flushing stats in workingset_refault()
` [RFC PATCH 2/7] memcg: do not disable interrupts when holding stats_flush_lock
` [RFC PATCH 3/7] cgroup: rstat: remove cgroup_rstat_flush_irqsafe()
` [RFC PATCH 4/7] memcg: sleep during flushing stats in safe contexts
` [RFC PATCH 5/7] vmscan: memcg: sleep when flushing stats during reclaim
` [RFC PATCH 7/7] memcg: do not modify rstat tree for zero updates
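  Background: an rstat flush is triggered from paths like reading cgroup v2's
  memory.stat, and before this series the flush ran under a global lock with
  interrupts disabled; the RFC lets callers in safe contexts sleep instead.
  A trivial userspace reader that exercises one such flush path (flushes are
  rate-limited, so not every read flushes), assuming cgroup2 is mounted at
  /sys/fs/cgroup:

      #include <stdio.h>

      int main(void)
      {
              char line[256];
              FILE *f = fopen("/sys/fs/cgroup/memory.stat", "r");

              if (!f) {
                      perror("memory.stat");
                      return 1;
              }
              /* Each line is "<stat name> <value>". */
              while (fgets(line, sizeof(line), f))
                      fputs(line, stdout);
              fclose(f);
              return 0;
      }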
[RFC] memcg v1: provide read access to memory.pressure_level
2023-03-27 20:40 UTC (6+ messages)
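  Background: memory.pressure_level is v1-only and today is consumed
  exclusively through the cgroup.event_control eventfd mechanism; the RFC
  proposes making the file itself readable. A minimal sketch of the existing
  registration flow, assuming a v1 memory cgroup at /sys/fs/cgroup/memory/demo:

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/eventfd.h>
      #include <unistd.h>

      #define CG "/sys/fs/cgroup/memory/demo"

      int main(void)
      {
              char cmd[64];
              uint64_t count;
              int efd = eventfd(0, 0);
              int pfd = open(CG "/memory.pressure_level", O_RDONLY);
              int cfd = open(CG "/cgroup.event_control", O_WRONLY);

              if (efd < 0 || pfd < 0 || cfd < 0) {
                      perror("setup");
                      return 1;
              }
              /* Ask for notifications at the "low" pressure level. */
              snprintf(cmd, sizeof(cmd), "%d %d low", efd, pfd);
              if (write(cfd, cmd, strlen(cmd)) < 0) {
                      perror("event_control");
                      return 1;
              }
              /* Blocks until the kernel signals memory pressure. */
              if (read(efd, &count, sizeof(count)) == sizeof(count))
                      printf("pressure events: %llu\n",
                             (unsigned long long)count);
              return 0;
      }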
[PATCH] selftests: cgroup: Fix exception handling in test_memcg_oom_group_score_events()
2023-03-27 9:13 UTC (6+ messages)
[PATCH 0/1] Fix vmstat_percpu incorrect subtraction after reparent
2023-03-27 1:29 UTC (5+ messages)
` [PATCH 1/1] mm: memcontrol: fix vmstats_percpu state incorrect subtraction after reparent
[PATCH 0/5] cgroup/cpuset: Miscellaneous updates
2023-03-25 22:08 UTC (20+ messages)
` [PATCH 1/5] cgroup/cpuset: Skip task update if hotplug doesn't affect current cpuset
` [PATCH 2/5] cgroup/cpuset: Include offline CPUs when tasks' cpumasks in top_cpuset are updated
` [PATCH 3/5] cgroup/cpuset: Find another usable CPU if none found in current cpuset
[RFC v4 00/10] DRM scheduling cgroup controller
2023-03-25 1:43 UTC (16+ messages)
` [RFC 01/10] drm: Track clients by tgid and not tid
` [RFC 02/10] drm: Update file owner during use
` [RFC 03/10] cgroup: Add the DRM cgroup controller
` [RFC 04/10] drm/cgroup: Track DRM clients per cgroup
` [RFC 05/10] drm/cgroup: Add ability to query drm cgroup GPU time
` [RFC 06/10] drm/cgroup: Add over budget signalling callback
` [RFC 07/10] drm/cgroup: Only track clients which are providing drm_cgroup_ops
` [RFC 08/10] cgroup/drm: Introduce weight based drm cgroup control
` [RFC 09/10] drm/i915: Wire up with drm controller GPU time query
` [RFC 10/10] drm/i915: Implement cgroup controller over budget throttling
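  Background: the RFC adds a cgroup controller that arbitrates GPU time
  between DRM clients, with patch 08/10 introducing a weight-based knob in
  the style of cpu.weight. Nothing here is upstream, so the sketch below is
  purely illustrative and the "drm.weight" file name is hypothetical:

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
              /* "drm.weight" is a guess at the RFC's knob; check the series. */
              int fd = open("/sys/fs/cgroup/clients/drm.weight", O_WRONLY);

              if (fd < 0) {
                      perror("drm.weight");
                      return 1;
              }
              /* cgroup v2 weights conventionally span 1..10000, default 100. */
              dprintf(fd, "200\n");
              close(fd);
              return 0;
      }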
[PATCH 0/5] Split a folio to any lower order folios
2023-03-24 15:22 UTC (11+ messages)
` [PATCH 3/5] mm: thp: split huge page to any lower order pages
` [PATCH 1/5] mm: memcg: make memcg huge page split support any order split
` [PATCH 2/5] mm: page_owner: add support for splitting to any order in split page_owner
` [PATCH 4/5] mm: truncate: split huge page cache page to a non-zero order if possible
` [PATCH 5/5] mm: huge_memory: enable debugfs to split huge pages to any order
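  Background: patch 5/5 builds on the existing debugfs file
  /sys/kernel/debug/split_huge_pages, which already accepts a
  "<pid>,<vaddr_start>,<vaddr_end>" triple to split mapped huge pages; the
  series presumably grows an extra order field (an assumption here; check the
  patch for the exact format). Driving the existing interface from C, with a
  hypothetical pid and address range:

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      /*
       * Split huge pages mapped in [start, end) of process <pid>.
       * Requires root, debugfs mounted, and a THP-enabled kernel.
       */
      static int split_range(int pid, unsigned long start, unsigned long end)
      {
              int fd = open("/sys/kernel/debug/split_huge_pages", O_WRONLY);

              if (fd < 0)
                      return -1;
              dprintf(fd, "%d,0x%lx,0x%lx", pid, start, end);
              close(fd);
              return 0;
      }

      int main(void)
      {
              return split_range(1234 /* hypothetical pid */,
                                 0x700000000000UL, 0x700000200000UL) ? 1 : 0;
      }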
[RFC PATCH 0/2] sched/cpuset: Fix DL BW accounting in case can_attach() fails
2023-03-24 14:56 UTC (6+ messages)
` [RFC PATCH 1/2] sched/deadline: Create DL BW alloc, free & check overflow interface
` [RFC PATCH 2/2] cgroup/cpuset: Free DL BW in case can_attach() fails
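  Background: "DL BW" is the CPU bandwidth a SCHED_DEADLINE task reserves as
  runtime/period, and the series fixes its accounting when a cpuset migration
  fails in can_attach(). For context, a minimal reservation via the raw
  sched_setattr(2) syscall (glibc has no wrapper), requesting 10 ms of runtime
  every 100 ms:

      #define _GNU_SOURCE
      #include <linux/types.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      #define SCHED_DEADLINE 6

      struct sched_attr {
              __u32 size;
              __u32 sched_policy;
              __u64 sched_flags;
              __s32 sched_nice;
              __u32 sched_priority;
              __u64 sched_runtime;    /* ns */
              __u64 sched_deadline;   /* ns */
              __u64 sched_period;     /* ns */
      };

      int main(void)
      {
              struct sched_attr attr;

              memset(&attr, 0, sizeof(attr));
              attr.size = sizeof(attr);
              attr.sched_policy = SCHED_DEADLINE;
              attr.sched_runtime = 10 * 1000 * 1000;   /* 10 ms */
              attr.sched_deadline = 100 * 1000 * 1000; /* 100 ms */
              attr.sched_period = 100 * 1000 * 1000;

              /* Reserves 10% of one CPU for this task; needs privilege. */
              if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
                      perror("sched_setattr");
                      return 1;
              }
              pause();
              return 0;
      }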
[tj-cgroup:for-6.4] BUILD SUCCESS 8e4645226b4931e96d55546a1fb3863aa50b5e62
2023-03-24 14:48 UTC
[tj-cgroup:for-next] BUILD SUCCESS 70a0eb104712a7a657e6869aa69bec6417d4877f
2023-03-24 14:47 UTC
[PATCH] cpuset: Remove unused cpuset_node_allowed
2023-03-24 2:03 UTC (6+ messages)
` [PATCH] cpuset: Clean up cpuset_node_allowed
[RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration
2023-03-22 14:05 UTC (16+ messages)
` [RFC PATCH 1/3] sched/cpuset: Bring back cpuset_mutex
` [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
` [RFC PATCH 3/3] cgroup/cpuset: Iterate only if DEADLINE tasks are present
[PATCH] memcg: page_cgroup_ino() get memcg from compound_head(page)
2023-03-22 6:52 UTC (23+ messages)
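  Background: page_cgroup_ino() backs /proc/kpagecgroup, which maps a PFN to
  the inode number of the owning memcg; the patch makes the lookup use the
  compound head so tail pages of a huge page resolve correctly. Reading one
  entry, with a hypothetical PFN (requires root):

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Each /proc/kpagecgroup entry is a u64 memcg inode, indexed by PFN. */
      int main(void)
      {
              uint64_t pfn = 0x100000; /* hypothetical PFN */
              uint64_t ino;
              int fd = open("/proc/kpagecgroup", O_RDONLY);

              if (fd < 0) {
                      perror("kpagecgroup");
                      return 1;
              }
              if (pread(fd, &ino, sizeof(ino), pfn * sizeof(ino)) ==
                  sizeof(ino))
                      printf("pfn 0x%llx -> memcg inode %llu\n",
                             (unsigned long long)pfn,
                             (unsigned long long)ino);
              close(fd);
              return 0;
      }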
[PATCH v2 0/4] cgroup/cpuset: Miscellaneous updates
2023-03-20 15:04 UTC (8+ messages)
` [PATCH v2 1/4] cgroup/cpuset: Fix partition root's cpuset.cpus update bug
` [PATCH v2 2/4] cgroup/cpuset: Skip task update if hotplug doesn't affect current cpuset
` [PATCH v2 4/4] cgroup/cpuset: Minor updates to test_cpuset_prs.sh
` [PATCH v2 3/4] cgroup/cpuset: Include offline CPUs when tasks' cpumasks in top_cpuset are updated
[PATCH v2 4.19 0/3] Backport patches to fix threadgroup_rwsem <-> cpus_read_lock() deadlock
2023-03-20 1:15 UTC (4+ messages)
` [PATCH v2 4.19 1/3] cgroup/cpuset: Change cpuset_rwsem and hotplug lock order
` [PATCH v2 4.19 2/3] cgroup: Fix threadgroup_rwsem <-> cpus_read_lock() deadlock
` [PATCH v2 4.19 3/3] cgroup: Add missing cpus_read_lock() to cgroup_attach_task_all()
[tj-cgroup:for-6.4] BUILD SUCCESS 4cdb91b0dea7d7f59fa84a13c7753cd434fdedcf
2023-03-18 11:17 UTC
[tj-cgroup:for-6.3-fixes] BUILD SUCCESS fcdb1eda5302599045bb366e679cccb4216f3873
2023-03-18 11:16 UTC
[tj-cgroup:for-next] BUILD SUCCESS f7ac82ea4b457c02eeedf5bb9ceccc777448e1ce
2023-03-18 11:16 UTC
[PATCH v3, 0/4] mm, memcg: cgroup v1 and v2 tunable load/store tearing fixes
2023-03-18 0:58 UTC (3+ messages)
[PATCH] cgroup: fix display of forceidle time at root
2023-03-17 22:20 UTC (4+ messages)
[PATCH] cgroup: bpf: use cgroup_lock()/cgroup_unlock() wrappers
2023-03-17 22:08 UTC (2+ messages)
[PATCH 4.19 0/3] Backport patches to fix threadgroup_rwsem <-> cpus_read_lock() deadlock
2023-03-16 7:45 UTC (3+ messages)
[PATCH] io_uring/sqpoll: Do not set PF_NO_SETAFFINITY on sqpoll threads
2023-03-15 12:51 UTC (2+ messages)
[PATCH] io_uring/io-wq: stop setting PF_NO_SETAFFINITY on io-wq workers
2023-03-14 18:17 UTC (4+ messages)
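  Background for both io_uring threads above: PF_NO_SETAFFINITY makes a
  kernel-created thread reject affinity changes, which broke placing sqpoll
  and io-wq workers into cpusets; with the flag no longer set, ordinary
  sched_setaffinity(2) on the worker's tid works again. A sketch, assuming
  the tid is discovered externally (e.g. under /proc/<pid>/task/):

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
              cpu_set_t set;
              pid_t tid = argc > 1 ? atoi(argv[1]) : 0; /* worker tid */

              CPU_ZERO(&set);
              CPU_SET(2, &set); /* pin the worker to CPU 2 */

              /* Fails with EINVAL while the kernel sets PF_NO_SETAFFINITY. */
              if (sched_setaffinity(tid, sizeof(set), &set)) {
                      perror("sched_setaffinity");
                      return 1;
              }
              return 0;
      }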
[PATCH v3] sched: cpuset: Don't rebuild root domains on suspend-resume
2023-03-14 11:41 UTC (13+ messages)