* [PATCH v8 2/8] writeback, cgroup: add smp_mb() to cgroup_writeback_umount()
From: Roman Gushchin @ 2021-06-08 1:31 UTC (permalink / raw)
To: Jan Kara, Tejun Heo
Cc: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dennis Zhou,
Dave Chinner, cgroups-u79uwXL29TY76Z2rM5mHXA, Roman Gushchin
A full memory barrier is required between clearing SB_ACTIVE flag
in generic_shutdown_super() and checking isw_nr_in_flight in
cgroup_writeback_umount(), otherwise a new switch operation might
be scheduled after atomic_read(&isw_nr_in_flight) returned 0.
This would result in a non-flushed isw_wq, and a potential crash.
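
To illustrate the required ordering, here is a simplified sketch of the
two racing paths, reduced to the ordering-relevant operations (this is
not the exact kernel code; the switch side reflects the ordering after
the next patch in the series moves the isw_nr_in_flight increment
earlier):

    CPU A (umount)                        CPU B (inode switch)
    ----------------------------------    ----------------------------------
    sb->s_flags &= ~SB_ACTIVE;            atomic_inc(&isw_nr_in_flight);
    smp_mb();         /* this patch */    ...
    atomic_read(&isw_nr_in_flight);       check SB_ACTIVE under i_lock and
      /* flush isw_wq if non-zero */      bail out if it's already cleared

Without the full barrier on CPU A, the store clearing SB_ACTIVE and the
load of isw_nr_in_flight can be reordered: CPU A may read the counter as
0 while CPU B still sees SB_ACTIVE set and schedules a new switch,
leaving isw_wq unflushed at unmount time.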

The problem hasn't yet been seen in real life and was discovered
by Jan Kara by looking into the code.
Suggested-by: Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>
Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
---
fs/fs-writeback.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index bd99890599e0..3564efcc4b78 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1000,6 +1000,12 @@ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr,
*/
void cgroup_writeback_umount(void)
{
+ /*
+ * SB_ACTIVE should be reliably cleared before checking
+ * isw_nr_in_flight, see generic_shutdown_super().
+ */
+ smp_mb();
+
if (atomic_read(&isw_nr_in_flight)) {
/*
* Use rcu_barrier() to wait for all pending callbacks to
--
2.31.1

* Re: [PATCH v8 2/8] writeback, cgroup: add smp_mb() to cgroup_writeback_umount()
From: Jan Kara @ 2021-06-08 8:43 UTC (permalink / raw)
To: Roman Gushchin
Cc: Jan Kara, Tejun Heo, linux-fsdevel, linux-kernel, linux-mm,
Alexander Viro, Dennis Zhou, Dave Chinner, cgroups
On Mon 07-06-21 18:31:17, Roman Gushchin wrote:
> A full memory barrier is required between clearing SB_ACTIVE flag
> in generic_shutdown_super() and checking isw_nr_in_flight in
> cgroup_writeback_umount(), otherwise a new switch operation might
> be scheduled after atomic_read(&isw_nr_in_flight) returned 0.
> This would result in a non-flushed isw_wq, and a potential crash.
>
> The problem hasn't yet been seen in real life and was discovered
> by Jan Kara by looking into the code.
>
> Suggested-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Roman Gushchin <guro@fb.com>
Looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
> ---
> fs/fs-writeback.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index bd99890599e0..3564efcc4b78 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -1000,6 +1000,12 @@ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr,
> */
> void cgroup_writeback_umount(void)
> {
> + /*
> + * SB_ACTIVE should be reliably cleared before checking
> + * isw_nr_in_flight, see generic_shutdown_super().
> + */
> + smp_mb();
> +
> if (atomic_read(&isw_nr_in_flight)) {
> /*
> * Use rcu_barrier() to wait for all pending callbacks to
> --
> 2.31.1
>
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* [PATCH v8 3/8] writeback, cgroup: increment isw_nr_in_flight before grabbing an inode
From: Roman Gushchin @ 2021-06-08 1:31 UTC (permalink / raw)
To: Jan Kara, Tejun Heo
Cc: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dennis Zhou,
Dave Chinner, cgroups-u79uwXL29TY76Z2rM5mHXA, Roman Gushchin,
Jan Kara
isw_nr_in_flight is used to determine whether the inode switch queue
should be flushed from the umount path. Currently it's increased
after grabbing an inode and even after scheduling the switch work. It
means the umount path can walk past cleanup_offline_cgwb() with active
inode references, which can result in a "Busy inodes after unmount."
message and use-after-free issues (with inode->i_sb, which gets freed).

Fix it by incrementing isw_nr_in_flight before doing anything with
the inode and decrementing it in the case when switching wasn't scheduled.
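
For illustration, the window being closed looks roughly like this
(simplified; based on the ordering visible in the diff below, not the
complete code path):

    /* inode_switch_wbs() before this patch: */
    spin_lock(&inode->i_lock);
    /* SB_ACTIVE and inode state checks */
    inode->i_state |= I_WB_SWITCH;
    __iget(inode);                      /* inode reference taken here */
    spin_unlock(&inode->i_lock);
    ...
    call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
    atomic_inc(&isw_nr_in_flight);      /* only now visible to umount */

An umount racing with this sequence between __iget() and atomic_inc()
reads isw_nr_in_flight == 0, skips flushing isw_wq, and proceeds while
the pending switch still holds an inode reference.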

The problem hasn't yet been seen in real life and was discovered
by Jan Kara by looking into the code.
Suggested-by: Jan Kara <jack-IBi9RG/b67k@public.gmane.org>
Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
---
fs/fs-writeback.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 3564efcc4b78..e2cc860a001b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -505,6 +505,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
if (!isw)
return;
+ atomic_inc(&isw_nr_in_flight);
+
/* find and pin the new wb */
rcu_read_lock();
memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
@@ -535,11 +537,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
* Let's continue after I_WB_SWITCH is guaranteed to be visible.
*/
call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
-
- atomic_inc(&isw_nr_in_flight);
return;
out_free:
+ atomic_dec(&isw_nr_in_flight);
if (isw->new_wb)
wb_put(isw->new_wb);
kfree(isw);
--
2.31.1

* [PATCH v8 5/8] writeback, cgroup: keep list of inodes attached to bdi_writeback
From: Roman Gushchin @ 2021-06-08 1:31 UTC (permalink / raw)
To: Jan Kara, Tejun Heo
Cc: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dennis Zhou,
Dave Chinner, cgroups-u79uwXL29TY76Z2rM5mHXA, Roman Gushchin
Currently there is no way to iterate over inodes attached to a
specific cgwb structure. This limits the ability to efficiently
reclaim the writeback structure itself and associated memory and
block cgroup structures without scanning all inodes belonging to a sb,
which can be prohibitively expensive.

While dirty/in-active-writeback, an inode belongs to one of the
bdi_writeback's io lists: b_dirty, b_io, b_more_io and b_dirty_time.
Once cleaned up, it's removed from all io lists. So inode->i_io_list
can be reused to maintain the list of inodes attached to a
bdi_writeback structure.

This patch introduces a new wb->b_attached list, which contains all
inodes which were dirty at least once and are attached to the given
cgwb. Inodes attached to the root bdi_writeback structures are never
placed on such a list. The following patch will use this list to try
to release cgwb structures more efficiently.
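
Schematically, the resulting i_io_list placement can be summarized as
follows (a simplified summary of the rules above, not literal code):

    dirty / under writeback       : on one of wb->b_dirty, b_io,
                                    b_more_io, b_dirty_time
    clean, cgwb, dirtied before   : on wb->b_attached
    clean, root wb or never dirty : on no list (i_io_list is empty)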
Suggested-by: Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>
Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
Reviewed-by: Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>
Acked-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Dennis Zhou <dennis-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
fs/fs-writeback.c | 93 ++++++++++++++++++++------------
include/linux/backing-dev-defs.h | 1 +
mm/backing-dev.c | 2 +
3 files changed, 62 insertions(+), 34 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 96974e13a203..87b305ee5348 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -131,25 +131,6 @@ static bool inode_io_list_move_locked(struct inode *inode,
return false;
}
-/**
- * inode_io_list_del_locked - remove an inode from its bdi_writeback IO list
- * @inode: inode to be removed
- * @wb: bdi_writeback @inode is being removed from
- *
- * Remove @inode which may be on one of @wb->b_{dirty|io|more_io} lists and
- * clear %WB_has_dirty_io if all are empty afterwards.
- */
-static void inode_io_list_del_locked(struct inode *inode,
- struct bdi_writeback *wb)
-{
- assert_spin_locked(&wb->list_lock);
- assert_spin_locked(&inode->i_lock);
-
- inode->i_state &= ~I_SYNC_QUEUED;
- list_del_init(&inode->i_io_list);
- wb_io_lists_depopulated(wb);
-}
-
static void wb_wakeup(struct bdi_writeback *wb)
{
spin_lock_bh(&wb->work_lock);
@@ -278,6 +259,28 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
}
EXPORT_SYMBOL_GPL(__inode_attach_wb);
+/**
+ * inode_cgwb_move_to_attached - put the inode onto wb->b_attached list
+ * @inode: inode of interest with i_lock held
+ * @wb: target bdi_writeback
+ *
+ * Remove the inode from wb's io lists and if necessarily put onto b_attached
+ * list. Only inodes attached to cgwb's are kept on this list.
+ */
+static void inode_cgwb_move_to_attached(struct inode *inode,
+ struct bdi_writeback *wb)
+{
+ assert_spin_locked(&wb->list_lock);
+ assert_spin_locked(&inode->i_lock);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ if (wb != &wb->bdi->wb)
+ list_move(&inode->i_io_list, &wb->b_attached);
+ else
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+}
+
/**
* locked_inode_to_wb_and_lock_list - determine a locked inode's wb and lock it
* @inode: inode of interest with i_lock held
@@ -418,21 +421,28 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
wb_get(new_wb);
/*
- * Transfer to @new_wb's IO list if necessary. The specific list
- * @inode was on is ignored and the inode is put on ->b_dirty which
- * is always correct including from ->b_dirty_time. The transfer
- * preserves @inode->dirtied_when ordering.
+ * Transfer to @new_wb's IO list if necessary. If the @inode is dirty,
+ * the specific list @inode was on is ignored and the @inode is put on
+ * ->b_dirty which is always correct including from ->b_dirty_time.
+ * The transfer preserves @inode->dirtied_when ordering. If the @inode
+ * was clean, it means it was on the b_attached list, so move it onto
+ * the b_attached list of @new_wb.
*/
if (!list_empty(&inode->i_io_list)) {
- struct inode *pos;
-
- inode_io_list_del_locked(inode, old_wb);
inode->i_wb = new_wb;
- list_for_each_entry(pos, &new_wb->b_dirty, i_io_list)
- if (time_after_eq(inode->dirtied_when,
- pos->dirtied_when))
- break;
- inode_io_list_move_locked(inode, new_wb, pos->i_io_list.prev);
+
+ if (inode->i_state & I_DIRTY_ALL) {
+ struct inode *pos;
+
+ list_for_each_entry(pos, &new_wb->b_dirty, i_io_list)
+ if (time_after_eq(inode->dirtied_when,
+ pos->dirtied_when))
+ break;
+ inode_io_list_move_locked(inode, new_wb,
+ pos->i_io_list.prev);
+ } else {
+ inode_cgwb_move_to_attached(inode, new_wb);
+ }
} else {
inode->i_wb = new_wb;
}
@@ -1021,6 +1031,17 @@ fs_initcall(cgroup_writeback_init);
static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
+static void inode_cgwb_move_to_attached(struct inode *inode,
+ struct bdi_writeback *wb)
+{
+ assert_spin_locked(&wb->list_lock);
+ assert_spin_locked(&inode->i_lock);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+}
+
static struct bdi_writeback *
locked_inode_to_wb_and_lock_list(struct inode *inode)
__releases(&inode->i_lock)
@@ -1121,7 +1142,11 @@ void inode_io_list_del(struct inode *inode)
wb = inode_to_wb_and_lock_list(inode);
spin_lock(&inode->i_lock);
- inode_io_list_del_locked(inode, wb);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+
spin_unlock(&inode->i_lock);
spin_unlock(&wb->list_lock);
}
@@ -1434,7 +1459,7 @@ static void requeue_inode(struct inode *inode, struct bdi_writeback *wb,
inode->i_state &= ~I_SYNC_QUEUED;
} else {
/* The inode is clean. Remove from writeback lists. */
- inode_io_list_del_locked(inode, wb);
+ inode_cgwb_move_to_attached(inode, wb);
}
}
@@ -1586,7 +1611,7 @@ static int writeback_single_inode(struct inode *inode,
* responsible for the writeback lists.
*/
if (!(inode->i_state & I_DIRTY_ALL))
- inode_io_list_del_locked(inode, wb);
+ inode_cgwb_move_to_attached(inode, wb);
spin_unlock(&wb->list_lock);
inode_sync_complete(inode);
out:
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index fff9367a6348..e5dc238ebe4f 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -154,6 +154,7 @@ struct bdi_writeback {
struct cgroup_subsys_state *blkcg_css; /* and blkcg */
struct list_head memcg_node; /* anchored at memcg->cgwb_list */
struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
+ struct list_head b_attached; /* attached inodes, protected by list_lock */
union {
struct work_struct release_work;
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 576220acd686..54c5dc4b8c24 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -396,6 +396,7 @@ static void cgwb_release_workfn(struct work_struct *work)
fprop_local_destroy_percpu(&wb->memcg_completions);
percpu_ref_exit(&wb->refcnt);
wb_exit(wb);
+ WARN_ON_ONCE(!list_empty(&wb->b_attached));
kfree_rcu(wb, rcu);
}
@@ -472,6 +473,7 @@ static int cgwb_create(struct backing_dev_info *bdi,
wb->memcg_css = memcg_css;
wb->blkcg_css = blkcg_css;
+ INIT_LIST_HEAD(&wb->b_attached);
INIT_WORK(&wb->release_work, cgwb_release_workfn);
set_bit(WB_registered, &wb->state);
--
2.31.1

* [PATCH v8 6/8] writeback, cgroup: split out the functional part of inode_switch_wbs_work_fn()
From: Roman Gushchin @ 2021-06-08 1:31 UTC (permalink / raw)
To: Jan Kara, Tejun Heo
Cc: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dennis Zhou,
Dave Chinner, cgroups-u79uwXL29TY76Z2rM5mHXA, Roman Gushchin
Split out the functional part of the inode_switch_wbs_work_fn()
function as inode_do_switch_wbs() to reuse it later for switching
inodes attached to dying cgwbs.

This commit doesn't bring any functional changes.
Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
Reviewed-by: Jan Kara <jack-AlSwsSmVLrQ@public.gmane.org>
Acked-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Dennis Zhou <dennis-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
fs/fs-writeback.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 87b305ee5348..5520a6b5cc4d 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -351,15 +351,12 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
up_write(&bdi->wb_switch_rwsem);
}
-static void inode_switch_wbs_work_fn(struct work_struct *work)
+static void inode_do_switch_wbs(struct inode *inode,
+ struct bdi_writeback *new_wb)
{
- struct inode_switch_wbs_context *isw =
- container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
- struct inode *inode = isw->inode;
struct backing_dev_info *bdi = inode_to_bdi(inode);
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
- struct bdi_writeback *new_wb = isw->new_wb;
XA_STATE(xas, &mapping->i_pages, 0);
struct page *page;
bool switched = false;
@@ -470,11 +467,17 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
wb_wakeup(new_wb);
wb_put(old_wb);
}
- wb_put(new_wb);
+}
- iput(inode);
- kfree(isw);
+static void inode_switch_wbs_work_fn(struct work_struct *work)
+{
+ struct inode_switch_wbs_context *isw =
+ container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+ inode_do_switch_wbs(isw->inode, isw->new_wb);
+ wb_put(isw->new_wb);
+ iput(isw->inode);
+ kfree(isw);
atomic_dec(&isw_nr_in_flight);
}
--
2.31.1

* [PATCH v8 8/8] writeback, cgroup: release dying cgwbs by switching attached inodes
From: Roman Gushchin @ 2021-06-08 1:31 UTC (permalink / raw)
To: Jan Kara, Tejun Heo
Cc: linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dennis Zhou,
Dave Chinner, cgroups-u79uwXL29TY76Z2rM5mHXA, Roman Gushchin
Asynchronously try to release dying cgwbs by switching attached inodes
to the nearest living ancestor wb. It helps to get rid of per-cgroup
writeback structures themselves and of pinned memory and block cgroups,
which are significantly larger structures (mostly due to large per-cpu
statistics data). This prevents memory waste and helps to avoid
different scalability problems caused by large piles of dying cgroups.

Reuse the existing mechanism of inode switching used for foreign inode
detection. To speed things up, batch up to 115 inode switches in a
single operation (the maximum number is selected so that the resulting
struct inode_switch_wbs_context can fit into 1024 bytes). Because
every switch consists of two steps divided by an RCU grace period,
it would be too slow without batching. Please note that the whole
batch counts as a single operation (when increasing/decreasing
isw_nr_in_flight). This allows umount to keep working (flushing the
switching queue), but prevents cleanups from consuming the whole
switching quota and effectively blocking the frn switching.
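
As a rough sizing check (assuming 64-bit pointers; the exact header
layout of struct inode_switch_wbs_context may vary with configuration):

    115 inode pointers: 115 * sizeof(struct inode *) = 920 bytes

which leaves roughly 100 bytes for the rcu_work, the new_wb pointer and
padding, so the whole allocation still fits the 1024-byte kmalloc bucket
rather than spilling into the next one.
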
A cgwb cleanup operation can fail due to different reasons (e.g. not
enough memory, the cgwb has an in-flight/pending io, an attached inode
in a wrong state, etc). In this case the next scheduled cleanup will
make a new attempt. An attempt is made each time a new cgwb is offlined
(in other words a memcg and/or a blkcg is deleted by a user). In the
future an additional attempt scheduled by a timer can be implemented.
Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
Acked-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Dennis Zhou <dennis-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
---
fs/fs-writeback.c | 102 ++++++++++++++++++++++++++++---
include/linux/backing-dev-defs.h | 1 +
include/linux/writeback.h | 1 +
mm/backing-dev.c | 67 +++++++++++++++++++-
4 files changed, 159 insertions(+), 12 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 737ac27adb77..96eb6e6cdbc2 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -225,6 +225,12 @@ void wb_wait_for_completion(struct wb_completion *done)
/* one round can affect upto 5 slots */
#define WB_FRN_MAX_IN_FLIGHT 1024 /* don't queue too many concurrently */
+/*
+ * Maximum inodes per isw. A specific value has been chosen to make
+ * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
+ */
+#define WB_MAX_INODES_PER_ISW 115
+
static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
static struct workqueue_struct *isw_wq;
@@ -503,6 +509,24 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
atomic_dec(&isw_nr_in_flight);
}
+static bool inode_prepare_wbs_switch(struct inode *inode,
+ struct bdi_writeback *new_wb)
+{
+ /* while holding I_WB_SWITCH, no one else can update the association */
+ spin_lock(&inode->i_lock);
+ if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
+ inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
+ inode_to_wb(inode) == new_wb) {
+ spin_unlock(&inode->i_lock);
+ return false;
+ }
+ inode->i_state |= I_WB_SWITCH;
+ __iget(inode);
+ spin_unlock(&inode->i_lock);
+
+ return true;
+}
+
/**
* inode_switch_wbs - change the wb association of an inode
* @inode: target inode
@@ -540,17 +564,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
if (!isw->new_wb)
goto out_free;
- /* while holding I_WB_SWITCH, no one else can update the association */
- spin_lock(&inode->i_lock);
- if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
- inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
- inode_to_wb(inode) == isw->new_wb) {
- spin_unlock(&inode->i_lock);
+ if (!inode_prepare_wbs_switch(inode, isw->new_wb))
goto out_free;
- }
- inode->i_state |= I_WB_SWITCH;
- __iget(inode);
- spin_unlock(&inode->i_lock);
isw->inodes[0] = inode;
@@ -571,6 +586,73 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
kfree(isw);
}
+/**
+ * cleanup_offline_cgwb - detach associated inodes
+ * @wb: target wb
+ *
+ * Switch all inodes attached to @wb to a nearest living ancestor's wb in order
+ * to eventually release the dying @wb. Returns %true if not all inodes were
+ * switched and the function has to be restarted.
+ */
+bool cleanup_offline_cgwb(struct bdi_writeback *wb)
+{
+ struct cgroup_subsys_state *memcg_css;
+ struct inode_switch_wbs_context *isw;
+ struct inode *inode;
+ int nr;
+ bool restart = false;
+
+ isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
+ sizeof(struct inode *), GFP_KERNEL);
+ if (!isw)
+ return restart;
+
+ atomic_inc(&isw_nr_in_flight);
+
+ for (memcg_css = wb->memcg_css->parent; memcg_css;
+ memcg_css = memcg_css->parent) {
+ isw->new_wb = wb_get_lookup(wb->bdi, memcg_css);
+ if (isw->new_wb)
+ break;
+ }
+ if (unlikely(!isw->new_wb))
+ isw->new_wb = &wb->bdi->wb; /* wb_get() is noop for bdi's wb */
+
+ nr = 0;
+ spin_lock(&wb->list_lock);
+ list_for_each_entry(inode, &wb->b_attached, i_io_list) {
+ if (!inode_prepare_wbs_switch(inode, isw->new_wb))
+ continue;
+
+ isw->inodes[nr++] = inode;
+
+ if (nr >= WB_MAX_INODES_PER_ISW - 1) {
+ restart = true;
+ break;
+ }
+ }
+ spin_unlock(&wb->list_lock);
+
+ /* no attached inodes? bail out */
+ if (nr == 0) {
+ atomic_dec(&isw_nr_in_flight);
+ wb_put(isw->new_wb);
+ kfree(isw);
+ return restart;
+ }
+
+ /*
+ * In addition to synchronizing among switchers, I_WB_SWITCH tells
+ * the RCU protected stat update paths to grab the i_page
+ * lock so that stat transfer can synchronize against them.
+ * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ */
+ INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
+ queue_rcu_work(isw_wq, &isw->work);
+
+ return restart;
+}
+
/**
* wbc_attach_and_unlock_inode - associate wbc with target inode and unlock it
* @wbc: writeback_control of interest
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 63f52ad2ce7a..1d7edad9914f 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -155,6 +155,7 @@ struct bdi_writeback {
struct list_head memcg_node; /* anchored at memcg->cgwb_list */
struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
struct list_head b_attached; /* attached inodes, protected by list_lock */
+ struct list_head offline_node; /* anchored at offline_cgwbs */
union {
struct work_struct release_work;
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 8e5c5bb16e2d..95de51c10248 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -221,6 +221,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr_pages,
enum wb_reason reason, struct wb_completion *done);
void cgroup_writeback_umount(void);
+bool cleanup_offline_cgwb(struct bdi_writeback *wb);
/**
* inode_attach_wb - associate an inode with its wb
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 54c5dc4b8c24..faa45027c854 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -371,12 +371,16 @@ static void wb_exit(struct bdi_writeback *wb)
#include <linux/memcontrol.h>
/*
- * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, and memcg->cgwb_list.
- * bdi->cgwb_tree is also RCU protected.
+ * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, offline_cgwbs and
+ * memcg->cgwb_list. bdi->cgwb_tree is also RCU protected.
*/
static DEFINE_SPINLOCK(cgwb_lock);
static struct workqueue_struct *cgwb_release_wq;
+static LIST_HEAD(offline_cgwbs);
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work);
+static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
+
static void cgwb_release_workfn(struct work_struct *work)
{
struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
@@ -395,6 +399,11 @@ static void cgwb_release_workfn(struct work_struct *work)
fprop_local_destroy_percpu(&wb->memcg_completions);
percpu_ref_exit(&wb->refcnt);
+
+ spin_lock_irq(&cgwb_lock);
+ list_del(&wb->offline_node);
+ spin_unlock_irq(&cgwb_lock);
+
wb_exit(wb);
WARN_ON_ONCE(!list_empty(&wb->b_attached));
kfree_rcu(wb, rcu);
@@ -414,6 +423,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
list_del(&wb->memcg_node);
list_del(&wb->blkcg_node);
+ list_add(&wb->offline_node, &offline_cgwbs);
percpu_ref_kill(&wb->refcnt);
}
@@ -635,6 +645,57 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
mutex_unlock(&bdi->cgwb_release_mutex);
}
+/**
+ * cleanup_offline_cgwbs - try to release dying cgwbs
+ *
+ * Try to release dying cgwbs by switching attached inodes to the nearest
+ * living ancestor's writeback. Processed wbs are placed at the end
+ * of the list to guarantee the forward progress.
+ *
+ * Should be called with the acquired cgwb_lock lock, which might
+ * be released and re-acquired in the process.
+ */
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
+{
+ struct bdi_writeback *wb;
+ LIST_HEAD(processed);
+
+ spin_lock_irq(&cgwb_lock);
+
+ while (!list_empty(&offline_cgwbs)) {
+ wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
+ offline_node);
+ list_move(&wb->offline_node, &processed);
+
+ /*
+ * If wb is dirty, cleaning up the writeback by switching
+ * attached inodes will result in an effective removal of any
+ * bandwidth restrictions, which isn't the goal. Instead,
+ * it can be postponed until the next time, when all io
+ * will be likely completed. If in the meantime some inodes
+ * will get re-dirtied, they should be eventually switched to
+ * a new cgwb.
+ */
+ if (wb_has_dirty_io(wb))
+ continue;
+
+ if (!wb_tryget(wb))
+ continue;
+
+ spin_unlock_irq(&cgwb_lock);
+ while ((cleanup_offline_cgwb(wb)))
+ cond_resched();
+ spin_lock_irq(&cgwb_lock);
+
+ wb_put(wb);
+ }
+
+ if (!list_empty(&processed))
+ list_splice_tail(&processed, &offline_cgwbs);
+
+ spin_unlock_irq(&cgwb_lock);
+}
+
/**
* wb_memcg_offline - kill all wb's associated with a memcg being offlined
* @memcg: memcg being offlined
@@ -651,6 +712,8 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
cgwb_kill(wb);
memcg_cgwb_list->next = NULL; /* prevent new wb's */
spin_unlock_irq(&cgwb_lock);
+
+ queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
}
/**
--
2.31.1

* Re: [PATCH v8 8/8] writeback, cgroup: release dying cgwbs by switching attached inodes
From: Jan Kara @ 2021-06-08 8:54 UTC (permalink / raw)
To: Roman Gushchin
Cc: Jan Kara, Tejun Heo, linux-fsdevel, linux-kernel, linux-mm,
Alexander Viro, Dennis Zhou, Dave Chinner, cgroups
On Mon 07-06-21 18:31:23, Roman Gushchin wrote:
> Asynchronously try to release dying cgwbs by switching attached inodes
> to the nearest living ancestor wb. It helps to get rid of per-cgroup
> writeback structures themselves and of pinned memory and block cgroups,
> which are significantly larger structures (mostly due to large per-cpu
> statistics data). This prevents memory waste and helps to avoid
> different scalability problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up, batch up to 115 inode switches in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context can fit into 1024 bytes). Because
> every switch consists of two steps divided by an RCU grace period,
> it would be too slow without batching. Please note that the whole
> batch counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows umount to keep working (flushing the
> switching queue), but prevents cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
>
> A cgwb cleanup operation can fail due to different reasons (e.g. not
> enough memory, the cgwb has an in-flight/pending io, an attached inode
> in a wrong state, etc). In this case the next scheduled cleanup will
> make a new attempt. An attempt is made each time a new cgwb is offlined
> (in other words a memcg and/or a blkcg is deleted by a user). In the
> future an additional attempt scheduled by a timer can be implemented.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Tejun Heo <tj@kernel.org>
> Acked-by: Dennis Zhou <dennis@kernel.org>
The patch looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Just one coding style nit below.
> + if (!wb_tryget(wb))
> + continue;
> +
> + spin_unlock_irq(&cgwb_lock);
> + while ((cleanup_offline_cgwb(wb)))
^^ too many parentheses here...
> + cond_resched();
> + spin_lock_irq(&cgwb_lock);
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH v8 8/8] writeback, cgroup: release dying cgwbs by switching attached inodes
From: Dennis Zhou @ 2021-06-08 16:08 UTC (permalink / raw)
To: Roman Gushchin
Cc: Jan Kara, Tejun Heo, linux-fsdevel-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Alexander Viro, Dave Chinner,
cgroups-u79uwXL29TY76Z2rM5mHXA
Hello,
On Mon, Jun 07, 2021 at 06:31:23PM -0700, Roman Gushchin wrote:
> Asynchronously try to release dying cgwbs by switching attached inodes
> to the nearest living ancestor wb. It helps to get rid of per-cgroup
> writeback structures themselves and of pinned memory and block cgroups,
> which are significantly larger structures (mostly due to large per-cpu
> statistics data). This prevents memory waste and helps to avoid
> different scalability problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up, batch up to 115 inode switches in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context can fit into 1024 bytes). Because
> every switch consists of two steps divided by an RCU grace period,
> it would be too slow without batching. Please note that the whole
> batch counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows umount to keep working (flushing the
> switching queue), but prevents cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
>
> A cgwb cleanup operation can fail due to different reasons (e.g. not
> enough memory, the cgwb has an in-flight/pending io, an attached inode
> in a wrong state, etc). In this case the next scheduled cleanup will
> make a new attempt. An attempt is made each time a new cgwb is offlined
> (in other words a memcg and/or a blkcg is deleted by a user). In the
> future an additional attempt scheduled by a timer can be implemented.
>
> Signed-off-by: Roman Gushchin <guro-b10kYP2dOMg@public.gmane.org>
> Acked-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Acked-by: Dennis Zhou <dennis-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> ---
> fs/fs-writeback.c | 102 ++++++++++++++++++++++++++++---
> include/linux/backing-dev-defs.h | 1 +
> include/linux/writeback.h | 1 +
> mm/backing-dev.c | 67 +++++++++++++++++++-
> 4 files changed, 159 insertions(+), 12 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 737ac27adb77..96eb6e6cdbc2 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -225,6 +225,12 @@ void wb_wait_for_completion(struct wb_completion *done)
> /* one round can affect upto 5 slots */
> #define WB_FRN_MAX_IN_FLIGHT 1024 /* don't queue too many concurrently */
>
> +/*
> + * Maximum inodes per isw. A specific value has been chosen to make
> + * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
> + */
> +#define WB_MAX_INODES_PER_ISW 115
> +
> static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
> static struct workqueue_struct *isw_wq;
>
> @@ -503,6 +509,24 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> atomic_dec(&isw_nr_in_flight);
> }
>
> +static bool inode_prepare_wbs_switch(struct inode *inode,
> + struct bdi_writeback *new_wb)
> +{
> + /* while holding I_WB_SWITCH, no one else can update the association */
> + spin_lock(&inode->i_lock);
> + if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> + inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
> + inode_to_wb(inode) == new_wb) {
> + spin_unlock(&inode->i_lock);
> + return false;
> + }
> + inode->i_state |= I_WB_SWITCH;
> + __iget(inode);
> + spin_unlock(&inode->i_lock);
> +
> + return true;
> +}
> +
> /**
> * inode_switch_wbs - change the wb association of an inode
> * @inode: target inode
> @@ -540,17 +564,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> if (!isw->new_wb)
> goto out_free;
>
> - /* while holding I_WB_SWITCH, no one else can update the association */
> - spin_lock(&inode->i_lock);
> - if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> - inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
> - inode_to_wb(inode) == isw->new_wb) {
> - spin_unlock(&inode->i_lock);
> + if (!inode_prepare_wbs_switch(inode, isw->new_wb))
> goto out_free;
> - }
> - inode->i_state |= I_WB_SWITCH;
> - __iget(inode);
> - spin_unlock(&inode->i_lock);
>
> isw->inodes[0] = inode;
>
> @@ -571,6 +586,73 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> kfree(isw);
> }
>
> +/**
> + * cleanup_offline_cgwb - detach associated inodes
> + * @wb: target wb
> + *
> + * Switch all inodes attached to @wb to a nearest living ancestor's wb in order
> + * to eventually release the dying @wb. Returns %true if not all inodes were
> + * switched and the function has to be restarted.
> + */
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> +{
> + struct cgroup_subsys_state *memcg_css;
> + struct inode_switch_wbs_context *isw;
> + struct inode *inode;
> + int nr;
> + bool restart = false;
> +
> + isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> + sizeof(struct inode *), GFP_KERNEL);
> + if (!isw)
> + return restart;
> +
> + atomic_inc(&isw_nr_in_flight);
> +
> + for (memcg_css = wb->memcg_css->parent; memcg_css;
> + memcg_css = memcg_css->parent) {
> + isw->new_wb = wb_get_lookup(wb->bdi, memcg_css);
Should this be wb_get_create()? I suspect intermediate cgroups wouldn't
have cgwb's due to the no internal process constraint. cgwb's aren't
like blkcgs where they pin the parent and maintain the tree hierarchy.
> + if (isw->new_wb)
> + break;
> + }
> + if (unlikely(!isw->new_wb))
> + isw->new_wb = &wb->bdi->wb; /* wb_get() is noop for bdi's wb */
> +
> + nr = 0;
> + spin_lock(&wb->list_lock);
> + list_for_each_entry(inode, &wb->b_attached, i_io_list) {
> + if (!inode_prepare_wbs_switch(inode, isw->new_wb))
> + continue;
> +
> + isw->inodes[nr++] = inode;
> +
> + if (nr >= WB_MAX_INODES_PER_ISW - 1) {
> + restart = true;
> + break;
> + }
> + }
> + spin_unlock(&wb->list_lock);
> +
> + /* no attached inodes? bail out */
> + if (nr == 0) {
> + atomic_dec(&isw_nr_in_flight);
> + wb_put(isw->new_wb);
> + kfree(isw);
> + return restart;
> + }
> +
> + /*
> + * In addition to synchronizing among switchers, I_WB_SWITCH tells
> + * the RCU protected stat update paths to grab the i_page
> + * lock so that stat transfer can synchronize against them.
> + * Let's continue after I_WB_SWITCH is guaranteed to be visible.
> + */
> + INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
> + queue_rcu_work(isw_wq, &isw->work);
> +
> + return restart;
> +}
> +
> /**
> * wbc_attach_and_unlock_inode - associate wbc with target inode and unlock it
> * @wbc: writeback_control of interest
> diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
> index 63f52ad2ce7a..1d7edad9914f 100644
> --- a/include/linux/backing-dev-defs.h
> +++ b/include/linux/backing-dev-defs.h
> @@ -155,6 +155,7 @@ struct bdi_writeback {
> struct list_head memcg_node; /* anchored at memcg->cgwb_list */
> struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
> struct list_head b_attached; /* attached inodes, protected by list_lock */
> + struct list_head offline_node; /* anchored at offline_cgwbs */
>
> union {
> struct work_struct release_work;
> diff --git a/include/linux/writeback.h b/include/linux/writeback.h
> index 8e5c5bb16e2d..95de51c10248 100644
> --- a/include/linux/writeback.h
> +++ b/include/linux/writeback.h
> @@ -221,6 +221,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr_pages,
> enum wb_reason reason, struct wb_completion *done);
> void cgroup_writeback_umount(void);
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb);
>
> /**
> * inode_attach_wb - associate an inode with its wb
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 54c5dc4b8c24..faa45027c854 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -371,12 +371,16 @@ static void wb_exit(struct bdi_writeback *wb)
> #include <linux/memcontrol.h>
>
> /*
> - * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, and memcg->cgwb_list.
> - * bdi->cgwb_tree is also RCU protected.
> + * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, offline_cgwbs and
> + * memcg->cgwb_list. bdi->cgwb_tree is also RCU protected.
> */
> static DEFINE_SPINLOCK(cgwb_lock);
> static struct workqueue_struct *cgwb_release_wq;
>
> +static LIST_HEAD(offline_cgwbs);
> +static void cleanup_offline_cgwbs_workfn(struct work_struct *work);
> +static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
> +
> static void cgwb_release_workfn(struct work_struct *work)
> {
> struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
> @@ -395,6 +399,11 @@ static void cgwb_release_workfn(struct work_struct *work)
>
> fprop_local_destroy_percpu(&wb->memcg_completions);
> percpu_ref_exit(&wb->refcnt);
> +
> + spin_lock_irq(&cgwb_lock);
> + list_del(&wb->offline_node);
> + spin_unlock_irq(&cgwb_lock);
> +
> wb_exit(wb);
> WARN_ON_ONCE(!list_empty(&wb->b_attached));
> kfree_rcu(wb, rcu);
> @@ -414,6 +423,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
> WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
> list_del(&wb->memcg_node);
> list_del(&wb->blkcg_node);
> + list_add(&wb->offline_node, &offline_cgwbs);
> percpu_ref_kill(&wb->refcnt);
> }
>
> @@ -635,6 +645,57 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
> mutex_unlock(&bdi->cgwb_release_mutex);
> }
>
> +/**
> + * cleanup_offline_cgwbs - try to release dying cgwbs
> + *
> + * Try to release dying cgwbs by switching attached inodes to the nearest
> + * living ancestor's writeback. Processed wbs are placed at the end
> + * of the list to guarantee the forward progress.
> + *
> + * Should be called with the acquired cgwb_lock lock, which might
> + * be released and re-acquired in the process.
> + */
> +static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
> +{
> + struct bdi_writeback *wb;
> + LIST_HEAD(processed);
> +
> + spin_lock_irq(&cgwb_lock);
> +
> + while (!list_empty(&offline_cgwbs)) {
> + wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
> + offline_node);
> + list_move(&wb->offline_node, &processed);
> +
> + /*
> + * If wb is dirty, cleaning up the writeback by switching
> + * attached inodes will result in an effective removal of any
> + * bandwidth restrictions, which isn't the goal. Instead,
> + * it can be postponed until the next time, when all io
> + * will be likely completed. If in the meantime some inodes
> + * will get re-dirtied, they should be eventually switched to
> + * a new cgwb.
> + */
> + if (wb_has_dirty_io(wb))
> + continue;
> +
> + if (!wb_tryget(wb))
> + continue;
> +
> + spin_unlock_irq(&cgwb_lock);
> + while ((cleanup_offline_cgwb(wb)))
> + cond_resched();
> + spin_lock_irq(&cgwb_lock);
> +
> + wb_put(wb);
> + }
> +
> + if (!list_empty(&processed))
> + list_splice_tail(&processed, &offline_cgwbs);
> +
> + spin_unlock_irq(&cgwb_lock);
> +}
> +
> /**
> * wb_memcg_offline - kill all wb's associated with a memcg being offlined
> * @memcg: memcg being offlined
> @@ -651,6 +712,8 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
> cgwb_kill(wb);
> memcg_cgwb_list->next = NULL; /* prevent new wb's */
> spin_unlock_irq(&cgwb_lock);
> +
> + queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
> }
>
> /**
> --
> 2.31.1
>
Thanks,
Dennis