* Re: [PATCH 01/29] mm: shrinker: add shrinker::private_data field
[not found] ` <20230622085335.77010-2-zhengqi.arch@bytedance.com>
@ 2023-06-22 14:47 ` Vlastimil Babka
0 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2023-06-22 14:47 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, roman.gushchin, djwong, brauner,
paulmck, tytso
Cc: linux-bcache, linux-xfs, linux-nfs, linux-arm-msm, intel-gfx,
linux-kernel, dri-devel, virtualization, linux-raid, linux-mm,
dm-devel, linux-fsdevel, linux-ext4, linux-btrfs
On 6/22/23 10:53, Qi Zheng wrote:
> To prepare for the dynamic allocation of shrinker instances
> embedded in other structures, add a private_data field to
> struct shrinker, so that we can use shrinker::private_data
> to record and get the original embedded structure.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
I would fold this to 02/29, less churn.
> ---
> include/linux/shrinker.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 224293b2dd06..43e6fcabbf51 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -70,6 +70,8 @@ struct shrinker {
> int seeks; /* seeks to recreate an obj */
> unsigned flags;
>
> + void *private_data;
> +
> /* These are for internal use */
> struct list_head list;
> #ifdef CONFIG_MEMCG
* Re: [PATCH 29/29] mm: shrinker: move shrinker-related code into a separate file
[not found] ` <20230622085335.77010-30-zhengqi.arch@bytedance.com>
@ 2023-06-22 14:53 ` Vlastimil Babka
0 siblings, 0 replies; 6+ messages in thread
From: Vlastimil Babka @ 2023-06-22 14:53 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, roman.gushchin, djwong, brauner,
paulmck, tytso
Cc: linux-bcache, linux-xfs, linux-nfs, linux-arm-msm, intel-gfx,
linux-kernel, dri-devel, virtualization, linux-raid, linux-mm,
dm-devel, linux-fsdevel, linux-ext4, linux-btrfs
On 6/22/23 10:53, Qi Zheng wrote:
> The mm/vmscan.c file is too large, so move the shrinker-related
> code out of it into a separate file. No functional changes.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Maybe do this move as patch 01 so the further changes are done in the new
file already?
* Re: [PATCH 24/29] mm: vmscan: make global slab shrink lockless
[not found] ` <20230622085335.77010-25-zhengqi.arch@bytedance.com>
@ 2023-06-22 15:12 ` Vlastimil Babka
2023-06-23 6:29 ` Dave Chinner via Virtualization
0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2023-06-22 15:12 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, roman.gushchin, djwong, brauner,
paulmck, tytso
Cc: linux-bcache, linux-xfs, linux-nfs, linux-arm-msm, intel-gfx,
linux-kernel, dri-devel, virtualization, linux-raid, linux-mm,
dm-devel, linux-fsdevel, linux-ext4, linux-btrfs
On 6/22/23 10:53, Qi Zheng wrote:
> The shrinker_rwsem is a global read-write lock in the
> shrinker subsystem which protects most operations, such
> as slab shrinking and the registration and unregistration
> of shrinkers. This can easily cause problems in the
> following cases.
>
> 1) When memory pressure is high and many filesystems
> are mounted or unmounted at the same time, slab
> shrinking is affected (down_read_trylock() fails).
>
> Such as the real workload mentioned by Kirill Tkhai:
>
> ```
> One of the real workloads from my experience is start
> of an overcommitted node containing many starting
> containers after node crash (or many resuming containers
> after reboot for kernel update). In these cases memory
> pressure is huge, and the node goes round in long reclaim.
> ```
>
> 2) If a shrinker is blocked (such as the case mentioned
> in [1]) and a writer comes in (such as mounting a
> filesystem), then this writer will be blocked and in
> turn block all subsequent shrinker-related operations.
>
> Even if there is no contention when shrinking slab, there
> may still be a problem. If we have a long shrinker list
> and do not reclaim enough memory with each shrinker,
> then down_read_trylock() may be called at a high
> frequency. Because of the poor multicore scalability of
> atomic operations, this can lead to a significant drop
> in IPC (instructions per cycle).
>
> We previously implemented the lockless slab shrink with
> SRCU [1], but the kernel test robot reported a -88.8%
> regression in the stress-ng.ramfs.ops_per_sec test case [2],
> so we reverted it [3].
>
> This commit uses the refcount+RCU method [4] proposed
> by Dave Chinner to re-implement the lockless global slab
> shrink. The memcg slab shrink is handled in a subsequent
> patch.
>
> Currently, the shrinker instances can be divided into
> the following three types:
>
> a) global shrinker instance statically defined in the kernel,
> such as workingset_shadow_shrinker.
>
> b) global shrinker instance statically defined in the kernel
> modules, such as mmu_shrinker in x86.
>
> c) shrinker instance embedded in other structures.
>
> For case a, the memory of the shrinker instance is never freed.
> For case b, the memory of the shrinker instance will be freed
> after the module is unloaded, but synchronize_rcu() is called
> in free_module() to wait for RCU read-side critical sections to
> exit. For case c, the memory of the shrinker instance will be
> dynamically freed by calling kfree_rcu(). So we can use
> rcu_read_{lock,unlock}() to ensure that the shrinker instance
> is valid.
>
> The shrinker::refcount mechanism ensures that the shrinker
> instance will not be run again after unregistration. So the
> structure that records the pointer to the shrinker instance can
> be safely freed without waiting for the RCU read-side critical
> section.
>
> In this way, while implementing the lockless slab shrink, we
> don't need to block in unregister_shrinker() to wait for the
> RCU read-side critical section.
>
> The following are the test results:
>
> stress-ng --timeout 60 --times --verify --metrics-brief --ramfs 9 &
>
> 1) Before applying this patchset:
>
> setting to a 60 second run per stressor
> dispatching hogs: 9 ramfs
> stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
> (secs) (secs) (secs) (real time) (usr+sys time)
> ramfs 880623 60.02 7.71 226.93 14671.45 3753.09
> ramfs:
> 1 System Management Interrupt
> for a 60.03s run time:
> 5762.40s available CPU time
> 7.71s user time ( 0.13%)
> 226.93s system time ( 3.94%)
> 234.64s total time ( 4.07%)
> load average: 8.54 3.06 2.11
> passed: 9: ramfs (9)
> failed: 0
> skipped: 0
> successful run completed in 60.03s (1 min, 0.03 secs)
>
> 2) After applying this patchset:
>
> setting to a 60 second run per stressor
> dispatching hogs: 9 ramfs
> stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s
> (secs) (secs) (secs) (real time) (usr+sys time)
> ramfs 847562 60.02 7.44 230.22 14120.66 3566.23
> ramfs:
> 4 System Management Interrupts
> for a 60.12s run time:
> 5771.95s available CPU time
> 7.44s user time ( 0.13%)
> 230.22s system time ( 3.99%)
> 237.66s total time ( 4.12%)
> load average: 8.18 2.43 0.84
> passed: 9: ramfs (9)
> failed: 0
> skipped: 0
> successful run completed in 60.12s (1 min, 0.12 secs)
>
> We can see that the ops/s has hardly changed.
>
> [1]. https://lore.kernel.org/lkml/20230313112819.38938-1-zhengqi.arch@bytedance.com/
> [2]. https://lore.kernel.org/lkml/202305230837.db2c233f-yujie.liu@intel.com/
> [3]. https://lore.kernel.org/all/20230609081518.3039120-1-qi.zheng@linux.dev/
> [4]. https://lore.kernel.org/lkml/ZIJhou1d55d4H1s0@dread.disaster.area/
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
> include/linux/shrinker.h | 6 ++++++
> mm/vmscan.c | 33 ++++++++++++++-------------------
> 2 files changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 7bfeb2f25246..b0c6c2df9db8 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -74,6 +74,7 @@ struct shrinker {
>
> refcount_t refcount;
> struct completion completion_wait;
> + struct rcu_head rcu;
>
> void *private_data;
>
> @@ -123,6 +124,11 @@ struct shrinker *shrinker_alloc_and_init(count_objects_cb count,
> void shrinker_free(struct shrinker *shrinker);
> void unregister_and_free_shrinker(struct shrinker *shrinker);
>
> +static inline bool shrinker_try_get(struct shrinker *shrinker)
> +{
> + return refcount_inc_not_zero(&shrinker->refcount);
> +}
> +
> static inline void shrinker_put(struct shrinker *shrinker)
> {
> if (refcount_dec_and_test(&shrinker->refcount))
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6f9c4750effa..767569698946 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -57,6 +57,7 @@
> #include <linux/khugepaged.h>
> #include <linux/rculist_nulls.h>
> #include <linux/random.h>
> +#include <linux/rculist.h>
>
> #include <asm/tlbflush.h>
> #include <asm/div64.h>
> @@ -742,7 +743,7 @@ void register_shrinker_prepared(struct shrinker *shrinker)
> down_write(&shrinker_rwsem);
> refcount_set(&shrinker->refcount, 1);
> init_completion(&shrinker->completion_wait);
> - list_add_tail(&shrinker->list, &shrinker_list);
> + list_add_tail_rcu(&shrinker->list, &shrinker_list);
> shrinker->flags |= SHRINKER_REGISTERED;
> shrinker_debugfs_add(shrinker);
> up_write(&shrinker_rwsem);
> @@ -800,7 +801,7 @@ void unregister_shrinker(struct shrinker *shrinker)
> wait_for_completion(&shrinker->completion_wait);
>
> down_write(&shrinker_rwsem);
> - list_del(&shrinker->list);
> + list_del_rcu(&shrinker->list);
> shrinker->flags &= ~SHRINKER_REGISTERED;
> if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> unregister_memcg_shrinker(shrinker);
> @@ -845,7 +846,7 @@ EXPORT_SYMBOL(shrinker_free);
> void unregister_and_free_shrinker(struct shrinker *shrinker)
> {
> unregister_shrinker(shrinker);
> - kfree(shrinker);
> + kfree_rcu(shrinker, rcu);
> }
> EXPORT_SYMBOL(unregister_and_free_shrinker);
>
> @@ -1067,33 +1068,27 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
> return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>
> - if (!down_read_trylock(&shrinker_rwsem))
> - goto out;
> -
> - list_for_each_entry(shrinker, &shrinker_list, list) {
> + rcu_read_lock();
> + list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
> struct shrink_control sc = {
> .gfp_mask = gfp_mask,
> .nid = nid,
> .memcg = memcg,
> };
>
> + if (!shrinker_try_get(shrinker))
> + continue;
> + rcu_read_unlock();
I don't think you can do this unlock?
> +
> ret = do_shrink_slab(&sc, shrinker, priority);
> if (ret == SHRINK_EMPTY)
> ret = 0;
> freed += ret;
> - /*
> - * Bail out if someone want to register a new shrinker to
> - * prevent the registration from being stalled for long periods
> - * by parallel ongoing shrinking.
> - */
> - if (rwsem_is_contended(&shrinker_rwsem)) {
> - freed = freed ? : 1;
> - break;
> - }
> - }
>
> - up_read(&shrinker_rwsem);
> -out:
> + rcu_read_lock();
That new rcu_read_lock() won't help AFAIK, the whole
list_for_each_entry_rcu() needs to be under the single rcu_read_lock() to be
safe.
IIUC this is why Dave in [4] suggests unifying shrink_slab() with
shrink_slab_memcg(), as the latter doesn't iterate the list but uses IDR.
> + shrinker_put(shrinker);
> + }
> + rcu_read_unlock();
> cond_resched();
> return freed;
> }
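For reference, the conventionally safe shape Vlastimil describes keeps the
whole walk inside a single read-side critical section. A minimal sketch
follows (scan_one_shrinker() is a hypothetical non-sleeping helper; the real
do_shrink_slab() may sleep, which is exactly why it cannot be called this
way):

	/*
	 * Every ->next dereference done by list_for_each_entry_rcu() happens
	 * inside one rcu_read_lock()/rcu_read_unlock() pair, so entries that
	 * are concurrently removed cannot be freed under the iterator.
	 * Nothing in the loop body may sleep.
	 */
	static void walk_shrinkers_nosleep(void)
	{
		struct shrinker *shrinker;

		rcu_read_lock();
		list_for_each_entry_rcu(shrinker, &shrinker_list, list)
			scan_one_shrinker(shrinker);	/* hypothetical, must not sleep */
		rcu_read_unlock();
	}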
* Re: [PATCH 02/29] mm: vmscan: introduce some helpers for dynamically allocating shrinker
[not found] ` <20230622085335.77010-3-zhengqi.arch@bytedance.com>
@ 2023-06-23 6:12 ` Dave Chinner via Virtualization
0 siblings, 0 replies; 6+ messages in thread
From: Dave Chinner via Virtualization @ 2023-06-23 6:12 UTC (permalink / raw)
To: Qi Zheng
Cc: djwong, roman.gushchin, dri-devel, virtualization, linux-mm,
dm-devel, linux-ext4, paulmck, linux-arm-msm, intel-gfx,
linux-nfs, linux-raid, linux-bcache, vbabka, brauner, tytso,
linux-kernel, linux-xfs, linux-fsdevel, akpm, linux-btrfs, tkhai
On Thu, Jun 22, 2023 at 04:53:08PM +0800, Qi Zheng wrote:
> Introduce some helpers for dynamically allocating shrinker instances;
> their uses are as follows:
>
> 1. shrinker_alloc_and_init()
>
> Used to allocate and initialize a shrinker instance; the priv_data
> parameter passes a pointer to the structure in which the shrinker
> is embedded.
>
> 2. shrinker_free()
>
> Used to free the shrinker instance when shrinker registration
> fails.
>
> 3. unregister_and_free_shrinker()
>
> Used to unregister and free the shrinker instance, and the kfree()
> will be changed to kfree_rcu() later.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
> include/linux/shrinker.h | 12 ++++++++++++
> mm/vmscan.c | 35 +++++++++++++++++++++++++++++++++++
> 2 files changed, 47 insertions(+)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 43e6fcabbf51..8e9ba6fa3fcc 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -107,6 +107,18 @@ extern void unregister_shrinker(struct shrinker *shrinker);
> extern void free_prealloced_shrinker(struct shrinker *shrinker);
> extern void synchronize_shrinkers(void);
>
> +typedef unsigned long (*count_objects_cb)(struct shrinker *s,
> + struct shrink_control *sc);
> +typedef unsigned long (*scan_objects_cb)(struct shrinker *s,
> + struct shrink_control *sc);
> +
> +struct shrinker *shrinker_alloc_and_init(count_objects_cb count,
> + scan_objects_cb scan, long batch,
> + int seeks, unsigned flags,
> + void *priv_data);
> +void shrinker_free(struct shrinker *shrinker);
> +void unregister_and_free_shrinker(struct shrinker *shrinker);
Hmmmm. Not exactly how I envisioned this to be done.
Ok, this will definitely work, but I don't think it is an
improvement. It's certainly not what I was thinking of when I
suggested dynamically allocating shrinkers.
The main issue is that this doesn't simplify the API - it expands it
and creates a minefield of old and new functions that have to be
used in exactly the right order for the right things to happen.
What I was thinking of was moving the entire shrinker setup code
over to the prealloc/register_prepared() algorithm, where the setup
is already separated from the activation of the shrinker.
That is, we start by renaming prealloc_shrinker() to
shrinker_alloc(), adding a flags field to tell it everything that it
needs to alloc (i.e. the NUMA/MEMCG_AWARE flags) and having it
return a fully allocated shrinker ready to register. Initially
this also contains an internal flag to say the shrinker was
allocated so that unregister_shrinker() knows to free it.
The caller then fills out the shrinker functions, seeks, etc. just
like they do now, and then calls register_shrinker_prepared() to make
the shrinker active when it wants to turn it on.
When it is time to tear down the shrinker, no API needs to change.
unregister_shrinker() does all the shutdown and frees all the
internal memory like it does now. If the shrinker is also marked as
allocated, it frees the shrinker via RCU, too.
Once everything is converted to this API, we then remove
register_shrinker(), rename register_shrinker_prepared() to
shrinker_register(), rename unregister_shrinker to
shrinker_unregister(), get rid of the internal "allocated" flag
and always free the shrinker.
At the end of the patchset, every shrinker should be set
up in a manner like this:
	sb->shrinker = shrinker_alloc(SHRINKER_MEMCG_AWARE|SHRINKER_NUMA_AWARE,
				      "sb-%s", type->name);
	if (!sb->shrinker)
		return -ENOMEM;

	sb->shrinker->count_objects = super_cache_count;
	sb->shrinker->scan_objects = super_cache_scan;
	sb->shrinker->batch = 1024;
	sb->shrinker->private = sb;
	.....
	shrinker_register(sb->shrinker);
And teardown is just a call to shrinker_unregister(sb->shrinker)
as it is now.
i.e. the entire shrinker registration API is now just three
functions, down from the current four, and much simpler than
the seven functions this patch set results in...
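For concreteness, a rough sketch of the allocation/teardown side of that
scheme (function and flag names are illustrative; SHRINKER_ALLOCATED is an
assumed internal flag, the rcu head is the one this series adds, and none
of this is existing kernel API):

	/* Sketch only: allocate a shrinker and remember that it was allocated. */
	struct shrinker *shrinker_alloc(unsigned int flags)
	{
		struct shrinker *shrinker;

		shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
		if (!shrinker)
			return NULL;

		shrinker->flags = flags | SHRINKER_ALLOCATED;
		shrinker->seeks = DEFAULT_SEEKS;
		return shrinker;
	}

	void shrinker_unregister(struct shrinker *shrinker)
	{
		if (!shrinker)
			return;

		unregister_shrinker(shrinker);		/* existing teardown path */
		if (shrinker->flags & SHRINKER_ALLOCATED)
			kfree_rcu(shrinker, rcu);	/* free after readers are done */
	}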
The other advantage of this is that it will break all the existing
out of tree code and third party modules using the old API and will
no longer work with a kernel using lockless slab shrinkers. They
need to break (both at the source and binary levels) to stop bad
things from happening due to using unconverted shrinkers in the new
setup.
-Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 24/29] mm: vmscan: make global slab shrink lockless
2023-06-22 15:12 ` [PATCH 24/29] mm: vmscan: make global slab shrink lockless Vlastimil Babka
@ 2023-06-23 6:29 ` Dave Chinner via Virtualization
[not found] ` <a21047bb-3b87-a50a-94a7-f3fa4847bc08@bytedance.com>
0 siblings, 1 reply; 6+ messages in thread
From: Dave Chinner via Virtualization @ 2023-06-23 6:29 UTC (permalink / raw)
To: Vlastimil Babka
Cc: djwong, roman.gushchin, Qi Zheng, virtualization, linux-mm,
dm-devel, linux-ext4, paulmck, linux-arm-msm, intel-gfx,
linux-nfs, linux-raid, linux-bcache, dri-devel, brauner, tytso,
linux-kernel, linux-xfs, linux-fsdevel, akpm, linux-btrfs, tkhai
On Thu, Jun 22, 2023 at 05:12:02PM +0200, Vlastimil Babka wrote:
> On 6/22/23 10:53, Qi Zheng wrote:
> > @@ -1067,33 +1068,27 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> > if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
> > return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
> >
> > - if (!down_read_trylock(&shrinker_rwsem))
> > - goto out;
> > -
> > - list_for_each_entry(shrinker, &shrinker_list, list) {
> > + rcu_read_lock();
> > + list_for_each_entry_rcu(shrinker, &shrinker_list, list) {
> > struct shrink_control sc = {
> > .gfp_mask = gfp_mask,
> > .nid = nid,
> > .memcg = memcg,
> > };
> >
> > + if (!shrinker_try_get(shrinker))
> > + continue;
> > + rcu_read_unlock();
>
> I don't think you can do this unlock?
>
> > +
> > ret = do_shrink_slab(&sc, shrinker, priority);
> > if (ret == SHRINK_EMPTY)
> > ret = 0;
> > freed += ret;
> > - /*
> > - * Bail out if someone want to register a new shrinker to
> > - * prevent the registration from being stalled for long periods
> > - * by parallel ongoing shrinking.
> > - */
> > - if (rwsem_is_contended(&shrinker_rwsem)) {
> > - freed = freed ? : 1;
> > - break;
> > - }
> > - }
> >
> > - up_read(&shrinker_rwsem);
> > -out:
> > + rcu_read_lock();
>
> That new rcu_read_lock() won't help AFAIK, the whole
> list_for_each_entry_rcu() needs to be under the single rcu_read_lock() to be
> safe.
Yeah, that's the pattern we've been taught and the one we can look
at and immediately say "this is safe".
This is a different pattern, as has been explained by Qi, and I
think it *might* be safe.
*However.*
Right now I don't have time to go through a novel RCU list iteration
pattern one step at a time to determine the correctness of the
algorithm. I'm mostly worried about list manipulations that can
occur outside rcu_read_lock() section bleeding into the RCU
critical section because rcu_read_lock() by itself is not a memory
barrier.
Maybe Paul has seen this pattern often enough he could simply tell
us what conditions it is safe in. But for me to work that out from
first principles? I just don't have the time to do that right now.
> IIUC this is why Dave in [4] suggests unifying shrink_slab() with
> shrink_slab_memcg(), as the latter doesn't iterate the list but uses IDR.
Yes, I suggested the IDR route because radix tree lookups under RCU
with reference counted objects are a known safe pattern that we can
easily confirm is correct or not. Hence I suggested the unification
+ IDR route because it makes the life of reviewers so, so much
easier...
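The pattern being referred to looks roughly like the sketch below: each
iteration does a fresh lookup by ID under rcu_read_lock(), takes a
reference, drops the RCU lock, and only then calls the sleepable
do_shrink_slab(), so no list cursor has to survive outside an RCU critical
section. (Sketch only: it borrows shrinker_idr/shrinker_nr_max from the
memcg path, the shrinker_try_get()/shrinker_put() helpers from this series,
and ignores the shrinker_info bitmap the real shrink_slab_memcg() consults.)

	static unsigned long shrink_by_id(gfp_t gfp_mask, int nid,
					  struct mem_cgroup *memcg, int priority)
	{
		unsigned long ret, freed = 0;
		struct shrinker *shrinker;
		int id;

		for (id = 0; id < shrinker_nr_max; id++) {
			struct shrink_control sc = {
				.gfp_mask = gfp_mask,
				.nid = nid,
				.memcg = memcg,
			};

			rcu_read_lock();
			shrinker = idr_find(&shrinker_idr, id);
			if (!shrinker || !shrinker_try_get(shrinker)) {
				rcu_read_unlock();
				continue;
			}
			rcu_read_unlock();

			/* The reference keeps the shrinker alive; sleeping is fine. */
			ret = do_shrink_slab(&sc, shrinker, priority);
			if (ret == SHRINK_EMPTY)
				ret = 0;
			freed += ret;

			shrinker_put(shrinker);
		}
		return freed;
	}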
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 24/29] mm: vmscan: make global slab shrink lockless
[not found] ` <a21047bb-3b87-a50a-94a7-f3fa4847bc08@bytedance.com>
@ 2023-06-23 22:19 ` Dave Chinner via Virtualization
0 siblings, 0 replies; 6+ messages in thread
From: Dave Chinner via Virtualization @ 2023-06-23 22:19 UTC (permalink / raw)
To: Qi Zheng
Cc: djwong, roman.gushchin, dri-devel, virtualization, linux-mm,
dm-devel, linux-ext4, paulmck, linux-arm-msm, intel-gfx,
linux-nfs, linux-raid, linux-bcache, Vlastimil Babka, brauner,
tytso, linux-kernel, linux-xfs, linux-fsdevel, akpm, linux-btrfs,
tkhai
On Fri, Jun 23, 2023 at 09:10:57PM +0800, Qi Zheng wrote:
> On 2023/6/23 14:29, Dave Chinner wrote:
> > On Thu, Jun 22, 2023 at 05:12:02PM +0200, Vlastimil Babka wrote:
> > > On 6/22/23 10:53, Qi Zheng wrote:
> > Yes, I suggested the IDR route because radix tree lookups under RCU
> > with reference counted objects are a known safe pattern that we can
> > easily confirm is correct or not. Hence I suggested the unification
> > + IDR route because it makes the life of reviewers so, so much
> > easier...
>
> In fact, I originally planned to try the unification + IDR method you
> suggested at the beginning. But when CONFIG_MEMCG is disabled,
> the struct mem_cgroup is not even defined, and root_mem_cgroup and
> shrinker_info will not be allocated. This required more code changes, so
> I ended up keeping the shrinker_list and implementing the above pattern.
Yes. Go back and read what I originally said needed to be done
first. In the case of CONFIG_MEMCG=n, a dummy root memcg still needs
to exist that holds all of the global shrinkers. Then shrink_slab()
is only ever passed a memcg that should be iterated.
Yes, it needs changes external to the shrinker code itself to be
made to work. And even if memcg's are not enabled, we can still use
the memcg structures to ensure a common abstraction is used for the
shrinker tracking infrastructure....
> If the above pattern is not safe, I will go back to the unification +
> IDR method.
And that is exactly how we got into this mess in the first place....
-Dave
--
Dave Chinner
david@fromorbit.com