* [PATCH v3 0/3] debugobjects: Do some minor optimizations, fixes and cleanups
@ 2024-09-11 8:35 Zhen Lei
2024-09-11 8:35 ` [PATCH v3 1/3] debugobjects: Delete a piece of redundant code Zhen Lei
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Zhen Lei @ 2024-09-11 8:35 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, linux-kernel; +Cc: Zhen Lei
v2 --> v3:
1. Use the lockless mode to fill the pool.
v1 --> v2:
1. Fix the compilation attributes of some global variables
2. Update comments and commit messages.
v1:
A summary of the changes in the first two patches follows:
if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
return;
- while (READ_ONCE(obj_nr_tofree) && (READ_ONCE(obj_pool_free) < obj_pool_min_free)) {
+ if (READ_ONCE(obj_nr_tofree)) {
raw_spin_lock_irqsave(&pool_lock, flags);
- while (obj_nr_tofree && (obj_pool_free < obj_pool_min_free)) {
+ while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
... ...
}
raw_spin_unlock_irqrestore(&pool_lock, flags);
Zhen Lei (3):
debugobjects: Delete a piece of redundant code
debugobjects: Use hlist_splice_init() to reduce lock conflicts
debugobjects: Reduce contention on pool lock in fill_pool()
lib/debugobjects.c | 105 ++++++++++++++++++++++++++++++---------------
1 file changed, 70 insertions(+), 35 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v3 1/3] debugobjects: Delete a piece of redundant code
2024-09-11 8:35 [PATCH v3 0/3] debugobjects: Do some minor optimizations, fixes and cleanups Zhen Lei
@ 2024-09-11 8:35 ` Zhen Lei
2024-10-15 15:36 ` [tip: core/debugobjects] " tip-bot2 for Zhen Lei
2024-09-11 8:35 ` [PATCH v3 2/3] debugobjects: Use hlist_splice_init() to reduce lock conflicts Zhen Lei
2024-09-11 8:35 ` [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool() Zhen Lei
2 siblings, 1 reply; 11+ messages in thread
From: Zhen Lei @ 2024-09-11 8:35 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, linux-kernel; +Cc: Zhen Lei
The statically allocated objects all reside in obj_static_pool[], and the
whole memory of obj_static_pool[] is reclaimed later. Therefore there is
no need to detach the remaining static nodes on list obj_pool one by one;
no one will use them anymore. Writing INIT_HLIST_HEAD(&obj_pool) would be
enough, and since hlist_move_list() discards the old list anyway, even
that can be omitted.
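To see why the deletion loop is redundant, the semantics of hlist_move_list() can be modelled in userspace. This is an illustrative sketch whose struct and function names mirror the kernel's <linux/list.h>, not the kernel source itself:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the kernel hlist; a simplified model for
 * illustration only. */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

/*
 * hlist_move_list() unconditionally overwrites new->first with
 * old->first. Whatever was on 'new' before is dropped without being
 * walked, so deleting the destination entries beforehand buys nothing.
 */
static void hlist_move_list(struct hlist_head *old, struct hlist_head *new)
{
	new->first = old->first;
	if (new->first)
		new->first->pprev = &new->first;
	old->first = NULL;
}
```

After the move the destination head points only at the source chain; any stale entries previously hanging off it are simply unreachable, exactly as if they had been deleted one by one.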
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
lib/debugobjects.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 5ce473ad499bad3..df48acc5d4b34fc 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -1325,10 +1325,10 @@ static int __init debug_objects_replace_static_objects(void)
* active object references.
*/
- /* Remove the statically allocated objects from the pool */
- hlist_for_each_entry_safe(obj, tmp, &obj_pool, node)
- hlist_del(&obj->node);
- /* Move the allocated objects to the pool */
+ /*
+ * Replace the statically allocated objects list with the allocated
+ * objects list.
+ */
hlist_move_list(&objects, &obj_pool);
/* Replace the active object references */
--
2.34.1
* [PATCH v3 2/3] debugobjects: Use hlist_splice_init() to reduce lock conflicts
2024-09-11 8:35 [PATCH v3 0/3] debugobjects: Do some minor optimizations, fixes and cleanups Zhen Lei
2024-09-11 8:35 ` [PATCH v3 1/3] debugobjects: Delete a piece of redundant code Zhen Lei
@ 2024-09-11 8:35 ` Zhen Lei
2024-10-15 15:36 ` [tip: core/debugobjects] debugobjects: Collect newly allocated objects in a list to reduce lock contention tip-bot2 for Zhen Lei
2024-09-11 8:35 ` [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool() Zhen Lei
2 siblings, 1 reply; 11+ messages in thread
From: Zhen Lei @ 2024-09-11 8:35 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, linux-kernel; +Cc: Zhen Lei
Concatenate the newly allocated debug_obj control blocks into a local
sub-list outside the lock, so that the time spent inside the lock, and
with it the probability of lock contention, is reduced.
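The payoff comes from hlist_splice_init(): the whole pre-built chain is linked in front of the destination with a constant number of pointer updates, instead of ODEBUG_BATCH_SIZE hlist_add_head() calls under the lock. A minimal userspace sketch of the helper's semantics (names mirror the kernel's <linux/list.h>, but this is a simplified model, not the kernel source):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the kernel hlist helpers used by this patch;
 * a simplified model for illustration only. */
struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

/*
 * Splice the whole chain on 'from' (whose tail node is 'last') in front
 * of 'to'. The cost is a constant number of pointer updates no matter
 * how many nodes were collected, so the pool lock is held only briefly.
 */
static void hlist_splice_init(struct hlist_head *from, struct hlist_node *last,
			      struct hlist_head *to)
{
	if (to->first)
		to->first->pprev = &last->next;
	last->next = to->first;
	to->first = from->first;
	from->first->pprev = &to->first;
	from->first = NULL;
}
```

In fill_pool() the first successfully allocated object becomes 'last' (the tail of the locally built list), so a single splice under pool_lock publishes the entire batch.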
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
lib/debugobjects.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index df48acc5d4b34fc..19a91c6bc67eb9c 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -161,23 +161,25 @@ static void fill_pool(void)
return;
while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
- struct debug_obj *new[ODEBUG_BATCH_SIZE];
+ struct debug_obj *new, *last = NULL;
+ HLIST_HEAD(freelist);
int cnt;
for (cnt = 0; cnt < ODEBUG_BATCH_SIZE; cnt++) {
- new[cnt] = kmem_cache_zalloc(obj_cache, gfp);
- if (!new[cnt])
+ new = kmem_cache_zalloc(obj_cache, gfp);
+ if (!new)
break;
+ hlist_add_head(&new->node, &freelist);
+ if (!last)
+ last = new;
}
if (!cnt)
return;
raw_spin_lock_irqsave(&pool_lock, flags);
- while (cnt) {
- hlist_add_head(&new[--cnt]->node, &obj_pool);
- debug_objects_allocated++;
- WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
- }
+ hlist_splice_init(&freelist, &last->node, &obj_pool);
+ debug_objects_allocated += cnt;
+ WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
raw_spin_unlock_irqrestore(&pool_lock, flags);
}
}
--
2.34.1
* [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-09-11 8:35 [PATCH v3 0/3] debugobjects: Do some minor optimizations, fixes and cleanups Zhen Lei
2024-09-11 8:35 ` [PATCH v3 1/3] debugobjects: Delete a piece of redundant code Zhen Lei
2024-09-11 8:35 ` [PATCH v3 2/3] debugobjects: Use hlist_splice_init() to reduce lock conflicts Zhen Lei
@ 2024-09-11 8:35 ` Zhen Lei
2024-09-11 9:04 ` Leizhen (ThunderTown)
2024-10-07 14:04 ` Thomas Gleixner
2 siblings, 2 replies; 11+ messages in thread
From: Zhen Lei @ 2024-09-11 8:35 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, linux-kernel; +Cc: Zhen Lei
Once the conditions for starting a fill are met, all cores that call
fill() afterwards are blocked until the first core completes the fill
operation.
Since moving a set of free nodes from the obj_to_free list into obj_pool
is cheap, attempt that move whenever a fill is necessary, regardless of
whether the context is preemptible. To reduce contention on the pool
lock, use an atomic operation to test the state: only the first comer is
allowed to try, and a later comer that finds someone already trying
gives up.
The path that fills the pool with freshly allocated nodes can use a
similar lockless mechanism, with one difference: the global obj_to_free
list can only be drained exclusively by one core, whereas
kmem_cache_zalloc() can be invoked by multiple cores simultaneously.
Therefore use an atomic counter to record how many cores are currently
filling, which also reduces atomic write conflicts during the check. In
principle only the first comer fills, but with a very low probability
several comers may fill at the same time.
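The first-comer arbitration can be sketched in userspace with C11 atomics as a stand-in for the kernel's test_bit()/test_and_set_bit(); the function names try_enter_fill()/exit_fill() are invented for illustration and are not kernel APIs:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the first-comer gate in fill_pool_from_freelist(),
 * modelled with C11 atomics instead of kernel bitops. */
static atomic_uint fill_state;

static bool try_enter_fill(void)
{
	/* Cheap plain read first: avoids making the cache line
	 * exclusive when someone else is already filling. */
	if (atomic_load_explicit(&fill_state, memory_order_relaxed))
		return false;
	/* Only the first comer sees 0 here and wins the gate. */
	return atomic_exchange(&fill_state, 1) == 0;
}

static void exit_fill(void)
{
	atomic_store(&fill_state, 0);
}
```

A caller that loses simply returns and relies on the winner (or a later call) to refill the pool, mirroring the "last comer gives up" behaviour described above.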
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
lib/debugobjects.c | 79 ++++++++++++++++++++++++++++++++--------------
1 file changed, 56 insertions(+), 23 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 19a91c6bc67eb9c..568aae9cd9c3c4f 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -125,14 +125,10 @@ static const char *obj_states[ODEBUG_STATE_MAX] = {
[ODEBUG_STATE_NOTAVAILABLE] = "not available",
};
-static void fill_pool(void)
+static void fill_pool_from_freelist(void)
{
- gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+ static unsigned long state;
struct debug_obj *obj;
- unsigned long flags;
-
- if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
- return;
/*
* Reuse objs from the global obj_to_free list; they will be
@@ -141,25 +137,53 @@ static void fill_pool(void)
* obj_nr_tofree is checked locklessly; the READ_ONCE() pairs with
* the WRITE_ONCE() in pool_lock critical sections.
*/
- if (READ_ONCE(obj_nr_tofree)) {
- raw_spin_lock_irqsave(&pool_lock, flags);
- /*
- * Recheck with the lock held as the worker thread might have
- * won the race and freed the global free list already.
- */
- while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
- obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
- hlist_del(&obj->node);
- WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
- hlist_add_head(&obj->node, &obj_pool);
- WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
- }
- raw_spin_unlock_irqrestore(&pool_lock, flags);
+ if (!READ_ONCE(obj_nr_tofree))
+ return;
+
+ /*
+ * Prevent the context from being scheduled or interrupted after
+ * setting the state flag;
+ */
+ guard(irqsave)();
+
+ /*
+ * Avoid lock contention on &pool_lock and avoid making the cache
+ * line exclusive by testing the bit before attempting to set it.
+ */
+ if (test_bit(0, &state) || test_and_set_bit(0, &state))
+ return;
+
+ guard(raw_spinlock)(&pool_lock);
+ /*
+ * Recheck with the lock held as the worker thread might have
+ * won the race and freed the global free list already.
+ */
+ while (obj_nr_tofree && (obj_pool_free < debug_objects_pool_min_level)) {
+ obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
+ hlist_del(&obj->node);
+ WRITE_ONCE(obj_nr_tofree, obj_nr_tofree - 1);
+ hlist_add_head(&obj->node, &obj_pool);
+ WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
}
+ clear_bit(0, &state);
+}
+
+static void fill_pool(void)
+{
+ gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;
+ static atomic_t cpus_allocating;
if (unlikely(!obj_cache))
return;
+ /*
+ * Avoid allocation and lock contention when another CPU is already
+ * in the allocation path.
+ */
+ if (atomic_read(&cpus_allocating))
+ return;
+
+ atomic_inc(&cpus_allocating);
while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
struct debug_obj *new, *last = NULL;
HLIST_HEAD(freelist);
@@ -174,14 +198,14 @@ static void fill_pool(void)
last = new;
}
if (!cnt)
- return;
+ break;
- raw_spin_lock_irqsave(&pool_lock, flags);
+ guard(raw_spinlock_irqsave)(&pool_lock);
hlist_splice_init(&freelist, &last->node, &obj_pool);
debug_objects_allocated += cnt;
WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
- raw_spin_unlock_irqrestore(&pool_lock, flags);
}
+ atomic_dec(&cpus_allocating);
}
/*
@@ -600,6 +624,15 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
static void debug_objects_fill_pool(void)
{
+ if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+ return;
+
+ /* Try reusing objects from obj_to_free_list */
+ fill_pool_from_freelist();
+
+ if (likely(READ_ONCE(obj_pool_free) >= debug_objects_pool_min_level))
+ return;
+
/*
* On RT enabled kernels the pool refill must happen in preemptible
* context -- for !RT kernels we rely on the fact that spinlock_t and
--
2.34.1
* Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-09-11 8:35 ` [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool() Zhen Lei
@ 2024-09-11 9:04 ` Leizhen (ThunderTown)
2024-09-17 12:19 ` Thomas Gleixner
2024-10-07 14:04 ` Thomas Gleixner
1 sibling, 1 reply; 11+ messages in thread
From: Leizhen (ThunderTown) @ 2024-09-11 9:04 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, linux-kernel
On 2024/9/11 16:35, Zhen Lei wrote:
> If the conditions for starting fill are met, it means that all cores that
> call fill() later are blocked until the first core completes the fill
> operation.
>
> Since it is low cost to move a set of free nodes from list obj_to_free
> into obj_pool, once it is necessary to fill, trying to move regardless
> of whether the context is preemptible. To reduce contention on pool
> lock, use atomic operation to test state. Only the first comer is allowed
> to try. If the last comer finds that someone is already trying, it will
> give up.
>
> Scenarios that use allocated node filling can also be applied lockless
> mechanisms, but slightly different. The global list obj_to_free can only
> be operated exclusively by one core, while kmem_cache_zalloc() can be
> invoked by multiple cores simultaneously. Use atomic counting to mark how
> many cores are filling, to reduce atomic write conflicts during check. In
> principle, only the first comer is allowed to fill, but there is a very
> low probability that multiple comers may fill at the time.
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Hi Thomas,
I was going to credit the "Signed-off-by" to you, because apart from the
following one-line change you wrote everything. But you are the
maintainer, and it does not seem right for me to post a patch carrying
your Signed-off-by. Please feel free to change it, but do not forget to
add a "Reported-by" or "Tested-by" for me.
@@ -174,14 +198,14 @@ static void fill_pool(void)
last = new;
}
if (!cnt)
- return;
+ break;
--
Regards,
Zhen Lei
* Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-09-11 9:04 ` Leizhen (ThunderTown)
@ 2024-09-17 12:19 ` Thomas Gleixner
2024-09-25 2:03 ` Leizhen (ThunderTown)
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Gleixner @ 2024-09-17 12:19 UTC (permalink / raw)
To: Leizhen (ThunderTown), Andrew Morton, linux-kernel
On Wed, Sep 11 2024 at 17:04, Leizhen wrote:
> On 2024/9/11 16:35, Zhen Lei wrote:
>> Scenarios that use allocated node filling can also be applied lockless
>> mechanisms, but slightly different. The global list obj_to_free can only
>> be operated exclusively by one core, while kmem_cache_zalloc() can be
>> invoked by multiple cores simultaneously. Use atomic counting to mark how
>> many cores are filling, to reduce atomic write conflicts during check. In
>> principle, only the first comer is allowed to fill, but there is a very
>> low probability that multiple comers may fill at the time.
>>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>
> Hi, Thomas:
> I was going to mark "Signed-off-by" as you. Because except for the
> following line of changes, you wrote everything. But you're maintainer.
> It doesn't seem good if I post a patch with your Signed-off-by. Please
> feel free to change it, but do not forget to add "Reported-by" or
> "Tested-by" for me.
Suggested-by is fine. I will look at it after I'm back from travel and
conferencing.
Thanks,
tglx
* Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-09-17 12:19 ` Thomas Gleixner
@ 2024-09-25 2:03 ` Leizhen (ThunderTown)
0 siblings, 0 replies; 11+ messages in thread
From: Leizhen (ThunderTown) @ 2024-09-25 2:03 UTC (permalink / raw)
To: Thomas Gleixner, Andrew Morton, linux-kernel
On 2024/9/17 20:19, Thomas Gleixner wrote:
> On Wed, Sep 11 2024 at 17:04, Leizhen wrote:
>> On 2024/9/11 16:35, Zhen Lei wrote:
>>> Scenarios that use allocated node filling can also be applied lockless
>>> mechanisms, but slightly different. The global list obj_to_free can only
>>> be operated exclusively by one core, while kmem_cache_zalloc() can be
>>> invoked by multiple cores simultaneously. Use atomic counting to mark how
>>> many cores are filling, to reduce atomic write conflicts during check. In
>>> principle, only the first comer is allowed to fill, but there is a very
>>> low probability that multiple comers may fill at the time.
>>>
>>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>>
>> Hi, Thomas:
>> I was going to mark "Signed-off-by" as you. Because except for the
>> following line of changes, you wrote everything. But you're maintainer.
>> It doesn't seem good if I post a patch with your Signed-off-by. Please
>> feel free to change it, but do not forget to add "Reported-by" or
>> "Tested-by" for me.
>
> Suggested-by is fine. I look at it after back from travel and
> conferencing.
Thank you very much. You're such a gentleman.
>
> Thanks,
>
> tglx
> .
>
--
Regards,
Zhen Lei
* Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-09-11 8:35 ` [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool() Zhen Lei
2024-09-11 9:04 ` Leizhen (ThunderTown)
@ 2024-10-07 14:04 ` Thomas Gleixner
2024-10-10 3:33 ` Leizhen (ThunderTown)
1 sibling, 1 reply; 11+ messages in thread
From: Thomas Gleixner @ 2024-10-07 14:04 UTC (permalink / raw)
To: Zhen Lei, Andrew Morton, linux-kernel; +Cc: Zhen Lei
On Wed, Sep 11 2024 at 16:35, Zhen Lei wrote:
> + /*
> + * Avoid allocation and lock contention when another CPU is already
> + * in the allocation path.
> + */
> + if (atomic_read(&cpus_allocating))
> + return;
Hmm. I really don't want to rely on a single CPU doing the allocations
when the pool level has reached a critical state. That CPU might be
scheduled out while all others keep consuming objects, up to the point
where the pool becomes empty.
Let me integrate this into the series I'm going to post soon.
Thanks,
tglx
* Re: [PATCH v3 3/3] debugobjects: Reduce contention on pool lock in fill_pool()
2024-10-07 14:04 ` Thomas Gleixner
@ 2024-10-10 3:33 ` Leizhen (ThunderTown)
0 siblings, 0 replies; 11+ messages in thread
From: Leizhen (ThunderTown) @ 2024-10-10 3:33 UTC (permalink / raw)
To: Thomas Gleixner, Andrew Morton, linux-kernel
On 2024/10/7 22:04, Thomas Gleixner wrote:
> On Wed, Sep 11 2024 at 16:35, Zhen Lei wrote:
>> + /*
>> + * Avoid allocation and lock contention when another CPU is already
>> + * in the allocation path.
>> + */
>> + if (atomic_read(&cpus_allocating))
>> + return;
>
> Hmm. I really don't want to rely on a single CPU doing allocations in
> case that the pool level reached a critical state. That CPU might be
> scheduled out and all others are consuming objects up to the point where
> the pool becomes empty.
That makes sense, you're thoughtful.
>
> Let me integrate this into the series I'm going to post soon.
>
> Thanks,
>
> tglx
> .
>
--
Regards,
Zhen Lei
* [tip: core/debugobjects] debugobjects: Collect newly allocated objects in a list to reduce lock contention
2024-09-11 8:35 ` [PATCH v3 2/3] debugobjects: Use hlist_splice_init() to reduce lock conflicts Zhen Lei
@ 2024-10-15 15:36 ` tip-bot2 for Zhen Lei
0 siblings, 0 replies; 11+ messages in thread
From: tip-bot2 for Zhen Lei @ 2024-10-15 15:36 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Zhen Lei, Thomas Gleixner, x86, linux-kernel
The following commit has been merged into the core/debugobjects branch of tip:
Commit-ID: 813fd07858cfb410bc9574c05b7922185f65989b
Gitweb: https://git.kernel.org/tip/813fd07858cfb410bc9574c05b7922185f65989b
Author: Zhen Lei <thunder.leizhen@huawei.com>
AuthorDate: Mon, 07 Oct 2024 18:49:53 +02:00
Committer: Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Tue, 15 Oct 2024 17:30:30 +02:00
debugobjects: Collect newly allocated objects in a list to reduce lock contention
Collect the newly allocated debug objects in a list outside the lock, so
that the lock hold time and the potential lock contention are reduced.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240911083521.2257-3-thunder.leizhen@huawei.com
Link: https://lore.kernel.org/all/20241007164913.073653668@linutronix.de
---
lib/debugobjects.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index df48acc..798ce5a 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -161,23 +161,25 @@ static void fill_pool(void)
return;
while (READ_ONCE(obj_pool_free) < debug_objects_pool_min_level) {
- struct debug_obj *new[ODEBUG_BATCH_SIZE];
+ struct debug_obj *new, *last = NULL;
+ HLIST_HEAD(head);
int cnt;
for (cnt = 0; cnt < ODEBUG_BATCH_SIZE; cnt++) {
- new[cnt] = kmem_cache_zalloc(obj_cache, gfp);
- if (!new[cnt])
+ new = kmem_cache_zalloc(obj_cache, gfp);
+ if (!new)
break;
+ hlist_add_head(&new->node, &head);
+ if (!last)
+ last = new;
}
if (!cnt)
return;
raw_spin_lock_irqsave(&pool_lock, flags);
- while (cnt) {
- hlist_add_head(&new[--cnt]->node, &obj_pool);
- debug_objects_allocated++;
- WRITE_ONCE(obj_pool_free, obj_pool_free + 1);
- }
+ hlist_splice_init(&head, &last->node, &obj_pool);
+ debug_objects_allocated += cnt;
+ WRITE_ONCE(obj_pool_free, obj_pool_free + cnt);
raw_spin_unlock_irqrestore(&pool_lock, flags);
}
}
* [tip: core/debugobjects] debugobjects: Delete a piece of redundant code
2024-09-11 8:35 ` [PATCH v3 1/3] debugobjects: Delete a piece of redundant code Zhen Lei
@ 2024-10-15 15:36 ` tip-bot2 for Zhen Lei
0 siblings, 0 replies; 11+ messages in thread
From: tip-bot2 for Zhen Lei @ 2024-10-15 15:36 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Zhen Lei, Thomas Gleixner, x86, linux-kernel
The following commit has been merged into the core/debugobjects branch of tip:
Commit-ID: a0ae95040853aa05dc006f4b16f8c82c6f9dd9e4
Gitweb: https://git.kernel.org/tip/a0ae95040853aa05dc006f4b16f8c82c6f9dd9e4
Author: Zhen Lei <thunder.leizhen@huawei.com>
AuthorDate: Mon, 07 Oct 2024 18:49:52 +02:00
Committer: Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Tue, 15 Oct 2024 17:30:30 +02:00
debugobjects: Delete a piece of redundant code
The statically allocated objects all reside in obj_static_pool[], and the
whole memory of obj_static_pool[] is reclaimed later. Therefore there is
no need to detach the remaining static nodes on list obj_pool one by one;
no one will use them anymore. Writing INIT_HLIST_HEAD(&obj_pool) would be
enough, and since hlist_move_list() discards the old list anyway, even
that can be omitted.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240911083521.2257-2-thunder.leizhen@huawei.com
Link: https://lore.kernel.org/all/20241007164913.009849239@linutronix.de
---
lib/debugobjects.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/debugobjects.c b/lib/debugobjects.c
index 5ce473a..df48acc 100644
--- a/lib/debugobjects.c
+++ b/lib/debugobjects.c
@@ -1325,10 +1325,10 @@ static int __init debug_objects_replace_static_objects(void)
* active object references.
*/
- /* Remove the statically allocated objects from the pool */
- hlist_for_each_entry_safe(obj, tmp, &obj_pool, node)
- hlist_del(&obj->node);
- /* Move the allocated objects to the pool */
+ /*
+ * Replace the statically allocated objects list with the allocated
+ * objects list.
+ */
hlist_move_list(&objects, &obj_pool);
/* Replace the active object references */