* [PATCH] mm/vmscan: try to optimize branch procedures.
@ 2017-11-28 1:49 Jiang Biao
2017-11-28 8:03 ` Michal Hocko
2017-11-28 9:40 ` Mel Gorman
0 siblings, 2 replies; 5+ messages in thread
From: Jiang Biao @ 2017-11-28 1:49 UTC (permalink / raw)
To: akpm, mhocko, hannes, hillf.zj, minchan, ying.huang, mgorman
Cc: linux-mm, linux-kernel, jiang.biao2, zhong.weidong
1. Use unlikely() to aid branch prediction. The
*total_scan < 0* branch is rarely taken, so annotate it with unlikely().

2. Optimize the *next_deferred >= scanned* condition.
The *next_deferred >= scanned* condition can be relaxed to
*next_deferred > scanned*, because when *next_deferred == scanned*,
next_deferred should be 0, which is covered by the else branch.

3. Merge two branch blocks into one. The *next_deferred > 0* branch
can be merged into the *next_deferred > scanned* branch to simplify the code.
Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
---
mm/vmscan.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index eb2f031..5f5d4ab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -338,7 +338,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
delta *= freeable;
do_div(delta, nr_eligible + 1);
total_scan += delta;
- if (total_scan < 0) {
+ if (unlikely(total_scan < 0)) {
pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
shrinker->scan_objects, total_scan);
total_scan = freeable;
@@ -407,18 +407,16 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
cond_resched();
}
- if (next_deferred >= scanned)
- next_deferred -= scanned;
- else
- next_deferred = 0;
/*
* move the unused scan count back into the shrinker in a
* manner that handles concurrent updates. If we exhausted the
* scan, there is no need to do an update.
*/
- if (next_deferred > 0)
+ if (next_deferred > scanned) {
+ next_deferred -= scanned;
new_nr = atomic_long_add_return(next_deferred,
&shrinker->nr_deferred[nid]);
+ }
else
new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
--
2.7.4
^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH] mm/vmscan: try to optimize branch procedures.
2017-11-28 1:49 [PATCH] mm/vmscan: try to optimize branch procedures Jiang Biao
@ 2017-11-28 8:03 ` Michal Hocko
[not found] ` <201711281719103258154@zte.com.cn>
2017-11-28 9:40 ` Mel Gorman
1 sibling, 1 reply; 5+ messages in thread
From: Michal Hocko @ 2017-11-28 8:03 UTC (permalink / raw)
To: Jiang Biao
Cc: akpm, hannes, hillf.zj, minchan, ying.huang, mgorman, linux-mm,
linux-kernel, zhong.weidong
On Tue 28-11-17 09:49:45, Jiang Biao wrote:
> 1. Use unlikely() to aid branch prediction. The
> *total_scan < 0* branch is rarely taken, so annotate it with unlikely().
>
> 2. Optimize the *next_deferred >= scanned* condition.
> The *next_deferred >= scanned* condition can be relaxed to
> *next_deferred > scanned*, because when *next_deferred == scanned*,
> next_deferred should be 0, which is covered by the else branch.
>
> 3. Merge two branch blocks into one. The *next_deferred > 0* branch
> can be merged into the *next_deferred > scanned* branch to simplify the code.
How have you measured benefit of this patch?
--
Michal Hocko
SUSE Labs
^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] mm/vmscan: try to optimize branch procedures.
2017-11-28 1:49 [PATCH] mm/vmscan: try to optimize branch procedures Jiang Biao
2017-11-28 8:03 ` Michal Hocko
@ 2017-11-28 9:40 ` Mel Gorman
1 sibling, 0 replies; 5+ messages in thread
From: Mel Gorman @ 2017-11-28 9:40 UTC (permalink / raw)
To: Jiang Biao
Cc: akpm, mhocko, hannes, hillf.zj, minchan, ying.huang, linux-mm,
linux-kernel, zhong.weidong
On Tue, Nov 28, 2017 at 09:49:45AM +0800, Jiang Biao wrote:
> 1. Use unlikely() to aid branch prediction. The
> *total_scan < 0* branch is rarely taken, so annotate it with unlikely().
>
> 2. Optimize the *next_deferred >= scanned* condition.
> The *next_deferred >= scanned* condition can be relaxed to
> *next_deferred > scanned*, because when *next_deferred == scanned*,
> next_deferred should be 0, which is covered by the else branch.
>
> 3. Merge two branch blocks into one. The *next_deferred > 0* branch
> can be merged into the *next_deferred > scanned* branch to simplify the code.
>
> Signed-off-by: Jiang Biao <jiang.biao2@zte.com.cn>
These are slow paths. Do you have perf data indicating the branches are
frequently mispredicted? Do you have data showing this improves
performance?
--
Mel Gorman
SUSE Labs
^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2017-11-28 9:46 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-28 1:49 [PATCH] mm/vmscan: try to optimize branch procedures Jiang Biao
2017-11-28 8:03 ` Michal Hocko
[not found] ` <201711281719103258154@zte.com.cn>
2017-11-28 9:31 ` Michal Hocko
2017-11-28 9:46 ` Mel Gorman
2017-11-28 9:40 ` Mel Gorman