linux-mm.kvack.org archive mirror
* [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry
@ 2015-12-02 15:12 Geliang Tang
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Geliang Tang @ 2015-12-02 15:12 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Michal Hocko, Mel Gorman,
	David Rientjes, Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner,
	Alexander Duyck
  Cc: Geliang Tang, linux-mm, linux-kernel

To make the intention clearer, use list_{first,last}_entry instead
of list_entry.

Signed-off-by: Geliang Tang <geliangtang@163.com>
---
 mm/page_alloc.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d6d7c97..0d38185 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -830,7 +830,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		do {
 			int mt;	/* migratetype of the to-be-freed page */
 
-			page = list_entry(list->prev, struct page, lru);
+			page = list_last_entry(list, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
 
@@ -1457,11 +1457,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	/* Find a page of the appropriate size in the preferred list */
 	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
 		area = &(zone->free_area[current_order]);
-		if (list_empty(&area->free_list[migratetype]))
-			continue;
-
-		page = list_entry(area->free_list[migratetype].next,
+		page = list_first_entry_or_null(&area->free_list[migratetype],
 							struct page, lru);
+		if (!page)
+			continue;
 		list_del(&page->lru);
 		rmv_page_order(page);
 		area->nr_free--;
@@ -1740,12 +1739,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 		for (order = 0; order < MAX_ORDER; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 
-			if (list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
+			page = list_first_entry_or_null(
+					&area->free_list[MIGRATE_HIGHATOMIC],
+					struct page, lru);
+			if (!page)
 				continue;
 
-			page = list_entry(area->free_list[MIGRATE_HIGHATOMIC].next,
-						struct page, lru);
-
 			/*
 			 * It should never happen but changes to locking could
 			 * inadvertently allow a per-cpu drain to add pages
@@ -1793,7 +1792,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 		if (fallback_mt == -1)
 			continue;
 
-		page = list_entry(area->free_list[fallback_mt].next,
+		page = list_first_entry(&area->free_list[fallback_mt],
 						struct page, lru);
 		if (can_steal)
 			steal_suitable_fallback(zone, page, start_migratetype);
@@ -2252,9 +2251,9 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 		}
 
 		if (cold)
-			page = list_entry(list->prev, struct page, lru);
+			page = list_last_entry(list, struct page, lru);
 		else
-			page = list_entry(list->next, struct page, lru);
+			page = list_first_entry(list, struct page, lru);
 
 		list_del(&page->lru);
 		pcp->count--;
-- 
2.5.0


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>


* [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages()
  2015-12-02 15:12 [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Geliang Tang
@ 2015-12-02 15:12 ` Geliang Tang
  2015-12-02 16:32   ` Michal Hocko
                     ` (2 more replies)
  2015-12-02 16:28 ` [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Michal Hocko
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 8+ messages in thread
From: Geliang Tang @ 2015-12-02 15:12 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Michal Hocko, Mel Gorman,
	David Rientjes, Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner,
	Alexander Duyck
  Cc: Geliang Tang, linux-mm, linux-kernel

Use list_for_each_entry instead of list_for_each + list_entry to
simplify the code.

Signed-off-by: Geliang Tang <geliangtang@163.com>
---
 mm/page_alloc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d38185..1c1ad58 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2027,7 +2027,7 @@ void mark_free_pages(struct zone *zone)
 	unsigned long pfn, max_zone_pfn;
 	unsigned long flags;
 	unsigned int order, t;
-	struct list_head *curr;
+	struct page *page;
 
 	if (zone_is_empty(zone))
 		return;
@@ -2037,17 +2037,17 @@ void mark_free_pages(struct zone *zone)
 	max_zone_pfn = zone_end_pfn(zone);
 	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 		if (pfn_valid(pfn)) {
-			struct page *page = pfn_to_page(pfn);
-
+			page = pfn_to_page(pfn);
 			if (!swsusp_page_is_forbidden(page))
 				swsusp_unset_page_free(page);
 		}
 
 	for_each_migratetype_order(order, t) {
-		list_for_each(curr, &zone->free_area[order].free_list[t]) {
+		list_for_each_entry(page,
+				&zone->free_area[order].free_list[t], lru) {
 			unsigned long i;
 
-			pfn = page_to_pfn(list_entry(curr, struct page, lru));
+			pfn = page_to_pfn(page);
 			for (i = 0; i < (1UL << order); i++)
 				swsusp_set_page_free(pfn_to_page(pfn + i));
 		}
-- 
2.5.0



* Re: [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry
  2015-12-02 15:12 [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Geliang Tang
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
@ 2015-12-02 16:28 ` Michal Hocko
  2015-12-02 16:46 ` Mel Gorman
  2015-12-03  1:23 ` David Rientjes
  3 siblings, 0 replies; 8+ messages in thread
From: Michal Hocko @ 2015-12-02 16:28 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, David Rientjes,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed 02-12-15 23:12:40, Geliang Tang wrote:
> To make the intention clearer, use list_{first,last}_entry instead
> of list_entry.

I like list_{first,last}_entry; it indeed helps readability. The
_or_null variant is less clear from the name, though. The previous
explicit check for an empty list was easier to read, at least for me.
But it seems this is a general pattern...

> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d6d7c97..0d38185 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -830,7 +830,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  		do {
>  			int mt;	/* migratetype of the to-be-freed page */
>  
> -			page = list_entry(list->prev, struct page, lru);
> +			page = list_last_entry(list, struct page, lru);
>  			/* must delete as __free_one_page list manipulates */
>  			list_del(&page->lru);
>  
> @@ -1457,11 +1457,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  	/* Find a page of the appropriate size in the preferred list */
>  	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
>  		area = &(zone->free_area[current_order]);
> -		if (list_empty(&area->free_list[migratetype]))
> -			continue;
> -
> -		page = list_entry(area->free_list[migratetype].next,
> +		page = list_first_entry_or_null(&area->free_list[migratetype],
>  							struct page, lru);
> +		if (!page)
> +			continue;
>  		list_del(&page->lru);
>  		rmv_page_order(page);
>  		area->nr_free--;
> @@ -1740,12 +1739,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  		for (order = 0; order < MAX_ORDER; order++) {
>  			struct free_area *area = &(zone->free_area[order]);
>  
> -			if (list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
> +			page = list_first_entry_or_null(
> +					&area->free_list[MIGRATE_HIGHATOMIC],
> +					struct page, lru);
> +			if (!page)
>  				continue;
>  
> -			page = list_entry(area->free_list[MIGRATE_HIGHATOMIC].next,
> -						struct page, lru);
> -
>  			/*
>  			 * It should never happen but changes to locking could
>  			 * inadvertently allow a per-cpu drain to add pages
> @@ -1793,7 +1792,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  		if (fallback_mt == -1)
>  			continue;
>  
> -		page = list_entry(area->free_list[fallback_mt].next,
> +		page = list_first_entry(&area->free_list[fallback_mt],
>  						struct page, lru);
>  		if (can_steal)
>  			steal_suitable_fallback(zone, page, start_migratetype);
> @@ -2252,9 +2251,9 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
>  		}
>  
>  		if (cold)
> -			page = list_entry(list->prev, struct page, lru);
> +			page = list_last_entry(list, struct page, lru);
>  		else
> -			page = list_entry(list->next, struct page, lru);
> +			page = list_first_entry(list, struct page, lru);
>  
>  		list_del(&page->lru);
>  		pcp->count--;
> -- 
> 2.5.0
> 
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages()
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
@ 2015-12-02 16:32   ` Michal Hocko
  2015-12-02 16:46   ` Mel Gorman
  2015-12-03  1:24   ` David Rientjes
  2 siblings, 0 replies; 8+ messages in thread
From: Michal Hocko @ 2015-12-02 16:32 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, David Rientjes,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed 02-12-15 23:12:41, Geliang Tang wrote:
> Use list_for_each_entry instead of list_for_each + list_entry to
> simplify the code.
> 
> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0d38185..1c1ad58 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2027,7 +2027,7 @@ void mark_free_pages(struct zone *zone)
>  	unsigned long pfn, max_zone_pfn;
>  	unsigned long flags;
>  	unsigned int order, t;
> -	struct list_head *curr;
> +	struct page *page;
>  
>  	if (zone_is_empty(zone))
>  		return;
> @@ -2037,17 +2037,17 @@ void mark_free_pages(struct zone *zone)
>  	max_zone_pfn = zone_end_pfn(zone);
>  	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
>  		if (pfn_valid(pfn)) {
> -			struct page *page = pfn_to_page(pfn);
> -
> +			page = pfn_to_page(pfn);
>  			if (!swsusp_page_is_forbidden(page))
>  				swsusp_unset_page_free(page);
>  		}
>  
>  	for_each_migratetype_order(order, t) {
> -		list_for_each(curr, &zone->free_area[order].free_list[t]) {
> +		list_for_each_entry(page,
> +				&zone->free_area[order].free_list[t], lru) {
>  			unsigned long i;
>  
> -			pfn = page_to_pfn(list_entry(curr, struct page, lru));
> +			pfn = page_to_pfn(page);
>  			for (i = 0; i < (1UL << order); i++)
>  				swsusp_set_page_free(pfn_to_page(pfn + i));
>  		}
> -- 
> 2.5.0
> 
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry
  2015-12-02 15:12 [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Geliang Tang
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
  2015-12-02 16:28 ` [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Michal Hocko
@ 2015-12-02 16:46 ` Mel Gorman
  2015-12-03  1:23 ` David Rientjes
  3 siblings, 0 replies; 8+ messages in thread
From: Mel Gorman @ 2015-12-02 16:46 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Michal Hocko, David Rientjes,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed, Dec 02, 2015 at 11:12:40PM +0800, Geliang Tang wrote:
> To make the intention clearer, use list_{first,last}_entry instead
> of list_entry.
> 
> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages()
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
  2015-12-02 16:32   ` Michal Hocko
@ 2015-12-02 16:46   ` Mel Gorman
  2015-12-03  1:24   ` David Rientjes
  2 siblings, 0 replies; 8+ messages in thread
From: Mel Gorman @ 2015-12-02 16:46 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Michal Hocko, David Rientjes,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed, Dec 02, 2015 at 11:12:41PM +0800, Geliang Tang wrote:
> Use list_for_each_entry instead of list_for_each + list_entry to
> simplify the code.
> 
> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry
  2015-12-02 15:12 [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Geliang Tang
                   ` (2 preceding siblings ...)
  2015-12-02 16:46 ` Mel Gorman
@ 2015-12-03  1:23 ` David Rientjes
  3 siblings, 0 replies; 8+ messages in thread
From: David Rientjes @ 2015-12-03  1:23 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Michal Hocko, Mel Gorman,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed, 2 Dec 2015, Geliang Tang wrote:

> To make the intention clearer, use list_{first,last}_entry instead
> of list_entry.
> 
> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages()
  2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
  2015-12-02 16:32   ` Michal Hocko
  2015-12-02 16:46   ` Mel Gorman
@ 2015-12-03  1:24   ` David Rientjes
  2 siblings, 0 replies; 8+ messages in thread
From: David Rientjes @ 2015-12-03  1:24 UTC (permalink / raw)
  To: Geliang Tang
  Cc: Andrew Morton, Vlastimil Babka, Michal Hocko, Mel Gorman,
	Joonsoo Kim, Kirill A. Shutemov, Johannes Weiner, Alexander Duyck,
	linux-mm, linux-kernel

On Wed, 2 Dec 2015, Geliang Tang wrote:

> Use list_for_each_entry instead of list_for_each + list_entry to
> simplify the code.
> 
> Signed-off-by: Geliang Tang <geliangtang@163.com>

Acked-by: David Rientjes <rientjes@google.com>


end of thread, other threads:[~2015-12-03  1:24 UTC | newest]

Thread overview: 8+ messages
2015-12-02 15:12 [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Geliang Tang
2015-12-02 15:12 ` [PATCH 2/2] mm/page_alloc.c: use list_for_each_entry in mark_free_pages() Geliang Tang
2015-12-02 16:32   ` Michal Hocko
2015-12-02 16:46   ` Mel Gorman
2015-12-03  1:24   ` David Rientjes
2015-12-02 16:28 ` [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry Michal Hocko
2015-12-02 16:46 ` Mel Gorman
2015-12-03  1:23 ` David Rientjes
