* [PATCH 1/3] swap: Use __try_to_reclaim_swap() in free_swap_and_cache()
2018-08-27 7:55 [PATCH 0/3] swap: Code refactoring for some swap free related functions Huang Ying
@ 2018-08-27 7:55 ` Huang Ying
2018-08-27 22:44 ` Andrew Morton
2018-08-27 7:55 ` [PATCH 2/3] swap: call free_swap_slot() in __swap_entry_free() Huang Ying
2018-08-27 7:55 ` [PATCH 3/3] swap: Clear si->swap_map[] in swap_free_cluster() Huang Ying
2 siblings, 1 reply; 5+ messages in thread
From: Huang Ying @ 2018-08-27 7:55 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim
The code path to reclaim the swap entry in free_swap_and_cache() is
almost the same as that of __try_to_reclaim_swap(); the largest
difference is just coding style. So the additional requirements of
free_swap_and_cache() are added to __try_to_reclaim_swap(), and
free_swap_and_cache() is changed to call __try_to_reclaim_swap(),
deleting the duplicated code. This improves code readability and
reduces the potential for bugs.
There are two functional differences between __try_to_reclaim_swap()
and the swap entry reclaim code of free_swap_and_cache():

- free_swap_and_cache() only reclaims the swap entry if the page is
unmapped or swap is getting full. This support has been added to
__try_to_reclaim_swap().

- try_to_free_swap() (called by __try_to_reclaim_swap()) checks
pm_suspended_storage(), while free_swap_and_cache() does not. This
should be OK, because the page and the swap entry can still be
reclaimed later.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
---
mm/swapfile.c | 57 +++++++++++++++++++++++++--------------------------------
1 file changed, 25 insertions(+), 32 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9211c427a9ad..409926079607 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -104,26 +104,39 @@ static inline unsigned char swap_count(unsigned char ent)
return ent & ~SWAP_HAS_CACHE; /* may include COUNT_CONTINUED flag */
}
+/* Reclaim the swap entry anyway if possible */
+#define TTRS_ANYWAY 0x1
+/*
+ * Reclaim the swap entry if there are no more mappings of the
+ * corresponding page
+ */
+#define TTRS_UNMAPPED 0x2
+/* Reclaim the swap entry if swap is getting full*/
+#define TTRS_FULL 0x4
+
/* returns 1 if swap entry is freed */
-static int
-__try_to_reclaim_swap(struct swap_info_struct *si, unsigned long offset)
+static int __try_to_reclaim_swap(struct swap_info_struct *si,
+ unsigned long offset, unsigned long flags)
{
swp_entry_t entry = swp_entry(si->type, offset);
struct page *page;
int ret = 0;
- page = find_get_page(swap_address_space(entry), swp_offset(entry));
+ page = find_get_page(swap_address_space(entry), offset);
if (!page)
return 0;
/*
- * This function is called from scan_swap_map() and it's called
- * by vmscan.c at reclaiming pages. So, we hold a lock on a page, here.
- * We have to use trylock for avoiding deadlock. This is a special
+ * When this function is called from scan_swap_map_slots() and it's
+ * called by vmscan.c at reclaiming pages. So, we hold a lock on a page,
+ * here. We have to use trylock for avoiding deadlock. This is a special
* case and you should use try_to_free_swap() with explicit lock_page()
* in usual operations.
*/
if (trylock_page(page)) {
- ret = try_to_free_swap(page);
+ if ((flags & TTRS_ANYWAY) ||
+ ((flags & TTRS_UNMAPPED) && !page_mapped(page)) ||
+ ((flags & TTRS_FULL) && mem_cgroup_swap_full(page)))
+ ret = try_to_free_swap(page);
unlock_page(page);
}
put_page(page);
@@ -781,7 +794,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
int swap_was_freed;
unlock_cluster(ci);
spin_unlock(&si->lock);
- swap_was_freed = __try_to_reclaim_swap(si, offset);
+ swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
spin_lock(&si->lock);
/* entry was freed successfully, try to use this again */
if (swap_was_freed)
@@ -1680,7 +1693,6 @@ int try_to_free_swap(struct page *page)
int free_swap_and_cache(swp_entry_t entry)
{
struct swap_info_struct *p;
- struct page *page = NULL;
unsigned char count;
if (non_swap_entry(entry))
@@ -1690,31 +1702,12 @@ int free_swap_and_cache(swp_entry_t entry)
if (p) {
count = __swap_entry_free(p, entry, 1);
if (count == SWAP_HAS_CACHE &&
- !swap_page_trans_huge_swapped(p, entry)) {
- page = find_get_page(swap_address_space(entry),
- swp_offset(entry));
- if (page && !trylock_page(page)) {
- put_page(page);
- page = NULL;
- }
- } else if (!count)
+ !swap_page_trans_huge_swapped(p, entry))
+ __try_to_reclaim_swap(p, swp_offset(entry),
+ TTRS_UNMAPPED | TTRS_FULL);
+ else if (!count)
free_swap_slot(entry);
}
- if (page) {
- /*
- * Not mapped elsewhere, or swap space full? Free it!
- * Also recheck PageSwapCache now page is locked (above).
- */
- if (PageSwapCache(page) && !PageWriteback(page) &&
- (!page_mapped(page) || mem_cgroup_swap_full(page)) &&
- !swap_page_trans_huge_swapped(p, entry)) {
- page = compound_head(page);
- delete_from_swap_cache(page);
- SetPageDirty(page);
- }
- unlock_page(page);
- put_page(page);
- }
return p != NULL;
}
--
2.16.4
* Re: [PATCH 1/3] swap: Use __try_to_reclaim_swap() in free_swap_and_cache()
From: Andrew Morton @ 2018-08-27 22:44 UTC (permalink / raw)
To: Huang Ying
Cc: linux-mm, linux-kernel, Dave Hansen, Michal Hocko,
Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim
On Mon, 27 Aug 2018 15:55:33 +0800 Huang Ying <ying.huang@intel.com> wrote:
> The code path to reclaim the swap entry in free_swap_and_cache() is
> almost the same as that of __try_to_reclaim_swap(); the largest
> difference is just coding style. So the additional requirements of
> free_swap_and_cache() are added to __try_to_reclaim_swap(), and
> free_swap_and_cache() is changed to call __try_to_reclaim_swap(),
> deleting the duplicated code. This improves code readability and
> reduces the potential for bugs.
>
> There are two functional differences between __try_to_reclaim_swap()
> and the swap entry reclaim code of free_swap_and_cache():
>
> - free_swap_and_cache() only reclaims the swap entry if the page is
> unmapped or swap is getting full. This support has been added to
> __try_to_reclaim_swap().
>
> - try_to_free_swap() (called by __try_to_reclaim_swap()) checks
> pm_suspended_storage(), while free_swap_and_cache() does not. This
> should be OK, because the page and the swap entry can still be
> reclaimed later.
hm. Having functions take `mode' arguments which specify their actions
in this manner isn't popular (Linus ;)) but I guess the end result is
somewhat better.
* [PATCH 2/3] swap: call free_swap_slot() in __swap_entry_free()
From: Huang Ying @ 2018-08-27 7:55 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim
This is a code cleanup patch with no functional change.

Whenever __swap_entry_free() returns 0, free_swap_slot() is always
called afterwards to free the swap entry to the per-CPU pool. So move
the call to free_swap_slot() into __swap_entry_free() to simplify the
code.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
---
mm/swapfile.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 409926079607..ef974bbd7715 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1241,6 +1241,8 @@ static unsigned char __swap_entry_free(struct swap_info_struct *p,
ci = lock_cluster_or_swap_info(p, offset);
usage = __swap_entry_free_locked(p, offset, usage);
unlock_cluster_or_swap_info(p, ci);
+ if (!usage)
+ free_swap_slot(entry);
return usage;
}
@@ -1271,10 +1273,8 @@ void swap_free(swp_entry_t entry)
struct swap_info_struct *p;
p = _swap_info_get(entry);
- if (p) {
- if (!__swap_entry_free(p, entry, 1))
- free_swap_slot(entry);
- }
+ if (p)
+ __swap_entry_free(p, entry, 1);
}
/*
@@ -1705,8 +1705,6 @@ int free_swap_and_cache(swp_entry_t entry)
!swap_page_trans_huge_swapped(p, entry))
__try_to_reclaim_swap(p, swp_offset(entry),
TTRS_UNMAPPED | TTRS_FULL);
- else if (!count)
- free_swap_slot(entry);
}
return p != NULL;
}
--
2.16.4
* [PATCH 3/3] swap: Clear si->swap_map[] in swap_free_cluster()
From: Huang Ying @ 2018-08-27 7:55 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Huang Ying, Dave Hansen, Michal Hocko,
Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim
The si->swap_map[] entries of a cluster need to be cleared when the
cluster is freed. Previously, this was done in the caller of
swap_free_cluster(). That causes code duplication (there is one user
now, and more will be added later) and an unnecessary extra cluster
lock/unlock cycle. In this patch, the clearing code is moved into
swap_free_cluster() to avoid these downsides.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Minchan Kim <minchan@kernel.org>
---
mm/swapfile.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ef974bbd7715..97a1bd1a7c9a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -933,6 +933,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
struct swap_cluster_info *ci;
ci = lock_cluster(si, offset);
+ memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
cluster_set_count_flag(ci, 0, 0);
free_cluster(si, idx);
unlock_cluster(ci);
@@ -1309,9 +1310,6 @@ void put_swap_page(struct page *page, swp_entry_t entry)
if (free_entries == SWAPFILE_CLUSTER) {
unlock_cluster_or_swap_info(si, ci);
spin_lock(&si->lock);
- ci = lock_cluster(si, offset);
- memset(map, 0, SWAPFILE_CLUSTER);
- unlock_cluster(ci);
mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
swap_free_cluster(si, idx);
spin_unlock(&si->lock);
--
2.16.4