public inbox for llvm@lists.linux.dev
* Re: [PATCH 01/21] MM: create new mm/swap.h header file.
       [not found] <164420916109.29374.8959231877111146366.stgit@noble.brown>
@ 2022-02-07 14:26 ` kernel test robot
  2022-02-07 15:18 ` kernel test robot
  1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2022-02-07 14:26 UTC (permalink / raw)
  To: NeilBrown; +Cc: llvm, kbuild-all

Hi NeilBrown,

Thank you for the patch! There is still something to improve:

[auto build test ERROR on trondmy-nfs/linux-next]
[also build test ERROR on hnaz-mm/master cifs/for-next linus/master v5.17-rc3 next-20220207]
[If your patch has been applied to the wrong git tree, kindly drop us a note.
When submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/NeilBrown/Repair-SWAP-over_NFS/20220207-125206
base:   git://git.linux-nfs.org/projects/trondmy/linux-nfs.git linux-next
config: x86_64-randconfig-a002-20220207 (https://download.01.org/0day-ci/archive/20220207/202202072219.lW7FXue8-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 0d8850ae2cae85d49bea6ae0799fa41c7202c05c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/06d2bcb84187037252a0f764881ab51965e931ea
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review NeilBrown/Repair-SWAP-over_NFS/20220207-125206
        git checkout 06d2bcb84187037252a0f764881ab51965e931ea
        # save the config file to the Linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> mm/huge_memory.c:2423:16: error: implicit declaration of function 'swap_address_space' [-Werror,-Wimplicit-function-declaration]
                   swap_cache = swap_address_space(entry);
                                ^
   mm/huge_memory.c:2423:14: warning: incompatible integer to pointer conversion assigning to 'struct address_space *' from 'int' [-Wint-conversion]
                   swap_cache = swap_address_space(entry);
                              ^ ~~~~~~~~~~~~~~~~~~~~~~~~~
   1 warning and 1 error generated.


vim +/swap_address_space +2423 mm/huge_memory.c

e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2404  
baa355fd331424 Kirill A. Shutemov      2016-07-26  2405  static void __split_huge_page(struct page *page, struct list_head *list,
b6769834aac1d4 Alex Shi                2020-12-15  2406  		pgoff_t end)
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2407  {
e809c3fedeeb80 Matthew Wilcox (Oracle  2021-06-28  2408) 	struct folio *folio = page_folio(page);
e809c3fedeeb80 Matthew Wilcox (Oracle  2021-06-28  2409) 	struct page *head = &folio->page;
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2410  	struct lruvec *lruvec;
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2411) 	struct address_space *swap_cache = NULL;
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2412) 	unsigned long offset = 0;
8cce54756806e5 Kirill A. Shutemov      2020-10-15  2413  	unsigned int nr = thp_nr_pages(head);
8df651c7059e79 Kirill A. Shutemov      2016-03-15  2414  	int i;
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2415  
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2416  	/* complete memcg works before add pages to LRU */
be6c8982e4ab9a Zhou Guanghui           2021-03-12  2417  	split_page_memcg(head, nr);
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2418  
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2419) 	if (PageAnon(head) && PageSwapCache(head)) {
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2420) 		swp_entry_t entry = { .val = page_private(head) };
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2421) 
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2422) 		offset = swp_offset(entry);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23 @2423) 		swap_cache = swap_address_space(entry);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2424) 		xa_lock(&swap_cache->i_pages);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2425) 	}
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2426) 
f0953a1bbaca71 Ingo Molnar             2021-05-06  2427  	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
e809c3fedeeb80 Matthew Wilcox (Oracle  2021-06-28  2428) 	lruvec = folio_lruvec_lock(folio);
b6769834aac1d4 Alex Shi                2020-12-15  2429  
eac96c3efdb593 Yang Shi                2021-10-28  2430  	ClearPageHasHWPoisoned(head);
eac96c3efdb593 Yang Shi                2021-10-28  2431  
8cce54756806e5 Kirill A. Shutemov      2020-10-15  2432  	for (i = nr - 1; i >= 1; i--) {
8df651c7059e79 Kirill A. Shutemov      2016-03-15  2433  		__split_huge_page_tail(head, i, lruvec, list);
d144bf6205342a Hugh Dickins            2021-09-02  2434  		/* Some pages can be beyond EOF: drop them from page cache */
baa355fd331424 Kirill A. Shutemov      2016-07-26  2435  		if (head[i].index >= end) {
2d077d4b59924a Hugh Dickins            2018-06-01  2436  			ClearPageDirty(head + i);
baa355fd331424 Kirill A. Shutemov      2016-07-26  2437  			__delete_from_page_cache(head + i, NULL);
d144bf6205342a Hugh Dickins            2021-09-02  2438  			if (shmem_mapping(head->mapping))
800d8c63b2e989 Kirill A. Shutemov      2016-07-26  2439  				shmem_uncharge(head->mapping->host, 1);
baa355fd331424 Kirill A. Shutemov      2016-07-26  2440  			put_page(head + i);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2441) 		} else if (!PageAnon(page)) {
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2442) 			__xa_store(&head->mapping->i_pages, head[i].index,
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2443) 					head + i, 0);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2444) 		} else if (swap_cache) {
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2445) 			__xa_store(&swap_cache->i_pages, offset + i,
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2446) 					head + i, 0);
baa355fd331424 Kirill A. Shutemov      2016-07-26  2447  		}
baa355fd331424 Kirill A. Shutemov      2016-07-26  2448  	}
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2449  
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2450  	ClearPageCompound(head);
6168d0da2b479c Alex Shi                2020-12-15  2451  	unlock_page_lruvec(lruvec);
b6769834aac1d4 Alex Shi                2020-12-15  2452  	/* Caller disabled irqs, so they are still disabled here */
f7da677bc6e720 Vlastimil Babka         2019-08-24  2453  
8cce54756806e5 Kirill A. Shutemov      2020-10-15  2454  	split_page_owner(head, nr);
f7da677bc6e720 Vlastimil Babka         2019-08-24  2455  
baa355fd331424 Kirill A. Shutemov      2016-07-26  2456  	/* See comment in __split_huge_page_tail() */
baa355fd331424 Kirill A. Shutemov      2016-07-26  2457  	if (PageAnon(head)) {
aa5dc07f70c50a Matthew Wilcox          2017-12-04  2458  		/* Additional pin to swap cache */
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2459) 		if (PageSwapCache(head)) {
38d8b4e6bdc872 Huang Ying              2017-07-06  2460  			page_ref_add(head, 2);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2461) 			xa_unlock(&swap_cache->i_pages);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2462) 		} else {
baa355fd331424 Kirill A. Shutemov      2016-07-26  2463  			page_ref_inc(head);
4101196b19d7f9 Matthew Wilcox (Oracle  2019-09-23  2464) 		}
baa355fd331424 Kirill A. Shutemov      2016-07-26  2465  	} else {
aa5dc07f70c50a Matthew Wilcox          2017-12-04  2466  		/* Additional pin to page cache */
baa355fd331424 Kirill A. Shutemov      2016-07-26  2467  		page_ref_add(head, 2);
b93b016313b3ba Matthew Wilcox          2018-04-10  2468  		xa_unlock(&head->mapping->i_pages);
baa355fd331424 Kirill A. Shutemov      2016-07-26  2469  	}
b6769834aac1d4 Alex Shi                2020-12-15  2470  	local_irq_enable();
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2471  
8cce54756806e5 Kirill A. Shutemov      2020-10-15  2472  	remap_page(head, nr);
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2473  
c4f9c701f9b442 Huang Ying              2020-10-15  2474  	if (PageSwapCache(head)) {
c4f9c701f9b442 Huang Ying              2020-10-15  2475  		swp_entry_t entry = { .val = page_private(head) };
c4f9c701f9b442 Huang Ying              2020-10-15  2476  
c4f9c701f9b442 Huang Ying              2020-10-15  2477  		split_swap_cluster(entry);
c4f9c701f9b442 Huang Ying              2020-10-15  2478  	}
c4f9c701f9b442 Huang Ying              2020-10-15  2479  
8cce54756806e5 Kirill A. Shutemov      2020-10-15  2480  	for (i = 0; i < nr; i++) {
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2481  		struct page *subpage = head + i;
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2482  		if (subpage == page)
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2483  			continue;
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2484  		unlock_page(subpage);
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2485  
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2486  		/*
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2487  		 * Subpages may be freed if there wasn't any mapping
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2488  		 * like if add_to_swap() is running on a lru page that
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2489  		 * had its mapping zapped. And freeing these pages
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2490  		 * requires taking the lru_lock so we do the put_page
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2491  		 * of the tail pages after the split is complete.
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2492  		 */
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2493  		put_page(subpage);
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2494  	}
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2495  }
e9b61f19858a5d Kirill A. Shutemov      2016-01-15  2496  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


* Re: [PATCH 01/21] MM: create new mm/swap.h header file.
       [not found] <164420916109.29374.8959231877111146366.stgit@noble.brown>
  2022-02-07 14:26 ` [PATCH 01/21] MM: create new mm/swap.h header file kernel test robot
@ 2022-02-07 15:18 ` kernel test robot
  1 sibling, 0 replies; 2+ messages in thread
From: kernel test robot @ 2022-02-07 15:18 UTC (permalink / raw)
  To: NeilBrown; +Cc: llvm, kbuild-all

Hi NeilBrown,

Thank you for the patch! There is still something to improve:

[auto build test ERROR on trondmy-nfs/linux-next]
[also build test ERROR on hnaz-mm/master cifs/for-next linus/master v5.17-rc3 next-20220207]
[If your patch has been applied to the wrong git tree, kindly drop us a note.
When submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/NeilBrown/Repair-SWAP-over_NFS/20220207-125206
base:   git://git.linux-nfs.org/projects/trondmy/linux-nfs.git linux-next
config: hexagon-randconfig-r005-20220207 (https://download.01.org/0day-ci/archive/20220207/202202072351.RqMHsM0e-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 0d8850ae2cae85d49bea6ae0799fa41c7202c05c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/06d2bcb84187037252a0f764881ab51965e931ea
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review NeilBrown/Repair-SWAP-over_NFS/20220207-125206
        git checkout 06d2bcb84187037252a0f764881ab51965e931ea
        # save the config file to the Linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> mm/zswap.c:906:13: error: implicit declaration of function '__read_swap_cache_async' [-Werror,-Wimplicit-function-declaration]
           *retpage = __read_swap_cache_async(entry, GFP_KERNEL,
                      ^
   mm/zswap.c:906:11: warning: incompatible integer to pointer conversion assigning to 'struct page *' from 'int' [-Wint-conversion]
           *retpage = __read_swap_cache_async(entry, GFP_KERNEL,
                    ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> mm/zswap.c:1014:2: error: implicit declaration of function '__swap_writepage' [-Werror,-Wimplicit-function-declaration]
           __swap_writepage(page, &wbc, end_swap_bio_write);
           ^
>> mm/zswap.c:1014:31: error: use of undeclared identifier 'end_swap_bio_write'
           __swap_writepage(page, &wbc, end_swap_bio_write);
                                        ^
   1 warning and 3 errors generated.


vim +/__read_swap_cache_async +906 mm/zswap.c

2b2811178e85553 Seth Jennings      2013-07-10   884  
2b2811178e85553 Seth Jennings      2013-07-10   885  /*
2b2811178e85553 Seth Jennings      2013-07-10   886   * zswap_get_swap_cache_page
2b2811178e85553 Seth Jennings      2013-07-10   887   *
2b2811178e85553 Seth Jennings      2013-07-10   888   * This is an adaption of read_swap_cache_async()
2b2811178e85553 Seth Jennings      2013-07-10   889   *
2b2811178e85553 Seth Jennings      2013-07-10   890   * This function tries to find a page with the given swap entry
2b2811178e85553 Seth Jennings      2013-07-10   891   * in the swapper_space address space (the swap cache).  If the page
2b2811178e85553 Seth Jennings      2013-07-10   892   * is found, it is returned in retpage.  Otherwise, a page is allocated,
2b2811178e85553 Seth Jennings      2013-07-10   893   * added to the swap cache, and returned in retpage.
2b2811178e85553 Seth Jennings      2013-07-10   894   *
2b2811178e85553 Seth Jennings      2013-07-10   895   * If success, the swap cache page is returned in retpage
67d13fe846c57a5 Weijie Yang        2013-11-12   896   * Returns ZSWAP_SWAPCACHE_EXIST if page was already in the swap cache
67d13fe846c57a5 Weijie Yang        2013-11-12   897   * Returns ZSWAP_SWAPCACHE_NEW if the new page needs to be populated,
67d13fe846c57a5 Weijie Yang        2013-11-12   898   *     the new page is added to swapcache and locked
67d13fe846c57a5 Weijie Yang        2013-11-12   899   * Returns ZSWAP_SWAPCACHE_FAIL on error
2b2811178e85553 Seth Jennings      2013-07-10   900   */
2b2811178e85553 Seth Jennings      2013-07-10   901  static int zswap_get_swap_cache_page(swp_entry_t entry,
2b2811178e85553 Seth Jennings      2013-07-10   902  				struct page **retpage)
2b2811178e85553 Seth Jennings      2013-07-10   903  {
5b999aadbae6569 Dmitry Safonov     2015-09-08   904  	bool page_was_allocated;
2b2811178e85553 Seth Jennings      2013-07-10   905  
5b999aadbae6569 Dmitry Safonov     2015-09-08  @906  	*retpage = __read_swap_cache_async(entry, GFP_KERNEL,
5b999aadbae6569 Dmitry Safonov     2015-09-08   907  			NULL, 0, &page_was_allocated);
5b999aadbae6569 Dmitry Safonov     2015-09-08   908  	if (page_was_allocated)
2b2811178e85553 Seth Jennings      2013-07-10   909  		return ZSWAP_SWAPCACHE_NEW;
5b999aadbae6569 Dmitry Safonov     2015-09-08   910  	if (!*retpage)
67d13fe846c57a5 Weijie Yang        2013-11-12   911  		return ZSWAP_SWAPCACHE_FAIL;
2b2811178e85553 Seth Jennings      2013-07-10   912  	return ZSWAP_SWAPCACHE_EXIST;
2b2811178e85553 Seth Jennings      2013-07-10   913  }
2b2811178e85553 Seth Jennings      2013-07-10   914  
2b2811178e85553 Seth Jennings      2013-07-10   915  /*
2b2811178e85553 Seth Jennings      2013-07-10   916   * Attempts to free an entry by adding a page to the swap cache,
2b2811178e85553 Seth Jennings      2013-07-10   917   * decompressing the entry data into the page, and issuing a
2b2811178e85553 Seth Jennings      2013-07-10   918   * bio write to write the page back to the swap device.
2b2811178e85553 Seth Jennings      2013-07-10   919   *
2b2811178e85553 Seth Jennings      2013-07-10   920   * This can be thought of as a "resumed writeback" of the page
2b2811178e85553 Seth Jennings      2013-07-10   921   * to the swap device.  We are basically resuming the same swap
2b2811178e85553 Seth Jennings      2013-07-10   922   * writeback path that was intercepted with the frontswap_store()
2b2811178e85553 Seth Jennings      2013-07-10   923   * in the first place.  After the page has been decompressed into
2b2811178e85553 Seth Jennings      2013-07-10   924   * the swap cache, the compressed version stored by zswap can be
2b2811178e85553 Seth Jennings      2013-07-10   925   * freed.
2b2811178e85553 Seth Jennings      2013-07-10   926   */
12d79d64bfd3913 Dan Streetman      2014-08-06   927  static int zswap_writeback_entry(struct zpool *pool, unsigned long handle)
2b2811178e85553 Seth Jennings      2013-07-10   928  {
2b2811178e85553 Seth Jennings      2013-07-10   929  	struct zswap_header *zhdr;
2b2811178e85553 Seth Jennings      2013-07-10   930  	swp_entry_t swpentry;
2b2811178e85553 Seth Jennings      2013-07-10   931  	struct zswap_tree *tree;
2b2811178e85553 Seth Jennings      2013-07-10   932  	pgoff_t offset;
2b2811178e85553 Seth Jennings      2013-07-10   933  	struct zswap_entry *entry;
2b2811178e85553 Seth Jennings      2013-07-10   934  	struct page *page;
1ec3b5fe6eec782 Barry Song         2020-12-14   935  	struct scatterlist input, output;
1ec3b5fe6eec782 Barry Song         2020-12-14   936  	struct crypto_acomp_ctx *acomp_ctx;
1ec3b5fe6eec782 Barry Song         2020-12-14   937  
fc6697a89f56d97 Tian Tao           2021-02-25   938  	u8 *src, *tmp = NULL;
2b2811178e85553 Seth Jennings      2013-07-10   939  	unsigned int dlen;
0ab0abcf511545d Weijie Yang        2013-11-12   940  	int ret;
2b2811178e85553 Seth Jennings      2013-07-10   941  	struct writeback_control wbc = {
2b2811178e85553 Seth Jennings      2013-07-10   942  		.sync_mode = WB_SYNC_NONE,
2b2811178e85553 Seth Jennings      2013-07-10   943  	};
2b2811178e85553 Seth Jennings      2013-07-10   944  
fc6697a89f56d97 Tian Tao           2021-02-25   945  	if (!zpool_can_sleep_mapped(pool)) {
fc6697a89f56d97 Tian Tao           2021-02-25   946  		tmp = kmalloc(PAGE_SIZE, GFP_ATOMIC);
fc6697a89f56d97 Tian Tao           2021-02-25   947  		if (!tmp)
fc6697a89f56d97 Tian Tao           2021-02-25   948  			return -ENOMEM;
fc6697a89f56d97 Tian Tao           2021-02-25   949  	}
fc6697a89f56d97 Tian Tao           2021-02-25   950  
2b2811178e85553 Seth Jennings      2013-07-10   951  	/* extract swpentry from data */
12d79d64bfd3913 Dan Streetman      2014-08-06   952  	zhdr = zpool_map_handle(pool, handle, ZPOOL_MM_RO);
2b2811178e85553 Seth Jennings      2013-07-10   953  	swpentry = zhdr->swpentry; /* here */
2b2811178e85553 Seth Jennings      2013-07-10   954  	tree = zswap_trees[swp_type(swpentry)];
2b2811178e85553 Seth Jennings      2013-07-10   955  	offset = swp_offset(swpentry);
2b2811178e85553 Seth Jennings      2013-07-10   956  
2b2811178e85553 Seth Jennings      2013-07-10   957  	/* find and ref zswap entry */
2b2811178e85553 Seth Jennings      2013-07-10   958  	spin_lock(&tree->lock);
0ab0abcf511545d Weijie Yang        2013-11-12   959  	entry = zswap_entry_find_get(&tree->rbroot, offset);
2b2811178e85553 Seth Jennings      2013-07-10   960  	if (!entry) {
2b2811178e85553 Seth Jennings      2013-07-10   961  		/* entry was invalidated */
2b2811178e85553 Seth Jennings      2013-07-10   962  		spin_unlock(&tree->lock);
068619e32ff6229 Vitaly Wool        2019-09-23   963  		zpool_unmap_handle(pool, handle);
fc6697a89f56d97 Tian Tao           2021-02-25   964  		kfree(tmp);
2b2811178e85553 Seth Jennings      2013-07-10   965  		return 0;
2b2811178e85553 Seth Jennings      2013-07-10   966  	}
2b2811178e85553 Seth Jennings      2013-07-10   967  	spin_unlock(&tree->lock);
2b2811178e85553 Seth Jennings      2013-07-10   968  	BUG_ON(offset != entry->offset);
2b2811178e85553 Seth Jennings      2013-07-10   969  
46b76f2e09dc35f Miaohe Lin         2021-06-30   970  	src = (u8 *)zhdr + sizeof(struct zswap_header);
46b76f2e09dc35f Miaohe Lin         2021-06-30   971  	if (!zpool_can_sleep_mapped(pool)) {
46b76f2e09dc35f Miaohe Lin         2021-06-30   972  		memcpy(tmp, src, entry->length);
46b76f2e09dc35f Miaohe Lin         2021-06-30   973  		src = tmp;
46b76f2e09dc35f Miaohe Lin         2021-06-30   974  		zpool_unmap_handle(pool, handle);
46b76f2e09dc35f Miaohe Lin         2021-06-30   975  	}
46b76f2e09dc35f Miaohe Lin         2021-06-30   976  
2b2811178e85553 Seth Jennings      2013-07-10   977  	/* try to allocate swap cache page */
2b2811178e85553 Seth Jennings      2013-07-10   978  	switch (zswap_get_swap_cache_page(swpentry, &page)) {
67d13fe846c57a5 Weijie Yang        2013-11-12   979  	case ZSWAP_SWAPCACHE_FAIL: /* no memory or invalidate happened */
2b2811178e85553 Seth Jennings      2013-07-10   980  		ret = -ENOMEM;
2b2811178e85553 Seth Jennings      2013-07-10   981  		goto fail;
2b2811178e85553 Seth Jennings      2013-07-10   982  
67d13fe846c57a5 Weijie Yang        2013-11-12   983  	case ZSWAP_SWAPCACHE_EXIST:
2b2811178e85553 Seth Jennings      2013-07-10   984  		/* page is already in the swap cache, ignore for now */
09cbfeaf1a5a67b Kirill A. Shutemov 2016-04-01   985  		put_page(page);
2b2811178e85553 Seth Jennings      2013-07-10   986  		ret = -EEXIST;
2b2811178e85553 Seth Jennings      2013-07-10   987  		goto fail;
2b2811178e85553 Seth Jennings      2013-07-10   988  
2b2811178e85553 Seth Jennings      2013-07-10   989  	case ZSWAP_SWAPCACHE_NEW: /* page is locked */
2b2811178e85553 Seth Jennings      2013-07-10   990  		/* decompress */
1ec3b5fe6eec782 Barry Song         2020-12-14   991  		acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
2b2811178e85553 Seth Jennings      2013-07-10   992  		dlen = PAGE_SIZE;
fc6697a89f56d97 Tian Tao           2021-02-25   993  
1ec3b5fe6eec782 Barry Song         2020-12-14   994  		mutex_lock(acomp_ctx->mutex);
1ec3b5fe6eec782 Barry Song         2020-12-14   995  		sg_init_one(&input, src, entry->length);
1ec3b5fe6eec782 Barry Song         2020-12-14   996  		sg_init_table(&output, 1);
1ec3b5fe6eec782 Barry Song         2020-12-14   997  		sg_set_page(&output, page, PAGE_SIZE, 0);
1ec3b5fe6eec782 Barry Song         2020-12-14   998  		acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
1ec3b5fe6eec782 Barry Song         2020-12-14   999  		ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
1ec3b5fe6eec782 Barry Song         2020-12-14  1000  		dlen = acomp_ctx->req->dlen;
1ec3b5fe6eec782 Barry Song         2020-12-14  1001  		mutex_unlock(acomp_ctx->mutex);
1ec3b5fe6eec782 Barry Song         2020-12-14  1002  
2b2811178e85553 Seth Jennings      2013-07-10  1003  		BUG_ON(ret);
2b2811178e85553 Seth Jennings      2013-07-10  1004  		BUG_ON(dlen != PAGE_SIZE);
2b2811178e85553 Seth Jennings      2013-07-10  1005  
2b2811178e85553 Seth Jennings      2013-07-10  1006  		/* page is up to date */
2b2811178e85553 Seth Jennings      2013-07-10  1007  		SetPageUptodate(page);
2b2811178e85553 Seth Jennings      2013-07-10  1008  	}
2b2811178e85553 Seth Jennings      2013-07-10  1009  
b349acc76b7f654 Weijie Yang        2013-11-12  1010  	/* move it to the tail of the inactive list after end_writeback */
b349acc76b7f654 Weijie Yang        2013-11-12  1011  	SetPageReclaim(page);
b349acc76b7f654 Weijie Yang        2013-11-12  1012  
2b2811178e85553 Seth Jennings      2013-07-10  1013  	/* start writeback */
2b2811178e85553 Seth Jennings      2013-07-10 @1014  	__swap_writepage(page, &wbc, end_swap_bio_write);
09cbfeaf1a5a67b Kirill A. Shutemov 2016-04-01  1015  	put_page(page);
2b2811178e85553 Seth Jennings      2013-07-10  1016  	zswap_written_back_pages++;
2b2811178e85553 Seth Jennings      2013-07-10  1017  
2b2811178e85553 Seth Jennings      2013-07-10  1018  	spin_lock(&tree->lock);
2b2811178e85553 Seth Jennings      2013-07-10  1019  	/* drop local reference */
0ab0abcf511545d Weijie Yang        2013-11-12  1020  	zswap_entry_put(tree, entry);
2b2811178e85553 Seth Jennings      2013-07-10  1021  
2b2811178e85553 Seth Jennings      2013-07-10  1022  	/*
0ab0abcf511545d Weijie Yang        2013-11-12  1023  	* There are two possible situations for entry here:
0ab0abcf511545d Weijie Yang        2013-11-12  1024  	* (1) refcount is 1(normal case),  entry is valid and on the tree
0ab0abcf511545d Weijie Yang        2013-11-12  1025  	* (2) refcount is 0, entry is freed and not on the tree
0ab0abcf511545d Weijie Yang        2013-11-12  1026  	*     because invalidate happened during writeback
0ab0abcf511545d Weijie Yang        2013-11-12  1027  	*  search the tree and free the entry if find entry
2b2811178e85553 Seth Jennings      2013-07-10  1028  	*/
0ab0abcf511545d Weijie Yang        2013-11-12  1029  	if (entry == zswap_rb_search(&tree->rbroot, offset))
0ab0abcf511545d Weijie Yang        2013-11-12  1030  		zswap_entry_put(tree, entry);
2b2811178e85553 Seth Jennings      2013-07-10  1031  	spin_unlock(&tree->lock);
2b2811178e85553 Seth Jennings      2013-07-10  1032  
0ab0abcf511545d Weijie Yang        2013-11-12  1033  	goto end;
0ab0abcf511545d Weijie Yang        2013-11-12  1034  
0ab0abcf511545d Weijie Yang        2013-11-12  1035  	/*
0ab0abcf511545d Weijie Yang        2013-11-12  1036  	* if we get here due to ZSWAP_SWAPCACHE_EXIST
c0c641d77b9ab0d Randy Dunlap       2021-02-25  1037  	* a load may be happening concurrently.
c0c641d77b9ab0d Randy Dunlap       2021-02-25  1038  	* it is safe and okay to not free the entry.
0ab0abcf511545d Weijie Yang        2013-11-12  1039  	* if we free the entry in the following put
c0c641d77b9ab0d Randy Dunlap       2021-02-25  1040  	* it is also okay to return !0
0ab0abcf511545d Weijie Yang        2013-11-12  1041  	*/
2b2811178e85553 Seth Jennings      2013-07-10  1042  fail:
2b2811178e85553 Seth Jennings      2013-07-10  1043  	spin_lock(&tree->lock);
0ab0abcf511545d Weijie Yang        2013-11-12  1044  	zswap_entry_put(tree, entry);
2b2811178e85553 Seth Jennings      2013-07-10  1045  	spin_unlock(&tree->lock);
0ab0abcf511545d Weijie Yang        2013-11-12  1046  
0ab0abcf511545d Weijie Yang        2013-11-12  1047  end:
fc6697a89f56d97 Tian Tao           2021-02-25  1048  	if (zpool_can_sleep_mapped(pool))
068619e32ff6229 Vitaly Wool        2019-09-23  1049  		zpool_unmap_handle(pool, handle);
fc6697a89f56d97 Tian Tao           2021-02-25  1050  	else
fc6697a89f56d97 Tian Tao           2021-02-25  1051  		kfree(tmp);
fc6697a89f56d97 Tian Tao           2021-02-25  1052  
2b2811178e85553 Seth Jennings      2013-07-10  1053  	return ret;
2b2811178e85553 Seth Jennings      2013-07-10  1054  }
2b2811178e85553 Seth Jennings      2013-07-10  1055  


