public inbox for llvm@lists.linux.dev
From: kernel test robot <lkp@intel.com>
To: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev
Subject: Re: [PATCH v3 13/13] mm: zswap: Compress batching with Intel IAA in zswap_store() of large folios.
Date: Thu, 7 Nov 2024 13:24:56 +0800	[thread overview]
Message-ID: <202411071309.tajLPRwF-lkp@intel.com> (raw)
In-Reply-To: <20241106192105.6731-14-kanchana.p.sridhar@intel.com>

Hi Kanchana,

kernel test robot noticed the following build errors:

[auto build test ERROR on 7994b7ea6ac880efd0c38fedfbffd5ab8b1b7b2b]

url:    https://github.com/intel-lab-lkp/linux/commits/Kanchana-P-Sridhar/crypto-acomp-Define-two-new-interfaces-for-compress-decompress-batching/20241107-032310
base:   7994b7ea6ac880efd0c38fedfbffd5ab8b1b7b2b
patch link:    https://lore.kernel.org/r/20241106192105.6731-14-kanchana.p.sridhar%40intel.com
patch subject: [PATCH v3 13/13] mm: zswap: Compress batching with Intel IAA in zswap_store() of large folios.
config: powerpc-randconfig-002-20241107 (https://download.01.org/0day-ci/archive/20241107/202411071309.tajLPRwF-lkp@intel.com/config)
compiler: clang version 20.0.0git (https://github.com/llvm/llvm-project 592c0fe55f6d9a811028b5f3507be91458ab2713)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241107/202411071309.tajLPRwF-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202411071309.tajLPRwF-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from mm/zswap.c:18:
   In file included from include/linux/highmem.h:8:
   In file included from include/linux/cacheflush.h:5:
   In file included from arch/powerpc/include/asm/cacheflush.h:7:
   In file included from include/linux/mm.h:2213:
   include/linux/vmstat.h:518:36: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
     518 |         return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
         |                               ~~~~~~~~~~~ ^ ~~~
   In file included from mm/zswap.c:40:
   In file included from mm/internal.h:13:
   include/linux/mm_inline.h:47:41: warning: arithmetic between different enumeration types ('enum node_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      47 |         __mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
         |                                    ~~~~~~~~~~~ ^ ~~~
   include/linux/mm_inline.h:49:22: warning: arithmetic between different enumeration types ('enum zone_stat_item' and 'enum lru_list') [-Wenum-enum-conversion]
      49 |                                 NR_ZONE_LRU_BASE + lru, nr_pages);
         |                                 ~~~~~~~~~~~~~~~~ ^ ~~~
>> mm/zswap.c:1660:33: error: cannot take the address of an rvalue of type 'int'
    1660 |         if (folio_test_large(folio) && READ_ONCE(compress_batching)) {
         |                                        ^         ~~~~~~~~~~~~~~~~~
   include/asm-generic/rwonce.h:50:2: note: expanded from macro 'READ_ONCE'
      50 |         __READ_ONCE(x);                                                 \
         |         ^           ~
   include/asm-generic/rwonce.h:44:70: note: expanded from macro '__READ_ONCE'
      44 | #define __READ_ONCE(x)  (*(const volatile __unqual_scalar_typeof(x) *)&(x))
         |                                                                       ^ ~
   3 warnings and 1 error generated.


vim +/int +1660 mm/zswap.c

  1634	
  1635	/*
  1636	 * Modified to use the IAA compress batching framework implemented in
  1637	 * __zswap_store_batch_core() if sysctl vm.compress-batching is 1.
  1638	 * The batching code is intended to significantly improve folio store
  1639	 * performance over the sequential code.
  1640	 */
  1641	bool zswap_store(struct folio *folio)
  1642	{
  1643		long nr_pages = folio_nr_pages(folio);
  1644		swp_entry_t swp = folio->swap;
  1645		struct crypto_acomp_ctx *acomp_ctx;
  1646		struct obj_cgroup *objcg = NULL;
  1647		struct mem_cgroup *memcg = NULL;
  1648		struct zswap_pool *pool;
  1649		size_t compressed_bytes = 0;
  1650		bool ret = false;
  1651		long index;
  1652	
  1653		/*
  1654		 * Improve large folio zswap_store() latency with IAA compress batching,
  1655		 * if this is enabled by setting sysctl vm.compress-batching to "1".
  1656		 * If enabled, the large folio's pages are compressed in parallel in
  1657		 * batches of SWAP_CRYPTO_BATCH_SIZE pages. If disabled, every page in
  1658		 * the large folio is compressed sequentially.
  1659		 */
> 1660		if (folio_test_large(folio) && READ_ONCE(compress_batching)) {
  1661			pool = zswap_pool_current_get();
  1662			if (!pool) {
  1663				pr_err("Cannot setup acomp_batch_ctx for compress batching: no current pool found\n");
  1664				goto sequential_store;
  1665			}
  1666	
  1667			if (zswap_pool_can_batch(pool)) {
  1668				int error = -1;
  1669				bool store_batch = __zswap_store_batch_core(
  1670							folio_nid(folio),
  1671							&folio, &error, 1);
  1672	
  1673				if (store_batch) {
  1674					zswap_pool_put(pool);
  1675					if (!error)
  1676						ret = true;
  1677					return ret;
  1678				}
  1679			}
  1680			zswap_pool_put(pool);
  1681		}
  1682	
  1683	sequential_store:
  1684	
  1685		VM_WARN_ON_ONCE(!folio_test_locked(folio));
  1686		VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
  1687	
  1688		if (!zswap_enabled)
  1689			goto check_old;
  1690	
  1691		objcg = get_obj_cgroup_from_folio(folio);
  1692		if (objcg && !obj_cgroup_may_zswap(objcg)) {
  1693			memcg = get_mem_cgroup_from_objcg(objcg);
  1694			if (shrink_memcg(memcg)) {
  1695				mem_cgroup_put(memcg);
  1696				goto put_objcg;
  1697			}
  1698			mem_cgroup_put(memcg);
  1699		}
  1700	
  1701		if (zswap_check_limits())
  1702			goto put_objcg;
  1703	
  1704		pool = zswap_pool_current_get();
  1705		if (!pool)
  1706			goto put_objcg;
  1707	
  1708		if (objcg) {
  1709			memcg = get_mem_cgroup_from_objcg(objcg);
  1710			if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
  1711				mem_cgroup_put(memcg);
  1712				goto put_pool;
  1713			}
  1714			mem_cgroup_put(memcg);
  1715		}
  1716	
  1717		acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
  1718		mutex_lock(&acomp_ctx->mutex);
  1719	
  1720		for (index = 0; index < nr_pages; ++index) {
  1721			struct page *page = folio_page(folio, index);
  1722			ssize_t bytes;
  1723	
  1724			bytes = zswap_store_page(page, objcg, pool);
  1725			if (bytes < 0)
  1726				goto put_pool;
  1727			compressed_bytes += bytes;
  1728		}
  1729	
  1730		if (objcg) {
  1731			obj_cgroup_charge_zswap(objcg, compressed_bytes);
  1732			count_objcg_events(objcg, ZSWPOUT, nr_pages);
  1733		}
  1734	
  1735		atomic_long_add(nr_pages, &zswap_stored_pages);
  1736		count_vm_events(ZSWPOUT, nr_pages);
  1737	
  1738		ret = true;
  1739	
  1740	put_pool:
  1741		mutex_unlock(&acomp_ctx->mutex);
  1742		zswap_pool_put(pool);
  1743	put_objcg:
  1744		obj_cgroup_put(objcg);
  1745		if (!ret && zswap_pool_reached_full)
  1746			queue_work(shrink_wq, &zswap_shrink_work);
  1747	check_old:
  1748		/*
  1749		 * If the zswap store fails or zswap is disabled, we must invalidate
  1750		 * the possibly stale entries which were previously stored at the
  1751		 * offsets corresponding to each page of the folio. Otherwise,
  1752		 * writeback could overwrite the new data in the swapfile.
  1753		 */
  1754		if (!ret) {
  1755			unsigned type = swp_type(swp);
  1756			pgoff_t offset = swp_offset(swp);
  1757			struct zswap_entry *entry;
  1758			struct xarray *tree;
  1759	
  1760			for (index = 0; index < nr_pages; ++index) {
  1761				tree = swap_zswap_tree(swp_entry(type, offset + index));
  1762				entry = xa_erase(tree, offset + index);
  1763				if (entry)
  1764					zswap_entry_free(entry);
  1765			}
  1766		}
  1767	
  1768		return ret;
  1769	}
  1770	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
