From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 19 Apr 2024 02:21:13 +0800
From: kernel test robot <lkp@intel.com>
To: Kairui Song, linux-mm@kvack.org
Cc: oe-kbuild-all@lists.linux.dev, Andrew Morton, Linux Memory Management List, "Huang, Ying", Matthew Wilcox, Chris Li, Barry Song, Ryan Roberts, Neil Brown, Minchan Kim, Hugh Dickins, David Hildenbrand, Yosry Ahmed, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Kairui Song
Subject: Re: [PATCH 8/8] mm/swap: reduce swap cache search space
Message-ID: <202404190258.wljFnvCL-lkp@intel.com>
References: <20240417160842.76665-9-ryncsn@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240417160842.76665-9-ryncsn@gmail.com>
Hi Kairui,

kernel test robot noticed the following build errors:

[auto build test ERROR on ceph-client/testing]
[also build test ERROR on ceph-client/for-linus trondmy-nfs/linux-next konis-nilfs2/upstream jaegeuk-f2fs/dev-test jaegeuk-f2fs/dev cifs/for-next linus/master v6.9-rc4]
[cannot apply to akpm-mm/mm-everything next-20240418]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Kairui-Song/NFS-remove-nfs_page_lengthg-and-usage-of-page_index/20240418-001343
base:   https://github.com/ceph/ceph-client.git testing
patch link:    https://lore.kernel.org/r/20240417160842.76665-9-ryncsn%40gmail.com
patch subject: [PATCH 8/8] mm/swap: reduce swap cache search space
config: i386-buildonly-randconfig-002-20240419 (https://download.01.org/0day-ci/archive/20240419/202404190258.wljFnvCL-lkp@intel.com/config)
compiler: gcc-9 (Ubuntu 9.5.0-4ubuntu2) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240419/202404190258.wljFnvCL-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202404190258.wljFnvCL-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/huge_memory.c: In function '__split_huge_page':
>> mm/huge_memory.c:2906:12: error: implicit declaration of function 'swap_cache_index' [-Werror=implicit-function-declaration]
    2906 |   offset = swap_cache_index(folio->swap);
         |            ^~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors


vim +/swap_cache_index +2906 mm/huge_memory.c

  2888	
  2889	static void __split_huge_page(struct page *page, struct list_head *list,
  2890			pgoff_t end, unsigned int new_order)
  2891	{
  2892		struct folio *folio = page_folio(page);
  2893		struct page *head = &folio->page;
  2894		struct lruvec *lruvec;
  2895		struct address_space *swap_cache = NULL;
  2896		unsigned long offset = 0;
  2897		int i, nr_dropped = 0;
  2898		unsigned int new_nr = 1 << new_order;
  2899		int order = folio_order(folio);
  2900		unsigned int nr = 1 << order;
  2901	
  2902		/* complete memcg works before add pages to LRU */
  2903		split_page_memcg(head, order, new_order);
  2904	
  2905		if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> 2906			offset = swap_cache_index(folio->swap);
  2907			swap_cache = swap_address_space(folio->swap);
  2908			xa_lock(&swap_cache->i_pages);
  2909		}
  2910	
  2911		/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
  2912		lruvec = folio_lruvec_lock(folio);
  2913	
  2914		ClearPageHasHWPoisoned(head);
  2915	
  2916		for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
  2917			__split_huge_page_tail(folio, i, lruvec, list, new_order);
  2918			/* Some pages can be beyond EOF: drop them from page cache */
  2919			if (head[i].index >= end) {
  2920				struct folio *tail = page_folio(head + i);
  2921	
  2922				if (shmem_mapping(folio->mapping))
  2923					nr_dropped++;
  2924				else if (folio_test_clear_dirty(tail))
  2925					folio_account_cleaned(tail,
  2926						inode_to_wb(folio->mapping->host));
  2927				__filemap_remove_folio(tail, NULL);
  2928				folio_put(tail);
  2929			} else if (!PageAnon(page)) {
  2930				__xa_store(&folio->mapping->i_pages, head[i].index,
  2931						head + i, 0);
  2932			} else if (swap_cache) {
  2933				__xa_store(&swap_cache->i_pages, offset + i,
  2934						head + i, 0);
  2935			}
  2936		}
  2937	
  2938		if (!new_order)
  2939			ClearPageCompound(head);
  2940		else {
  2941			struct folio *new_folio = (struct folio *)head;
  2942	
  2943			folio_set_order(new_folio, new_order);
  2944		}
  2945		unlock_page_lruvec(lruvec);
  2946		/* Caller disabled irqs, so they are still disabled here */
  2947	
  2948		split_page_owner(head, order, new_order);
  2949	
  2950		/* See comment in __split_huge_page_tail() */
  2951		if (folio_test_anon(folio)) {
  2952			/* Additional pin to swap cache */
  2953			if (folio_test_swapcache(folio)) {
  2954				folio_ref_add(folio, 1 + new_nr);
  2955				xa_unlock(&swap_cache->i_pages);
  2956			} else {
  2957				folio_ref_inc(folio);
  2958			}
  2959		} else {
  2960			/* Additional pin to page cache */
  2961			folio_ref_add(folio, 1 + new_nr);
  2962			xa_unlock(&folio->mapping->i_pages);
  2963		}
  2964		local_irq_enable();
  2965	
  2966		if (nr_dropped)
  2967			shmem_uncharge(folio->mapping->host, nr_dropped);
  2968		remap_page(folio, nr);
  2969	
  2970		if (folio_test_swapcache(folio))
  2971			split_swap_cluster(folio->swap);
  2972	
  2973		/*
  2974		 * set page to its compound_head when split to non order-0 pages, so
  2975		 * we can skip unlocking it below, since PG_locked is transferred to
  2976		 * the compound_head of the page and the caller will unlock it.
  2977		 */
  2978		if (new_order)
  2979			page = compound_head(page);
  2980	
  2981		for (i = 0; i < nr; i += new_nr) {
  2982			struct page *subpage = head + i;
  2983			struct folio *new_folio = page_folio(subpage);
  2984			if (subpage == page)
  2985				continue;
  2986			folio_unlock(new_folio);
  2987	
  2988			/*
  2989			 * Subpages may be freed if there wasn't any mapping
  2990			 * like if add_to_swap() is running on a lru page that
  2991			 * had its mapping zapped. And freeing these pages
  2992			 * requires taking the lru_lock so we do the put_page
  2993			 * of the tail pages after the split is complete.
  2994			 */
  2995			free_page_and_swap_cache(subpage);
  2996		}
  2997	}
  2998	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki