From mboxrd@z Thu Jan  1 00:00:00 1970
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Huang Ying,
	Alistair Popple,
	Zi Yan,
	Yang Shi,
	Baolin Wang,
	Oscar Salvador,
	Matthew Wilcox,
	Bharata B Rao,
	Xin Hao,
	Minchan Kim,
	Mike Kravetz,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v5 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code
Date: Mon, 13 Feb 2023 20:34:44 +0800
Message-Id: <20230213123444.155149-10-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230213123444.155149-1-ying.huang@intel.com>
References: <20230213123444.155149-1-ying.huang@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a code cleanup patch, no functionality change is expected.
After the change, the line count is reduced, especially in the long
migrate_pages_batch().

Signed-off-by: "Huang, Ying"
Suggested-by: Alistair Popple
Reviewed-by: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Xin Hao
Cc: Minchan Kim
Cc: Mike Kravetz
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/migrate.c | 83 +++++++++++++++++++++++-----------------------------
 1 file changed, 36 insertions(+), 47 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 2fa420e4f68c..ef68a1aff35c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1117,9 +1117,6 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 	bool locked = false;
 	bool dst_locked = false;
 
-	if (!thp_migration_supported() && folio_test_transhuge(src))
-		return -ENOSYS;
-
 	if (folio_ref_count(src) == 1) {
 		/* Folio was freed from under us. So we are done. */
 		folio_clear_active(src);
@@ -1380,16 +1377,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 
-	/*
-	 * Migratability of hugepages depends on architectures and their size.
-	 * This check is necessary because some callers of hugepage migration
-	 * like soft offline and memory hotremove don't walk through page
-	 * tables or check whether the hugepage is pmd-based or not before
-	 * kicking migration.
-	 */
-	if (!hugepage_migration_supported(page_hstate(hpage)))
-		return -ENOSYS;
-
 	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done. */
 		folio_putback_active_hugetlb(src);
@@ -1556,6 +1543,20 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
 
 		cond_resched();
 
+		/*
+		 * Migratability of hugepages depends on architectures and
+		 * their size.  This check is necessary because some callers
+		 * of hugepage migration like soft offline and memory
+		 * hotremove don't walk through page tables or check whether
+		 * the hugepage is pmd-based or not before kicking migration.
+		 */
+		if (!hugepage_migration_supported(folio_hstate(folio))) {
+			nr_failed++;
+			stats->nr_failed_pages += nr_pages;
+			list_move_tail(&folio->lru, ret_folios);
+			continue;
+		}
+
 		rc = unmap_and_move_huge_page(get_new_page,
 					      put_new_page, private,
 					      &folio->page, pass > 2, mode,
@@ -1565,16 +1566,9 @@ static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
 		 * Success: hugetlb folio will be put back
 		 * -EAGAIN: stay on the from list
 		 * -ENOMEM: stay on the from list
-		 * -ENOSYS: stay on the from list
 		 * Other errno: put on ret_folios list
 		 */
 		switch(rc) {
-		case -ENOSYS:
-			/* Hugetlb migration is unsupported */
-			nr_failed++;
-			stats->nr_failed_pages += nr_pages;
-			list_move_tail(&folio->lru, ret_folios);
-			break;
 		case -ENOMEM:
 			/*
 			 * When memory is low, don't bother to try to migrate
@@ -1664,6 +1658,28 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 
 			cond_resched();
 
+			/*
+			 * Large folio migration might be unsupported or
+			 * the allocation might be failed so we should retry
+			 * on the same folio with the large folio split
+			 * to normal folios.
+			 *
+			 * Split folios are put in split_folios, and
+			 * we will migrate them after the rest of the
+			 * list is processed.
+			 */
+			if (!thp_migration_supported() && is_thp) {
+				nr_large_failed++;
+				stats->nr_thp_failed++;
+				if (!try_split_folio(folio, &split_folios)) {
+					stats->nr_thp_split++;
+					continue;
+				}
+				stats->nr_failed_pages += nr_pages;
+				list_move_tail(&folio->lru, ret_folios);
+				continue;
+			}
+
 			rc = migrate_folio_unmap(get_new_page, put_new_page, private,
						 folio, &dst, pass > 2, avoid_force_lock,
						 mode, reason, ret_folios);
@@ -1675,36 +1691,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 			 * -EAGAIN: stay on the from list
 			 * -EDEADLOCK: stay on the from list
 			 * -ENOMEM: stay on the from list
-			 * -ENOSYS: stay on the from list
 			 * Other errno: put on ret_folios list
 			 */
 			switch(rc) {
-			/*
-			 * Large folio migration might be unsupported or
-			 * the allocation could've failed so we should retry
-			 * on the same folio with the large folio split
-			 * to normal folios.
-			 *
-			 * Split folios are put in split_folios, and
-			 * we will migrate them after the rest of the
-			 * list is processed.
-			 */
-			case -ENOSYS:
-				/* Large folio migration is unsupported */
-				if (is_large) {
-					nr_large_failed++;
-					stats->nr_thp_failed += is_thp;
-					if (!try_split_folio(folio, &split_folios)) {
-						stats->nr_thp_split += is_thp;
-						break;
-					}
-				} else if (!no_split_folio_counting) {
-					nr_failed++;
-				}
-
-				stats->nr_failed_pages += nr_pages;
-				list_move_tail(&folio->lru, ret_folios);
-				break;
 			case -ENOMEM:
 				/*
 				 * When memory is low, don't bother to try to migrate
-- 
2.35.1
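
[Editor's note: the shape of the cleanup above can be sketched outside the kernel. An "unsupported" case that used to be reported as -ENOSYS from inside the worker function and handled in the caller's error switch is instead filtered by a pre-check before the attempt, so the switch only has to deal with genuine runtime failures. All names below (migration_supported, do_migrate_one, migrate_list) are illustrative stand-ins, not the kernel API.]

```c
#include <errno.h>

/* Stand-in for thp_migration_supported() /
 * hugepage_migration_supported(): a simple global flag. */
static int migration_supported = 1;

/* The worker: after the cleanup it no longer reports -ENOSYS for the
 * unsupported case; here it simply pretends every attempt succeeds. */
static int do_migrate_one(int item)
{
	(void)item;
	return 0;
}

/* The caller: the support check is hoisted in front of the attempt,
 * mirroring how the patch moves the checks into migrate_hugetlbs() and
 * migrate_pages_batch(), so the error switch loses its -ENOSYS case. */
static int migrate_list(const int *items, int n, int *nr_failed)
{
	int migrated = 0;

	for (int i = 0; i < n; i++) {
		if (!migration_supported) {	/* hoisted pre-check */
			(*nr_failed)++;
			continue;
		}
		switch (do_migrate_one(items[i])) {
		case 0:
			migrated++;
			break;
		case -ENOMEM:	/* genuine runtime failures stay here */
		default:
			(*nr_failed)++;
			break;
		}
	}
	return migrated;
}
```

Filtering unsupported folios up front means the failure accounting is written once at the check site instead of being duplicated inside each caller's switch, which is where most of the 47 deleted lines come from.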