From: Li Xinhai
To:
linux-mm@kvack.org
Cc: akpm@linux-foundation.org, Michal Hocko, Mike Kravetz
Subject: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
Date: Thu, 16 Jan 2020 04:11:25 +0000
Message-Id: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>
X-Mailer: git-send-email 1.8.3.1

Check the hstate at the early page-isolation phase, instead of during
the unmap and move phase, to avoid uselessly isolating pages whose
hstate does not support migration.

Signed-off-by: Li Xinhai
Cc: Michal Hocko
Cc: Mike Kravetz
---
 include/linux/hugetlb.h   | 10 ++++++++++
 include/linux/mempolicy.h | 29 +----------------------------
 mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 28 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 31d4920..c9d871d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
 	return arch_hugetlb_migration_supported(h);
 }
 
+static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
+{
+	return hugepage_migration_supported(hstate_vma(vma));
+}
+
 /*
  * Movability check is different as compared to migration check.
  * It determines whether or not a huge page should be placed on
@@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
 	return false;
 }
 
+static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
+{
+	return false;
+}
+
 static inline bool hugepage_movable_supported(struct hstate *h)
 {
 	return false;
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 5228c62..8165278 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
 /* Check if a vma is migratable */
-static inline bool vma_migratable(struct vm_area_struct *vma)
-{
-	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-		return false;
-
-	/*
-	 * DAX device mappings require predictable access latency, so avoid
-	 * incurring periodic faults.
-	 */
-	if (vma_is_dax(vma))
-		return false;
-
-#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
-	if (vma->vm_flags & VM_HUGETLB)
-		return false;
-#endif
-
-	/*
-	 * Migration allocates pages in the highest zone. If we cannot
-	 * do so then migration (at least from node to node) is not
-	 * possible.
-	 */
-	if (vma->vm_file &&
-		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
-		< policy_zone)
-		return false;
-	return true;
-}
+extern bool vma_migratable(struct vm_area_struct *vma);
 
 extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 067cf7d..8a01fb1 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
 
 #endif /* CONFIG_COMPAT */
 
+bool vma_migratable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+		return false;
+
+	/*
+	 * DAX device mappings require predictable access latency, so avoid
+	 * incurring periodic faults.
+	 */
+	if (vma_is_dax(vma))
+		return false;
+
+	if (is_vm_hugetlb_page(vma) &&
+		!vm_hugepage_migration_supported(vma))
+		return false;
+
+	/*
+	 * Migration allocates pages in the highest zone. If we cannot
+	 * do so then migration (at least from node to node) is not
+	 * possible.
+	 */
+	if (vma->vm_file &&
+		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
+		< policy_zone)
+		return false;
+	return true;
+}
+
 struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
 		unsigned long addr)
 {
-- 
1.8.3.1