Date: Sun, 23 Jan 2022 20:40:08 -0800 (PST)
From: Hugh Dickins
To: Matthew Wilcox
Cc: Hugh Dickins, linux-mm@kvack.org, Dave Hansen
Subject: Re: [RFC] Simplify users of vma_address_end()
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Wed, 19 Jan 2022, Matthew Wilcox wrote:

> Hi Hugh,
> 
> What do you think to this simplification?  Dave dislikes the ?: usage
> in this context, and I thought we could usefully put the condition
> inside the (inline) function.

Yes, that's an improvement. I think when I created vma_address_end()
originally (long before posting), there was only one caller who needed
to worry about PageKsm, so I avoided inflicting that awkwardness on
the others; but now it looks like most (of its few) callers do have
to worry.

But I'll suggest one nit, which you can ignore if you feel strongly
otherwise: I'd prefer vma_address_end(page, vma, address) as a better
match to vma_address(page, vma).

> 
> I could also go for:
> 
> 	if (!PageCompound(page))
> 		return address + PAGE_SIZE;
> 	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */

Another way to do it, yes: I don't mind either way.

Thanks,
Hugh

> 
> in case anyone starts to create compound KSM pages.
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index c774075b3893..7cd33ee4df32 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -462,13 +462,13 @@ vma_address(struct page *page, struct vm_area_struct *vma)
>   * Assumes that vma_address() already returned a good starting address.
>   * If page is a compound head, the entire compound page is considered.
>   */
> -static inline unsigned long
> -vma_address_end(struct page *page, struct vm_area_struct *vma)
> +static inline unsigned long vma_address_end(struct page *page,
> +		unsigned long address, struct vm_area_struct *vma)
>  {
>  	pgoff_t pgoff;
> -	unsigned long address;
>  
> -	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
> +	if (PageKsm(page) || !PageCompound(page))
> +		return address + PAGE_SIZE;
>  	pgoff = page_to_pgoff(page) + compound_nr(page);
>  	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>  	/* Check for address beyond vma (or wrapped through 0?) */
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index f7b331081791..fcd7b9ccfb1e 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -181,15 +181,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			return true;
>  	}
>  
> -	/*
> -	 * Seek to next pte only makes sense for THP.
> -	 * But more important than that optimization, is to filter out
> -	 * any PageKsm page: whose page->index misleads vma_address()
> -	 * and vma_address_end() to disaster.
> -	 */
> -	end = PageTransCompound(page) ?
> -		vma_address_end(page, pvmw->vma) :
> -		pvmw->address + PAGE_SIZE;
> +	end = vma_address_end(page, pvmw->address, pvmw->vma);
>  	if (pvmw->pte)
>  		goto next_pte;
>  restart:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a531b64d53fa..5d5dc2a60a26 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -946,7 +946,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>  	 */
>  	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
>  				0, vma, vma->vm_mm, address,
> -				vma_address_end(page, vma));
> +				vma_address_end(page, address, vma));
>  	mmu_notifier_invalidate_range_start(&range);
>  
>  	while (page_vma_mapped_walk(&pvmw)) {
> @@ -1453,8 +1453,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  	 * Note that the page can not be free in this function as call of
>  	 * try_to_unmap() must hold a reference on the page.
>  	 */
> -	range.end = PageKsm(page) ?
> -			address + PAGE_SIZE : vma_address_end(page, vma);
> +	range.end = vma_address_end(page, address, vma);
>  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
>  				address, range.end);
>  	if (PageHuge(page)) {
> @@ -1757,8 +1756,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>  	 * Note that the page can not be free in this function as call of
>  	 * try_to_unmap() must hold a reference on the page.
>  	 */
> -	range.end = PageKsm(page) ?
> -			address + PAGE_SIZE : vma_address_end(page, vma);
> +	range.end = vma_address_end(page, address, vma);
>  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
>  				address, range.end);
>  	if (PageHuge(page)) {