Date: Thu, 6 Oct 2022 15:28:37 -0400
From: Peter Xu
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, Andrew Morton, Shuah Khan,
 Hugh Dickins, Vlastimil Babka, Andrea Arcangeli,
 "Matthew Wilcox (Oracle)", Jason Gunthorpe, John Hubbard
Subject: Re: [PATCH v1 6/7] mm/ksm: convert break_ksm() to use walk_page_range_vma()
References: <20220930141931.174362-1-david@redhat.com>
 <20220930141931.174362-7-david@redhat.com>
 <9a84440f-1462-2193-7dd6-c84e8bb22232@redhat.com>
In-Reply-To: <9a84440f-1462-2193-7dd6-c84e8bb22232@redhat.com>

On Thu, Oct 06, 2022 at 11:20:42AM +0200, David Hildenbrand wrote:
> > > +int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
> > > +			 struct mm_walk *walk)
> > > +{
> > > +	/* We only care about page tables to walk to a single base page. */
> > > +	if (pud_leaf(*pud) || !pud_present(*pud))
> > > +		return 1;
> > > +	return 0;
> > > +}
> > 
> > Is this needed?  I thought the pgtable walker handles this already.
> > 
> > [...]
> 
> Most probably yes.  I was trying to avoid PUD splits, but I guess we
> simply should not care in VMAs that are considered by KSM (MERGEABLE).
> Most probably never ever happens.

I was surprised the split is the default approach; I didn't really notice
that before.  Yeah, maybe better to keep it.
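If it stays, this is roughly how the callback plugs into the generic
walker.  A sketch only, not the actual patch: break_ksm_pmd_entry() and
the exact contents of break_ksm_ops below are my assumptions; only
break_ksm_pud_entry() is quoted above.

/*
 * Assumed pmd-level callback: mirrors the pud-level check, then reads
 * the single PTE that break_ksm() cares about.
 */
static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr,
			       unsigned long next, struct mm_walk *walk)
{
	bool *ksm_page = walk->private;
	struct page *page = NULL;
	spinlock_t *ptl;
	pte_t *pte;

	/* Only walk to a base page; bail out on huge/absent PMDs. */
	if (pmd_leaf(*pmd) || !pmd_present(*pmd))
		return 1;

	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
	if (pte_present(*pte))
		page = vm_normal_page(walk->vma, addr, *pte);
	/*
	 * The real patch would also need to recognize migration entries
	 * here in order to fully replace FOLL_MIGRATION.
	 */
	if (page)
		*ksm_page = PageKsm(page);
	pte_unmap_unlock(pte, ptl);
	return 0;
}

static const struct mm_walk_ops break_ksm_ops = {
	.pud_entry	= break_ksm_pud_entry,
	.pmd_entry	= break_ksm_pmd_entry,
};

break_ksm() then hands &ksm_page in as the private pointer, as in the
hunk quoted below:

	walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
			    &break_ksm_ops, &ksm_page);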
> > >  static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
> > >  {
> > > -	struct page *page;
> > >  	vm_fault_t ret = 0;
> > >  
> > > +	if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)))
> > > +		return -EINVAL;
> > > +
> > >  	do {
> > >  		bool ksm_page = false;
> > >  
> > >  		cond_resched();
> > > -		page = follow_page(vma, addr,
> > > -				   FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
> > > -		if (IS_ERR_OR_NULL(page))
> > > -			break;
> > > -		if (PageKsm(page))
> > > -			ksm_page = true;
> > > -		put_page(page);
> > > +		ret = walk_page_range_vma(vma, addr, addr + PAGE_SIZE,
> > > +					  &break_ksm_ops, &ksm_page);
> > > +		if (WARN_ON_ONCE(ret < 0))
> > > +			return ret;
> > 
> > I'm not sure this would be worth it, especially with a 4% degradation.
> > The next patch will be able to remove 50-odd LOC, but this patch adds
> > 60+ anyway, based on another new helper just introduced...
> > 
> > I just don't see whether there's a strong enough reason to do so to
> > drop FOLL_MIGRATION.  It's different from the previous VM_FAULT_WRITE
> > refactor, where the unshare approach was a much better justification
> > to me.
> > 
> > Perhaps I missed something?
> 
> My main motivation is to remove most of that GUP hackery here, which is
> 
> 1) Getting a reference on a page and waiting for migration to finish,
>    even though both are unnecessary.
> 2) As we don't have sufficient control, we added FOLL_MIGRATION hacks to
>    MM core to work around limitations in the GUP-based approach.

One reason Hugh added FOLL_MIGRATION, as I understand it, was to leave a
hint for follow_page() users:

  I'd have preferred to avoid another flag, and do it every time, in case
  someone else makes the same easy mistake..

Though..

> 3) We rely on the legacy follow_page() interface that we should really
>    get rid of in the long term.

..this is part of an effort to remove follow_page()?  More context would
be helpful in that case.

> All we want to do is walk the page tables and make a decision if
> something we care about is mapped.  Instead of leaking these details via
> hacks into GUP code and making that code harder to grasp/maintain, this
> patch moves that logic to the actual user, while reusing generic page
> walking code.

Indeed there's only the one KSM user, which at least shows that the flag
was never widely used.

> Yes, we have to extend page walking code, but it's just the natural,
> non-hacky way of doing it.
> 
> Regarding the 4% performance degradation (if I hadn't added the
> benchmarks, nobody would know or probably care ;) ), I am not quite sure
> why that is the case.  We're just walking page tables after all in both
> cases.  Maybe the callback-based implementation of the pagewalk code is
> less efficient, but we might be able to improve that implementation if
> we really care about performance here.  Maybe removing
> break_ksm_pud_entry() already improves the numbers slightly.

Yeah, it could be that the walker is just slower.  And for !ksm pages your
code should be faster when it hits migration entries, but that should
really be rare anyway.

-- 
Peter Xu