From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 16 Jun 2021 09:51:17 +0800
From: Feng Tang
To: Peter Xu
Cc: Jason Gunthorpe, linux-mm@kvack.org, Andrew Morton, Linus Torvalds,
 kernel test robot, John Hubbard, linux-kernel@vger.kernel.org,
 lkp@lists.01.org
Subject: Re: [PATCH] mm: relocate 'write_protect_seq' in struct mm_struct
Message-ID: <20210616015117.GA55795@shbuild999.sh.intel.com>
References: <1623376482-92265-1-git-send-email-feng.tang@intel.com>
 <20210611170917.GW1002214@nvidia.com>
 <20210614032738.GA72794@shbuild999.sh.intel.com>
 <20210615011102.GA38780@shbuild999.sh.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.24 (2015-08-30)

Hi Peter,

On Tue, Jun 15, 2021 at 02:52:49PM -0400, Peter Xu wrote:
> On Tue, Jun 15, 2021 at 09:11:03AM +0800, Feng Tang wrote:
> > On Mon, Jun 14, 2021 at 11:27:39AM +0800, Feng Tang wrote:
> > > >
> > > > It seems Ok to me, but didn't we earlier add the has_pinned which
> > > > would have changed the layout too? Are we chasing performance deltas
> > > > nobody cares about?
> > >
> > > Good point! I checked my email folder for 0day's reports, and haven't
> > > found a report related to Peter's commit 008cfe4418b3 ("mm: Introduce
> > > mm_struct.has_pinned") which adds the 'has_pinned' field.
> > >
> > > Will run the same test for it and report back.
> >
> > I ran the same will-it-scale/mmap1 case for Peter's commit 008cfe4418b3
> > and its parent commit, and there is no obvious performance diff:
> >
> >   a1bffa48745afbb5        008cfe4418b3dbda2ff820cdd7b
> >   ----------------        ---------------------------
> >     344353      -0.4%       342929    will-it-scale.48.threads
> >       7173      -0.4%         7144    will-it-scale.per_thread_ops
> >
> > And from the pahole info for the 2 kernels, the 'has_pinned' added by
> > Peter's commit is put into an existing 4-byte hole, so all the following
> > fields keep their alignment unchanged. Peter may have done it purposely
> > considering the alignment, so no performance change is expected.
>
> Thanks for verifying this. I didn't do it on purpose at least for the initial
> version, but I do remember some comment to fill up that hole, so it may have
> got moved around.
>
> Also note that if nothing goes wrong, has_pinned will be gone in the next
> release with commit 3c0c4cda6d48 ("mm: gup: pack has_pinned in MMF_HAS_PINNED",
> 2021-05-26); it's in -mm-next but hasn't reached the main branch yet. So then I
> think the 4-byte hole should come back to us again, with no perf loss either.

Thanks for the heads up.

> What I'm thinking is whether we should move some important (and especially
> CONFIG_* irrelevant) fields to the top of the whole struct define, make sure
> they're most optimal for the common workload and make them static. Then
> there'll be less or no possibility that some new field regresses some common
> workload by accident. Not sure whether it makes sense to do so.

Yep, it makes sense to me, as it makes the alignment more predictable and
controllable. But usually we dare not move fields around, as it could cause
improvements to some cases and regressions to other cases, given different
benchmarks can see different hotspots. And most of our patches changing a
data structure's layout are regression driven :)

Thanks,
Feng

> Thanks,
>
> --
> Peter Xu