Date: Fri, 8 Apr 2022 11:26:03 +0200
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [RFC PATCH 0/5] hugetlb: Change huge pmd sharing
To: Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Michal Hocko, Peter Xu, Naoya Horiguchi, Aneesh Kumar K.V,
    Andrea Arcangeli, Kirill A. Shutemov, Davidlohr Bueso,
    Prakash Sangappa, James Houghton, Mina Almasry, Ray Fucillo,
    Andrew Morton
In-Reply-To: <4ddf7d53-db45-4201-8ae0-095698ec7e1a@oracle.com>
References: <20220406204823.46548-1-mike.kravetz@oracle.com>
 <045a59a1-0929-a969-b184-1311f81504b8@redhat.com>
 <4ddf7d53-db45-4201-8ae0-095698ec7e1a@oracle.com>

>>
>> Let's assume a 4 TiB device and 2 MiB hugepage size. That's 2097152 huge
>> pages. Each such PMD entry consumes 8 bytes. That's 16 MiB.
>>
>> Sure, with thousands of processes sharing that memory, the size of page
>> tables required would increase with each and every process. But TBH,
>> that's in no way different to other file systems where we're even
>> dealing with PTE tables.
>
> The numbers for a real use case I am frequently quoted are something like:
> 1TB shared mapping, 10,000 processes sharing the mapping
> 4K PMD Page per 1GB of shared mapping
> 4M saving for each shared process
> 9,999 * 4M ~= 39GB savings

3.7 % of all memory. Noticeable if the feature is removed? Yes. Do we care
about supporting such corner cases that result in a maintenance burden?
My take is a clear no.
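Just to double-check the arithmetic, a quick userspace sketch (plain C,
not kernel code; the constants are simply the sizes quoted above: 8-byte
PMD entries, 4 KiB PMD pages, 2 MiB huge pages):

#include <stdio.h>

int main(void)
{
	/* Sizes as quoted above, not taken from kernel headers. */
	unsigned long long pmd_entry = 8;		/* bytes per PMD entry */
	unsigned long long pmd_page  = 4096;		/* bytes per PMD page  */
	unsigned long long hugepage  = 2ULL << 20;	/* 2 MiB huge page     */

	/* 4 TiB device: PMD entries and PMD table memory for one process. */
	unsigned long long dev = 4ULL << 40;
	unsigned long long entries = dev / hugepage;
	printf("4 TiB / 2 MiB: %llu entries -> %llu MiB of PMD tables\n",
	       entries, entries * pmd_entry >> 20);		/* 2097152, 16 MiB */

	/* 1 TiB mapping, 10,000 processes: one 4 KiB PMD page covers
	 * 512 * 2 MiB = 1 GiB, i.e. 4 MiB of PMD pages per process. */
	unsigned long long map = 1ULL << 40;
	unsigned long long per_gib  = pmd_page / pmd_entry * hugepage;	/* 1 GiB */
	unsigned long long per_proc = map / per_gib * pmd_page;	/* 4 MiB */
	printf("per process: %llu MiB, 9999 sharers: ~%llu GiB saved\n",
	       per_proc >> 20, 9999 * per_proc >> 30);			/* ~39 GiB */

	return 0;
}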
>
> However, if you look at commit 39dde65c9940c which introduced huge pmd
> sharing it states that performance rather than memory savings was the
> primary objective.
>
> "For hugetlb, the saving on page table memory is not the primary
> objective (as hugetlb itself already cuts down page table overhead
> significantly), instead, the purpose of using shared page table on
> hugetlb is to allow faster TLB refill and smaller cache pollution upon
> TLB miss.
>
> With PT sharing, pte entries are shared among hundreds of processes, the
> cache consumption used by all the page table is smaller and in return,
> application gets much higher cache hit ratio. One other effect is that
> cache hit ratio with hardware page walker hitting on pte in cache will be
> higher and this helps to reduce tlb miss latency. These two effects
> contribute to higher application performance."
>
> That 'makes sense', but I have never tried to measure any such performance
> benefit. It is easier to calculate the memory savings.

It does make sense; but then again, what's specific about hugetlb here?
Most probably it was just easy to add to hugetlb, in contrast to other
types of shared memory.

>
>>
>> Which results in me wondering if
>>
>> a) We should simply use gigantic pages for such an extreme use case.
>>    Allows for freeing up more memory via vmemmap either way.
>
> The only problem with this is that many processors in use today have
> limited TLB entries for gigantic pages.
>
>> b) We should instead look into reclaiming reconstructable page tables.
>>    It's hard to imagine that each and every process accesses each and
>>    every part of the gigantic file all of the time.
>> c) We should instead establish a more generic page table sharing
>>    mechanism.
>
> Yes. I think that is the direction taken by the mshare() proposal. If we
> have a more generic approach we can certainly start deprecating hugetlb
> pmd sharing.

My strong opinion is to remove it ASAP and get something proper into place.

-- 
Thanks,

David / dhildenb