From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>,
linux-kernel@vger.kernel.org,
Linus Torvalds <torvalds@linux-foundation.org>,
linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Nick Piggin <npiggin@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org, "Huang, Ying" <ying.huang@intel.com>
Subject: Re: [PATCH v1 0/9] mm/memory: optimize unmap/zap with PTE-mapped THP
Date: Wed, 31 Jan 2024 15:03:47 +0100
Message-ID: <ZbpTQ80ebCUnAXW0@tiehlicka>
In-Reply-To: <ee94b8ca-9723-44c0-aa17-75c9678015c6@redhat.com>
On Wed 31-01-24 11:16:01, David Hildenbrand wrote:
[...]
> This 10000 pages limit was introduced in 53a59fc67f97 ("mm: limit mmu_gather
> batching to fix soft lockups on !CONFIG_PREEMPT") where we wanted to handle
> soft-lockups.
AFAIR at the time of that patch this was mostly just to put some cap on
the number of batches to collect and free at once. If there is a lot of
free memory and a large process exiting, this could grow really high. Now
that those pages^Wfolios can represent larger memory chunks, more physical
memory might be freed per batch, which could make the operation take
longer, but still far from triggering a soft lockup.

Latency might still suck on !PREEMPT kernels with too many pages to free
in a single batch, but I guess this is somewhat expected for this
preemption model. The soft lockup has to be avoided because it can panic
the machine in some configurations.
--
Michal Hocko
SUSE Labs
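
As an illustration of the batching idea discussed above, here is a minimal
userspace sketch. It is not the actual mm/mmu_gather.c code; BATCH_CAP,
free_one() and the numbers are invented for this example, and the kernel's
real cap is, if memory serves, expressed via MAX_GATHER_BATCH_COUNT. The
point it demonstrates is simply that capping how much is released per round
and yielding between rounds bounds the longest uninterrupted stretch of
freeing work, no matter how much memory the exiting process had mapped.

/*
 * Userspace analogue of capped batch freeing (illustrative only,
 * not the mm/mmu_gather.c implementation).
 */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

#define BATCH_CAP 10000   /* analogue of the 10000-page gather limit */

static void free_one(void *p)
{
	free(p);
}

int main(void)
{
	enum { TOTAL = 100000 };
	void **objs = calloc(TOTAL, sizeof(*objs));
	size_t i, freed = 0;

	if (!objs)
		return 1;
	for (i = 0; i < TOTAL; i++)
		objs[i] = malloc(64);   /* stand-in for "present pages" */

	while (freed < TOTAL) {
		size_t n = 0;

		/* Release at most BATCH_CAP objects in one round. */
		while (freed < TOTAL && n < BATCH_CAP) {
			free_one(objs[freed]);
			freed++;
			n++;
		}
		/*
		 * Yield between rounds so other work can run; this is the
		 * role the gather/flush boundary plays in keeping the
		 * non-preemptible stretch bounded on !CONFIG_PREEMPT.
		 */
		sched_yield();
	}
	printf("freed %zu objects in capped batches of %d\n", freed, BATCH_CAP);
	free(objs);
	return 0;
}

The exact cap value matters less than the structure: per-round work, and
hence worst-case latency between yield points, stays bounded.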
Thread overview:
2024-01-29 14:32 [PATCH v1 0/9] mm/memory: optimize unmap/zap with PTE-mapped THP David Hildenbrand
2024-01-29 14:32 ` [PATCH v1 1/9] mm/memory: factor out zapping of present pte into zap_present_pte() David Hildenbrand
2024-01-30 8:13 ` Ryan Roberts
2024-01-30 8:41 ` David Hildenbrand
2024-01-30 8:46 ` Ryan Roberts
2024-01-30 8:49 ` David Hildenbrand
2024-01-29 14:32 ` [PATCH v1 2/9] mm/memory: handle !page case in zap_present_pte() separately David Hildenbrand
2024-01-30 8:20 ` Ryan Roberts
2024-01-29 14:32 ` [PATCH v1 3/9] mm/memory: further separate anon and pagecache folio handling in zap_present_pte() David Hildenbrand
2024-01-30 8:31 ` Ryan Roberts
2024-01-30 8:37 ` David Hildenbrand
2024-01-30 8:45 ` Ryan Roberts
2024-01-30 8:47 ` David Hildenbrand
2024-01-29 14:32 ` [PATCH v1 4/9] mm/memory: factor out zapping folio pte into zap_present_folio_pte() David Hildenbrand
2024-01-30 8:47 ` Ryan Roberts
2024-01-29 14:32 ` [PATCH v1 5/9] mm/mmu_gather: pass "delay_rmap" instead of encoded page to __tlb_remove_page_size() David Hildenbrand
2024-01-30 8:41 ` Ryan Roberts
2024-01-29 14:32 ` [PATCH v1 6/9] mm/mmu_gather: define ENCODED_PAGE_FLAG_DELAY_RMAP David Hildenbrand
2024-01-30 9:03 ` Ryan Roberts
2024-01-29 14:32 ` [PATCH v1 7/9] mm/mmu_gather: add __tlb_remove_folio_pages() David Hildenbrand
2024-01-30 9:21 ` Ryan Roberts
2024-01-30 9:33 ` David Hildenbrand
2024-01-29 14:32 ` [PATCH v1 8/9] mm/mmu_gather: add tlb_remove_tlb_entries() David Hildenbrand
2024-01-30 9:33 ` Ryan Roberts
2024-01-29 14:32 ` [PATCH v1 9/9] mm/memory: optimize unmap/zap with PTE-mapped THP David Hildenbrand
2024-01-30 9:08 ` David Hildenbrand
2024-01-30 9:48 ` Ryan Roberts
2024-01-31 10:21 ` David Hildenbrand
2024-01-31 10:31 ` Ryan Roberts
2024-01-31 11:13 ` David Hildenbrand
2024-01-31 2:30 ` Yin Fengwei
2024-01-31 10:30 ` David Hildenbrand
2024-01-31 10:43 ` Yin, Fengwei
2024-01-31 2:20 ` [PATCH v1 0/9] " Yin Fengwei
2024-01-31 10:16 ` David Hildenbrand
2024-01-31 10:26 ` Ryan Roberts
2024-01-31 14:08 ` Michal Hocko
2024-01-31 14:20 ` David Hildenbrand
2024-01-31 14:03 ` Michal Hocko [this message]
2024-01-31 10:43 ` David Hildenbrand