Message-ID: <1570052242.5576.266.camel@lca.pw>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
From: Qian Cai
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Oscar Salvador, Michal Hocko,
 Pavel Tatashin, Dan Williams, Thomas Gleixner
Date: Wed, 02 Oct 2019 17:37:22 -0400
In-Reply-To: <20190924143615.19628-1-david@redhat.com>
References: <20190924143615.19628-1-david@redhat.com>

On Tue, 2019-09-24 at 16:36 +0200, David Hildenbrand wrote:
> Since commit 3f906ba23689 ("mm/memory-hotplug: switch locking to a percpu
> rwsem") we do a cpus_read_lock() in mem_hotplug_begin(). This was
> introduced to fix a potential deadlock between get_online_mems() and
> get_online_cpus() - the memory and cpu hotplug lock. The root issue was
> that build_all_zonelists() -> stop_machine() required the cpu hotplug lock:
> memory hotplug takes the memory hotplug lock and then calls
> stop_machine(), which calls get_online_cpus(). That's the reverse lock
> order to get_online_cpus(); get_online_mems(); in mm/slab_common.c.
>
> So memory hotplug never really required any cpu lock itself; only
> stop_machine() and lru_add_drain_all() required it. Back then,
> stop_machine_cpuslocked() and lru_add_drain_all_cpuslocked() were used,
> as the cpu hotplug lock was now obtained in the caller.
>
> Since commit 11cd8638c37f ("mm, page_alloc: remove stop_machine from build
> all_zonelists"), the stop_machine_cpuslocked() call is gone.
> build_all_zonelists() no longer requires the cpu lock and no longer
> makes use of stop_machine().
>
> Since commit 9852a7212324 ("mm: drop hotplug lock from
> lru_add_drain_all()"), lru_add_drain_all() "Doesn't need any cpu hotplug
> locking because we do rely on per-cpu kworkers being shut down before our
> page_alloc_cpu_dead callback is executed on the offlined cpu.". The
> lru_add_drain_all_cpuslocked() variant was removed.
>
> So there is nothing left that requires the cpu hotplug lock. The memory
> hotplug lock and the device hotplug lock are sufficient.

Actually, powerpc does:

arch_add_memory()
  resize_hpt_for_hotplug()
    pseries_lpar_resize_hpt()
      stop_machine_cpuslocked()

>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Cc: Thomas Gleixner
> Signed-off-by: David Hildenbrand
> ---
>
> RFC -> v1:
> - Reword and add more details why the cpu hotplug lock was needed here
>   in the first place, and why we no longer require it.
>
> ---
>  mm/memory_hotplug.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c3e9aed6023f..5fa30f3010e1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -88,14 +88,12 @@ __setup("memhp_default_state=", setup_memhp_default_state);
>  
>  void mem_hotplug_begin(void)
>  {
> -	cpus_read_lock();
>  	percpu_down_write(&mem_hotplug_lock);
>  }
>  
>  void mem_hotplug_done(void)
>  {
>  	percpu_up_write(&mem_hotplug_lock);
> -	cpus_read_unlock();
>  }
>  
>  u64 max_mem_size = U64_MAX;