Date: Mon, 16 Oct 2023 19:11:46 +0300
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Vlastimil Babka
Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
 Andrew Morton, Joerg Roedel, Ard Biesheuvel, Nikolay Borisov, Andi Kleen,
 Kuppuswamy Sathyanarayanan, David Rientjes, Tom Lendacky, Thomas Gleixner,
 Peter Zijlstra, Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
 David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
 tim.gardner@canonical.com, khalid.elmously@canonical.com,
 philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
 x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
 linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org, stable@kernel.org
Subject: Re: [PATCH] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
Message-ID: <20231016161146.4j24jprkeaanflpf@box.shutemov.name>
References: <20231014204040.28765-1-kirill.shutemov@linux.intel.com>
 <088593ea-e001-fa87-909f-a196b1373ca4@suse.cz>
In-Reply-To: <088593ea-e001-fa87-909f-a196b1373ca4@suse.cz>
On Mon, Oct 16, 2023 at 12:58:34PM +0200, Vlastimil Babka wrote:
> On 10/14/23 22:40, Kirill A. Shutemov wrote:
> > Michael reported soft lockups on a system that has unaccepted memory.
> > This occurs when a user attempts to allocate and accept memory on
> > multiple CPUs simultaneously.
> >
> > The root cause of the issue is that memory acceptance is serialized with
> > a spinlock, allowing only one CPU to accept memory at a time. The other
> > CPUs spin and wait for their turn, leading to starvation and soft lockup
> > reports.
> >
> > To address this, the code has been modified to release the spinlock
> > while accepting memory. This allows for parallel memory acceptance on
> > multiple CPUs.
> >
> > A newly introduced "accepting_list" keeps track of which memory is
> > currently being accepted. This is necessary to prevent parallel
> > acceptance of the same memory block. If a collision occurs, the lock is
> > released and the process is retried.
> >
> > Such collisions should rarely occur. The main path for memory acceptance
> > is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> > as MAX_ORDER is equal to or larger than the unit_size, collisions will
> > never occur because the caller fully owns the memory block being
> > accepted.
> >
> > Aside from the page allocator, only memblock and deferred_free_range()
> > accept memory, but this only happens during boot.
> >
> > The code has been tested with unit_size == 128MiB to trigger collisions
> > and validate the retry codepath.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Reported-by: Michael Roth

> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")

> > Cc:
> > ---
> >  drivers/firmware/efi/unaccepted_memory.c | 55 ++++++++++++++++++++++--
> >  1 file changed, 51 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
> > index 853f7dc3c21d..8af0306c8e5c 100644
> > --- a/drivers/firmware/efi/unaccepted_memory.c
> > +++ b/drivers/firmware/efi/unaccepted_memory.c
> > @@ -5,9 +5,17 @@
> >  #include
> >  #include
> >
> > -/* Protects unaccepted memory bitmap */
> > +/* Protects unaccepted memory bitmap and accepting_list */
> >  static DEFINE_SPINLOCK(unaccepted_memory_lock);
> >
> > +struct accept_range {
> > +	struct list_head list;
> > +	unsigned long start;
> > +	unsigned long end;
> > +};
> > +
> > +static LIST_HEAD(accepting_list);
> > +
> >  /*
> >   * accept_memory() -- Consult bitmap and accept the memory if needed.
> >   *
> > @@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  {
> >  	struct efi_unaccepted_memory *unaccepted;
> >  	unsigned long range_start, range_end;
> > +	struct accept_range range, *entry;
> >  	unsigned long flags;
> >  	u64 unit_size;
> >
> > @@ -78,20 +87,58 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
> >  	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
> >  		end = unaccepted->size * unit_size * BITS_PER_BYTE;
> >
> > -	range_start = start / unit_size;
> > -
> > +	range.start = start / unit_size;
> > +	range.end = DIV_ROUND_UP(end, unit_size);
> > +retry:
> >  	spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > +
> > +	/*
> > +	 * Check if anybody works on accepting the same range of the memory.
> > +	 *
> > +	 * The check with unit_size granularity. It is crucial to catch all

> "The check is done ..." ?

Yep.

> > +	 * accept requests to the same unit_size block, even if they don't
> > +	 * overlap on physical address level.
> > +	 */
> > +	list_for_each_entry(entry, &accepting_list, list) {
> > +		if (entry->end < range.start)
> > +			continue;
> > +		if (entry->start >= range.end)
> > +			continue;

> Hmm, we really don't have a macro for ranges_intersect()? Given how easy
> it is to make a mistake. I found only zone_intersects().

I don't know any.

> > +
> > +		/*
> > +		 * Somebody else accepting the range. Or at least part of it.
> > +		 *
> > +		 * Drop the lock and retry until it is complete.
> > +		 */
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> > +		cond_resched();
> > +		goto retry;
> > +	}
> > +
> > +	/*
> > +	 * Register that the range is about to be accepted.
> > +	 * Make sure nobody else will accept it.
> > +	 */
> > +	list_add(&range.list, &accepting_list);
> > +
> > +	range_start = range.start;
> >  	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
> > -				   DIV_ROUND_UP(end, unit_size)) {
> > +				   range.end) {
> >  		unsigned long phys_start, phys_end;
> >  		unsigned long len = range_end - range_start;
> >
> >  		phys_start = range_start * unit_size + unaccepted->phys_base;
> >  		phys_end = range_end * unit_size + unaccepted->phys_base;
> >
> > +		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);

> Hm, so this is bad, AFAICS. We enable IRQs, then an IRQ can come and try to
> accept in the same unit_size block, so it will keep retrying via the goto
> above, and it itself has IRQs disabled, so the cond_resched() will never
> let us finish?

Good catch. Will fix in the next version.

-- 
Kiryl Shutsemau / Kirill A. Shutemov