From: Uladzislau Rezki
Date: Sat, 25 Dec 2021 19:54:12 +0100
To: Manfred Spraul
Cc: LKML, Andrew Morton, Vasily Averin, cgel.zte@gmail.com, shakeelb@google.com, rdunlap@infradead.org, dbueso@suse.de, unixbhaskar@gmail.com, chi.minghao@zte.com.cn, arnd@arndb.de, Zeal Robot, linux-mm@kvack.org, 1vier1@web.de, stable@vger.kernel.org
Subject: Re: [PATCH] mm/util.c: Make kvfree() safe for calling while holding spinlocks
In-Reply-To: <20211222194828.15320-1-manfred@colorfullife.com>
References: <20211222194828.15320-1-manfred@colorfullife.com>
> One codepath in find_alloc_undo() calls kvfree() while holding a spinlock.
> Since vfree() can sleep this is a bug.
>
> Previously, the code path used kfree(), and kfree() is safe to be called
> while holding a spinlock.
>
> Minghao proposed to fix this by updating find_alloc_undo().
>
> Alternate proposal to fix this: Instead of changing find_alloc_undo(),
> change kvfree() so that the same rules as for kfree() apply:
> Having different rules for kfree() and kvfree() just asks for bugs.
>
> Disadvantage: Releasing vmalloc'ed memory will be delayed a bit.
>
I guess the issue is with the "vmap_purge_lock" mutex? I think it is better
to make the vfree() call non-blocking, i.e. the current design suffers from
one drawback: it purges the outstanding lazy areas from the caller's context.
The drain process can be time consuming, and if it is done from a high-prio
or RT context it can hog a CPU. Another issue is the one you have reported:
calling schedule() while holding a spinlock.

The proposal is to perform the drain in a separate work item:

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..7c5d9b148fa4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1717,18 +1717,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1740,6 +1728,16 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area(struct work_struct *work)
+{
+	if (mutex_trylock(&vmap_purge_lock)) {
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+	}
+}
+
+static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1766,7 +1764,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 	/* After this point, we may free va at any time */
 
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		schedule_work(&drain_vmap_area_work);
 }
 
 /*
--
Vlad Rezki